Science.gov

Sample records for integral benchmark archive

  1. Shielding Integral Benchmark Archive and Database (SINBAD)

    SciTech Connect

    Kirk, Bernadette Lugue; Grove, Robert E; Kodeli, I.; Sartori, Enrico; Gulliford, J.

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major effort to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  2. Shielding integral benchmark archive and database (SINBAD)

    SciTech Connect

    Kirk, B.L.; Grove, R.E.; Kodeli, I.; Gulliford, J.; Sartori, E.

    2011-07-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. Nuclear cross sections also play an important role, as they are necessary for performing the computational analysis. (authors)

  3. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    NASA Astrophysics Data System (ADS)

    Kodeli, I.; Sartori, E.; Kirk, B.

    2006-06-01

    SINBAD is an internationally established set of radiation shielding and dosimetry data from experiments relevant to reactor shielding, fusion blanket neutronics, and accelerator shielding. In addition to the characterization of the radiation source, it describes the shielding materials, instrumentation, and relevant detectors. The experimental results, be they dose, reaction rates, or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The primary documents used for the benchmark compilation and evaluation are provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.
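
    Because the results are plain ASCII tables, they can be pulled into any analysis environment with a few lines of code. The sketch below assumes a hypothetical two-column layout (energy and a measured spectrum value); the column names and numbers are illustrative only and are not taken from an actual SINBAD file.

        # Hypothetical SINBAD-style ASCII table read into NumPy arrays.
        # The layout (energy vs. flux) and the values are assumptions for
        # illustration, not the contents of a real SINBAD data set.
        import io
        import numpy as np

        rows = ["# E(MeV)  flux(n/cm2/s/MeV)",
                "1.0e-1    3.2e+4",
                "1.0e+0    1.1e+4",
                "5.0e+0    2.7e+3"]
        energy, flux = np.loadtxt(io.StringIO("\n".join(rows)), unpack=True)
        print(energy, flux)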

  4. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  5. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS. PMID:10342251

  6. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, and to identify exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE that is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  7. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283
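
    For orientation, the two summary statistics quoted above (a top-10 docking success rate and a Pearson correlation between predicted scores and experimental binding energies) can be computed as in the short sketch below. All of the input values are invented placeholders, not benchmark results.

        # Illustrative computation of a top-10 docking success rate and of the
        # Pearson correlation between predicted affinity scores and experimental
        # binding energies. All input values are invented placeholders.
        import numpy as np

        # True if any of the top-10 poses for a case was an acceptable model.
        top10_hit = np.array([True, False, True, True, False, False, True, False])
        success_rate = top10_hit.mean()

        # Hypothetical predicted scores vs. experimental binding free energies.
        predicted    = np.array([-8.1, -6.3, -9.4, -7.0, -5.2, -10.1])
        experimental = np.array([-9.0, -5.8, -9.9, -7.5, -4.9, -11.2])
        r = np.corrcoef(predicted, experimental)[0, 1]

        print(f"top-10 success rate: {success_rate:.0%}, Pearson r = {r:.2f}")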

  8. Benchmark integration test for the Advanced Integration Matrix (AIM)

    NASA Astrophysics Data System (ADS)

    Paul, H.; Labuda, L.

    The Advanced Integration Matrix (AIM) studies and solves systems-level integration issues for exploration missions beyond Low Earth Orbit (LEO) through the design and development of a ground-based facility for developing revolutionary integrated systems for joint human-robotic missions. This systems integration approach to addressing human capability barriers will yield validation of advanced concepts and technologies, establish baselines for further development, and help identify opportunities for system-level breakthroughs. Early ground-based testing of mission capability will identify successful system implementations and operations, hidden risks and hazards, unexpected system and operations interactions, and mission mass and operational savings, and can evaluate solutions to requirements-driving questions, all of which will enable NASA to develop more effective, lower-risk systems and more reliable cost estimates for future missions. This paper describes the first in a series of integration tests proposed for AIM (the Benchmark Test), which will bring in partners and technology, evaluate the study processes of the project, and develop metrics for success.

  9. Melcor benchmarking against integral severe fuel damage tests

    SciTech Connect

    Madni, I.K.

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of those data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This paper summarizes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  10. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  11. Integrating distributed data archives in seismology: the European Integrated waveform Data Archives (EIDA)

    NASA Astrophysics Data System (ADS)

    Sleeman, Reinoud; Hanka, Winfried; Clinton, John; van Eck, Torild; Trani, Luca

    2013-04-01

    ORFEUS is the non-profit foundation that coordinates and promotes digital broadband seismology in Europe. Since 1987 the ORFEUS Data Center (ODC) has been its jointly funded data center. Within the last decade, however, we have seen an exponential growth of high-quality digital waveform data relevant for seismological and general geoscience research. In addition to the rapid expansion in the number and density of broadband seismic networks, this growth is fuelled by data collected from other sensor types (strong motion, short period) and deployment types (aftershock arrays, temporary field campaigns, OBS). As a consequence, ORFEUS revised its data archiving infrastructure and organization; a major component of this revision is the formal establishment of the European Integrated waveform Data Archives (EIDA). Within the NERIES and NERA EC projects, GFZ has taken the lead in developing ArcLink as a tool to provide uniform access to distributed seismological waveform data archives. The new suite of software and services provides the technical basis of EIDA. To ensure that these developments become sustainable, an EIDA group has been formed within ORFEUS. This founding group of EIDA nodes, formed in 2013, will be responsible for steering and maintaining the technical developments and organization of an effective operational distributed waveform data archive for seismology in Europe. The EIDA founding nodes are: ODC/ORFEUS, GEOFON/GFZ/Germany, SED/Switzerland, RESIF/CNRS-INSU/France, INGV/Italy and BGR/Germany. These nodes have committed themselves within ORFEUS to manage EIDA, that is, to maintain and develop EIDA into a stable, sustainable research infrastructure. This task involves a number of challenges with regard to quality and metadata maintenance, as well as the provision of efficient and uncomplicated data access for users. It also includes effective global archive synchronization with developments within the International Federation of Digital Seismograph Networks (FDSN).

  12. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is outlined and discussed.

  13. Montreal Archive of Sleep Studies: an open-access resource for instrument benchmarking and exploratory research.

    PubMed

    O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore

    2014-12-01

    Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. Lacking a common benchmark, the relative performances of different systems cannot be compared easily, and advances are compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring. PMID:24909981

  14. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  15. Dynamic Data Management Based on Archival Process Integration at the Centre for Environmental Data Archival

    NASA Astrophysics Data System (ADS)

    Conway, Esther; Waterfall, Alison; Pepler, Sam; Newey, Charles

    2015-04-01

    In this paper we describe a business process modelling approach to the integration of existing archival activities. We provide a high-level overview of existing practice and discuss how procedures can be extended and supported through the description of preservation state, the aim being to facilitate the dynamic, controlled management of scientific data through its lifecycle. The main types of archival processes considered are:
    • Management processes that govern the operation of an archive, including archival governance (preservation state management, selection of archival candidates, and strategic management).
    • Operational processes that constitute the core activities of the archive and maintain the value of research assets: acquisition, ingestion, deletion, generation of metadata, and preservation activities.
    • Supporting processes, which include planning, risk analysis, and monitoring of the community/preservation environment.
    We then describe the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected the British Atmospheric Data Centre and National Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 PB of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We then present an overview of the risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We conclude by presenting a dynamic data management information model that is capable of describing the preservation state of archival holdings throughout the data lifecycle. We provide discussion of the following core model entities and their relationships:
    • Aspirational entities, which include Data Entity definitions and their associated

  16. AITAS : Assembly Integration Test data Archiving System

    NASA Astrophysics Data System (ADS)

    Meunier, J.-C.; Madec, F.; Vigan, A.; Nowak, M.; Irdis Team

    2012-09-01

    The aim of AITAS is to automatically archive and index data acquired from an instrument during the test and validation phase. The AITAS product was built initially to fill the needs of the IRDIS-SPHERE (ESO-VLT) project to archive and organise data during the test phase. We have developed robust and secure tools to retrieve data from the acquisition workstation, build an archive, and index data by keywords to provide search functionality among large amounts of data. This import of data is done automatically after setting some configuration files. In addition, APIs and a GUI client have been developed in order to retrieve data from a generic interface and use them in the test processing phase. The end user is able to select and retrieve data using any criteria listed in the metadata of the files. One main advantage of this system is that it is intrinsically generic, so that it can be used in instrument projects in astrophysical laboratories without any further modifications.

  17. Beyond Conventional Benchmarking: Integrating Ideal Visions, Strategic Planning, Reengineering, and Quality Management.

    ERIC Educational Resources Information Center

    Kaufman, Roger; Swart, William

    1995-01-01

    Discussion of quality management and approaches to organizational success focuses on benchmarking and the integration of other approaches including strategic planning, ideal visions, and reengineering. Topics include performance improvement; decision making; internal benchmarking; and quality targets for the organization, clients, and societal…

  18. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time, the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  19. Study on Integrated Pest Management for Libraries and Archives.

    ERIC Educational Resources Information Center

    Parker, Thomas A.

    This study addresses the problems caused by the major insect and rodent pests and molds and mildews in libraries and archives; the damage they do to collections; and techniques for their prevention and control. Guidelines are also provided for the development and initiation of an Integrated Pest Management program for facilities housing library…

  20. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    NASA Astrophysics Data System (ADS)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-02-01

    Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of fast improvements in technology; therefore, performance analysis of the banking sector attracts more attention these days. Even though data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement tool and a means of finding benchmarks, it is unable to suggest possible future benchmarks: the benchmarks it provides may still be less efficient than more advanced future ones. To address this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Each branch can then adopt a strategy to improve efficiency and eliminate the causes of inefficiency based on a five-year forecast.
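
    As a point of reference for the DEA side of the approach, the sketch below solves the standard input-oriented CCR multiplier model as a linear program, one decision-making unit (branch) at a time. The branch input/output data are invented, and the neural-network forecasting stage that the paper integrates with DEA is not shown; this is a minimal illustration of classical DEA scoring, not the authors' DEA-ANN pipeline.

        # Minimal sketch of the input-oriented CCR (DEA) multiplier model, solved
        # as a linear program per decision-making unit (DMU). The bank-branch
        # data below are invented; the paper's neural-network forecasting stage
        # is not shown.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[20., 300.], [15., 200.], [30., 400.]])   # inputs:  staff, cost
        Y = np.array([[500., 40.], [350., 30.], [600., 45.]])   # outputs: loans, accounts

        def ccr_efficiency(o, X, Y):
            n, m = X.shape          # number of DMUs, number of inputs
            _, s = Y.shape          # number of outputs
            # variables: output weights u (s), then input weights v (m); maximize u.y_o
            c = np.concatenate([-Y[o], np.zeros(m)])             # linprog minimizes
            A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
            b_eq = [1.0]
            A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
            b_ub = np.zeros(n)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (s + m), method="highs")
            return -res.fun                                      # efficiency in (0, 1]

        for o in range(len(X)):
            print(f"branch {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")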

  1. Conflation and integration of archived geologic maps and associated uncertainties

    USGS Publications Warehouse

    Shoberg, Thomas G.

    2016-01-01

    Old, archived geologic maps are often available with little or no associated metadata. This creates special problems in terms of extracting their data to use with a modern database. This research focuses on some problems and uncertainties associated with conflating older geologic maps in regions where modern geologic maps are, as yet, non-existent as well as vertically integrating the conflated maps with layers of modern GIS data (in this case, The National Map of the U.S. Geological Survey). Ste. Genevieve County, Missouri was chosen as the test area. It is covered by six archived geologic maps constructed in the years between 1928 and 1994. Conflating these maps results in a map that is internally consistent with these six maps, is digitally integrated with hydrography, elevation and orthoimagery data, and has a 95% confidence interval useful for further data set integration.

  2. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-01

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes. PMID:26339862
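
    The comparison such a benchmark performs reduces, at its simplest, to deviation statistics between simulated and experimental property values. The sketch below computes a mean signed error and an RMS error for neat-liquid densities; the compound names and numbers are invented placeholders, not actual GAFF or ThermoML values.

        # Deviation statistics between simulated and experimental neat-liquid
        # densities at ambient pressure. All values are invented placeholders,
        # not actual GAFF/ThermoML results.
        import numpy as np

        compounds = ["ethanol", "toluene", "acetone"]
        rho_exp_sim = {                      # (experimental, simulated) density, g/cm^3
            "ethanol": (0.789, 0.772),
            "toluene": (0.867, 0.851),
            "acetone": (0.784, 0.801),
        }

        errors = np.array([sim - exp for exp, sim in (rho_exp_sim[c] for c in compounds)])
        print("mean signed error: %+.3f g/cm^3" % errors.mean())
        print("RMS error:          %.3f g/cm^3" % np.sqrt((errors ** 2).mean()))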

  3. Reactor benchmarks and integral data testing and feedback into ENDF/B-VI

    SciTech Connect

    McKnight, R.D.; Williams, M.L.

    1992-11-01

    The role of integral data testing and its feedback into the ENDF/B evaluated nuclear data files are reviewed. The use of the CSEWG reactor benchmarks in the data testing process is discussed and selected results based on ENDF/B Version VI data are presented. Finally, recommendations are given to improve the implementation in future integral data testing of ENDF/B.

  4. Reactor benchmarks and integral data testing and feedback into ENDF/B-VI

    SciTech Connect

    McKnight, R.D.; Williams, M.L. (Nuclear Science Center)

    1992-01-01

    The role of integral data testing and its feedback into the ENDF/B evaluated nuclear data files are reviewed. The use of the CSEWG reactor benchmarks in the data testing process is discussed and selected results based on ENDF/B Version VI data are presented. Finally, recommendations are given to improve the implementation in future integral data testing of ENDF/B.

  5. GOLIA: An INTEGRAL archive at INAF-IASF Milano

    NASA Astrophysics Data System (ADS)

    Paizis, A.; Mereghetti, S.; Götz, D.; Fiorini, M.; Gaber, M.; Regni Ponzeveroni, R.; Sidoli, L.; Vercellone, S.

    2013-02-01

    We present the archive of the INTEGRAL data developed and maintained at INAF-IASF Milano. The archive comprises all the public data currently available (revolutions 0026-1079, i.e., December 2002-August 2011). INTEGRAL data are downloaded from the ISDC Data Centre for Astrophysics, Geneva, on a regular basis as they become public, and a customized analysis using the OSA 9.0 software package is routinely performed on the IBIS/ISGRI data. The scientific products include individual pointing images and the associated detected source lists in the 17-30, 30-50, 17-50 and 50-100 keV energy bands, as well as light curves binned over 100 s in the 17-30 keV band for sources of interest. Dedicated scripts to handle such vast datasets and results have been developed. We make the analysis tools used to build such an archive publicly available. The whole database (raw data and products) enables easy access to the long-term hard X-ray behaviour of a large sample of sources.

  6. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  7. Integrated manufacturing approach to attain benchmark team performance

    NASA Astrophysics Data System (ADS)

    Chen, Shau-Ron; Nguyen, Andrew; Naguib, Hussein

    1994-09-01

    A Self-Directed Work Team (SDWT) was developed to transfer a polyimide process module from the research laboratory to our wafer fab facility for applications in IC specialty devices. The SDWT implemented processes and tools based on the integration of five manufacturing strategies for continuous improvement: Leadership Through Quality (LTQ), Total Productive Maintenance (TPM), Cycle Time Management (CTM), Activity-Based Costing (ABC), and Total Employee Involvement (TEI). Utilizing these management techniques simultaneously, the team achieved six-sigma control of all critical parameters, increased Overall Equipment Effectiveness (OEE) from 20% to 90%, reduced cycle time by 95%, cut polyimide manufacturing cost by 70%, and improved its overall team member skill level by 33%.

  8. Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken

    2005-01-01

    The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.

  9. Integral Data Benchmark of HENDL2.0/MG Compared with Neutronics Shielding Experiments

    NASA Astrophysics Data System (ADS)

    Jiang, Jieqiong; Xu, Dezheng; Zheng, Shanliang; He, Zhaozhong; Hu, Yanglin; Li, Jingjing; Zou, Jun; Zeng, Qin; Chen, Mingliang; Wang, Minghuang

    2009-10-01

    HENDL2.0, the latest version of the hybrid evaluated nuclear data library, was developed based upon evaluated data from FENDL2.1 and ENDF/B-VII. To qualify and validate the working library, an integral test of the neutron production data of HENDL2.0 was performed against a series of existing spherical shell benchmark experiments (such as V, Be, Fe, Pb, Cr, Mn, Cu, Al, Si, Co, Zr, Nb, Mo, W and Ti). These experiments were simulated numerically using HENDL2.0/MG and the home-developed code VisualBUS. Calculations were also conducted with FENDL2.1/MG and with FENDL2.1/MC, the latter based on the continuous-energy Monte Carlo code MCNP/4C. By comparison and analysis of the neutron leakage spectra and the integral test, benchmark results for the neutron production data are presented in this paper.
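
    A common way to summarize this kind of integral test is the calculated-to-experimental (C/E) ratio per energy bin of the leakage spectrum, where values near 1.0 indicate agreement. The bin edges and values in the sketch below are invented for illustration; they are not HENDL2.0 or FENDL2.1 results.

        # Calculated-to-experimental (C/E) ratios per energy bin of a leakage
        # spectrum. All bin edges and values are invented for illustration.
        bins_mev     = ["0.1-1", "1-5", "5-10", "10-15"]
        calculated   = [1.02e-3, 8.7e-4, 5.1e-4, 2.3e-4]   # per source neutron
        experimental = [9.80e-4, 9.0e-4, 5.0e-4, 2.6e-4]

        for e, c, x in zip(bins_mev, calculated, experimental):
            print(f"{e:>6} MeV   C/E = {c / x:.2f}")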

  10. PH5 for integrating and archiving different data types

    NASA Astrophysics Data System (ADS)

    Azevedo, Steve; Hess, Derick; Beaudoin, Bruce

    2016-04-01

    PH5 is IRIS PASSCAL's file organization of HDF5 for seismic data. The extensibility and portability of HDF5 allow the PH5 format to evolve and operate on a variety of platforms and interfaces. To make PH5 even more flexible, the seismic metadata are separated from the time-series data in order to achieve gains in performance as well as ease of use, and to simplify user interaction. This separation affords easy updates to metadata after the data are archived, without having to access waveform data. PH5 is currently used for integrating and archiving active source, passive source, and onshore-offshore seismic data sets with the IRIS Data Management Center (DMC). Active development to make PH5 fully compatible with FDSN web services and to deliver StationXML is near completion. We are also exploring the feasibility of utilizing QuakeML for active seismic source representation. The PH5 software suite, PIC KITCHEN, comprises in-field tools that include data ingestion (e.g. RefTek format, SEG-Y, and SEG-D), metadata management tools including QC, and a waveform review tool. These tools enable building archive-ready data in the field during active source experiments, greatly decreasing the time to produce research-ready data sets. Once data are archived, our online request page generates a unique web form and pre-populates much of it based on the metadata provided from the PH5 file. The requester can then intuitively select the extraction parameters as well as the data subsets they wish to receive (current output formats include SEG-Y, SAC, mseed). The web interface passes this on to the PH5 processing tools, which generate the requested seismic data and e-mail the requester a link to the data set automatically as soon as the data are ready. The PH5 file organization was originally designed to hold seismic time-series data and metadata from controlled source experiments using RefTek data loggers. The flexibility of HDF5 has enabled us to extend the use of PH5 in several directions.
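
    The core idea of keeping lightweight metadata separate from bulk time-series arrays inside one HDF5 container can be sketched in a few lines with h5py. The group names, attributes, and numbers below are purely illustrative; they do not reproduce the actual PH5 schema.

        # Minimal h5py sketch: lightweight metadata kept apart from the bulk
        # waveform arrays inside one HDF5 file. Names are illustrative only,
        # not the PH5 schema.
        import numpy as np
        import h5py

        with h5py.File("experiment.h5", "w") as f:
            meta = f.create_group("metadata/stations/STA01")
            meta.attrs["latitude"] = 40.0
            meta.attrs["longitude"] = -105.0
            meta.attrs["sample_rate_hz"] = 250

            wave = f.create_group("waveforms/STA01")
            wave.create_dataset("Z", data=np.zeros(250 * 60, dtype="int32"),
                                compression="gzip")

        # Metadata can later be read or updated without touching the waveforms.
        with h5py.File("experiment.h5", "r+") as f:
            f["metadata/stations/STA01"].attrs["deployment"] = "active-source 2016"
            print(dict(f["metadata/stations/STA01"].attrs))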

  11. Acceptance testing of integrated picture archiving and communications systems.

    PubMed

    Lewis, T E; Horton, M C; Kinsey, T V; Shelton, P D

    1999-05-01

    An integrated picture archiving and communication system (PACS) is a large investment in both money and resources. With all of the components and systems contained in the PACS, a methodical set of protocols and procedures must be developed to test all aspects of the PACS within the short time allocated for contract compliance. For the Department of Defense (DoD), acceptance testing (AT) sets these protocols and procedures. Broken down into modules and test procedures that group like components and systems, the AT protocol maximizes the efficiency and thoroughness of testing all aspects of an integrated PACS. A standardized and methodical protocol reduces the probability of functionality or performance limitations being overlooked. The AT protocol allows complete PACS testing within the 30 days allocated by the digital imaging network (DIN)-PACS contract. AT shortcomings identified during the testing phase allow for proper resolution before complete acceptance of the system. This presentation describes the evolution of the process, the components of the DoD AT protocol, the benefits of the AT process, and its significance to the successful implementation of a PACS. This is a US government work. There are no restrictions on its use. PMID:10342200

  12. Solution of the WFNDEC 2015 eddy current benchmark with surface integral equation method

    NASA Astrophysics Data System (ADS)

    Demaldent, Edouard; Miorelli, Roberto; Reboud, Christophe; Theodoulidis, Theodoros

    2016-02-01

    In this paper, a numerical solution of the WFNDEC 2015 eddy current benchmark is presented. In particular, the Surface Integral Equation (SIE) method has been employed to solve the benchmark problem numerically. The SIE method represents an effective and efficient alternative to standard numerical solvers like the Finite Element Method (FEM) when electromagnetic fields need to be calculated in problems involving homogeneous media. The SIE formulation solves the electromagnetic problem by meshing only the surfaces of the media instead of the complete media volume, as done in FEM. The surface meshing describes the problem with a smaller number of unknowns than FEM, and this property translates directly into a clear gain in CPU time.

  13. An Integrative Approach to Archival Outreach: A Case Study of Becoming Part of the Constituents' Community

    ERIC Educational Resources Information Center

    Rettig, Patricia J.

    2007-01-01

    Archival outreach, an essential activity for any repository, should focus on what constituents are already doing and capitalize on existing venues related to the repository's subject area. The Water Resources Archive at Colorado State University successfully undertook this integrative approach to outreach. Detailed in the article are outreach…

  14. Progress of Integral Experiments in Benchmark Fission Assemblies for a Blanket of Hybrid Reactor

    NASA Astrophysics Data System (ADS)

    Liu, R.; Zhu, T. H.; Yan, X. S.; Lu, X. X.; Jiang, L.; Wang, M.; Han, Z. J.; Wen, Z. W.; Lin, J. F.; Yang, Y. W.

    2014-04-01

    This article describes recent progress in integral neutronics experiments in benchmark fission assemblies for the blanket design of a hybrid reactor. The spherical assemblies consist of three layers of depleted uranium shells and several layers of polyethylene shells, separately. With the D-T neutron source placed at the centre of the assemblies, the plutonium production rates, uranium fission rates, and leakage neutron spectra are measured. The measured results are compared to those calculated with the MCNP-4B code and ENDF/B-VI library data.

  15. Integration experiences and performance studies of a COTS parallel archive system

    SciTech Connect

    Chen, Hsing-bung; Scott, Cody; Grider, Gary; Torres, Aaron; Turley, Milton; Sanchez, Kathy; Bremer, John

    2010-01-01

    Current and future archive storage systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting the changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing, but at one or more orders of magnitude faster performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speed, adopting techniques such as more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially solutions that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS parallel archive system to the world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future
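
    The notion of moving one large striped file through many parallel streams can be illustrated with a toy sketch: the file is split into byte ranges and each range is copied to its own archive segment by a separate worker. This stands in for tape-parallel movement only; it is not the LANL implementation, and the function and path names are made up.

        # Toy sketch of parallel movement of a single large file: split it into
        # byte ranges and copy each range to its own archive segment in a
        # separate worker. Function and path names are hypothetical.
        import os
        from concurrent.futures import ThreadPoolExecutor

        def copy_segment(src, seg_dir, index, offset, length, bufsize=4 << 20):
            seg_path = os.path.join(seg_dir, f"segment_{index:04d}")
            with open(src, "rb") as fin, open(seg_path, "wb") as fout:
                fin.seek(offset)
                remaining = length
                while remaining > 0:
                    chunk = fin.read(min(bufsize, remaining))
                    if not chunk:
                        break
                    fout.write(chunk)
                    remaining -= len(chunk)
            return seg_path

        def parallel_archive(src, seg_dir, nworkers=4):
            os.makedirs(seg_dir, exist_ok=True)
            size = os.path.getsize(src)
            stripe = (size + nworkers - 1) // nworkers
            with ThreadPoolExecutor(max_workers=nworkers) as pool:
                futures = [pool.submit(copy_segment, src, seg_dir, i, i * stripe,
                                       min(stripe, size - i * stripe))
                           for i in range(nworkers) if i * stripe < size]
                return [f.result() for f in futures]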

  16. Integration experiments and performance studies of a COTS parallel archive system

    SciTech Connect

    Chen, Hsing-bung; Scott, Cody; Grider, Gary; Torres, Aaron; Turley, Milton; Sanchez, Kathy; Bremer, John

    2010-06-16

    Current and future archive storage systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting the changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing, but at one or more orders of magnitude faster performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speed, adopting techniques such as more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially solutions that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS parallel archive system to the world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of

  17. Picture Archiving and Communication System (PACS) implementation, integration & benefits in an integrated health system.

    PubMed

    Mansoori, Bahar; Erhard, Karen K; Sunshine, Jeffrey L

    2012-02-01

    The availability of the Picture Archiving and Communication System (PACS) has revolutionized the practice of radiology in the past two decades and has been shown to eventually increase productivity in radiology and medicine. PACS implementation and integration may bring along numerous unexpected issues, particularly in a large-scale enterprise. To achieve a successful PACS implementation, identifying the critical success and failure factors is essential. This article provides an overview of the process of implementing and integrating PACS in a comprehensive health system comprising an academic core hospital and numerous community hospitals. Important issues are addressed, touching all stages from planning to operation and training. The impact of an enterprise-wide radiology information system and PACS at the academic medical center (four specialty hospitals), in six additional community hospitals, and in all associated outpatient clinics, as well as the implications for the productivity and efficiency of the entire enterprise, are presented. PMID:22212425

  18. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multi-facility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selected outcomes thus far.

  19. Integral Reactor Physics Benchmarks - the International Criticality Safety Benchmark Evaluation Project (icsbep) and the International Reactor Physics Experiment Evaluation Project (irphep)

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair; Nigg, David W.; Sartori, Enrico

    2006-04-01

    Since the beginning of the nuclear industry, thousands of integral experiments related to reactor physics and criticality safety have been performed. Many of these experiments can be used as benchmarks for validation of calculational techniques and improvements to nuclear data. However, many were performed in direct support of operations and thus were not performed with a high degree of quality assurance and were not well documented. For years, common validation practice included the tedious process of researching integral experiment data scattered throughout journals, transactions, reports, and logbooks. Two projects have been established to help streamline the validation process and preserve valuable integral data: the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP). The two projects are closely coordinated to avoid duplication of effort and to leverage limited resources to achieve a common goal. A short history of these two projects and their common purpose are discussed in this paper. Accomplishments of the ICSBEP are highlighted and the future of the two projects outlined.

  20. ‘Wasteaware’ benchmark indicators for integrated sustainable waste management in cities

    SciTech Connect

    Wilson, David C.; Rodic, Ljiljana; Cowing, Michael J.; Velis, Costas A.; Whiteman, Andrew D.; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-15

    Highlights:
    • Solid waste management (SWM) is a key utility service, but data is often lacking.
    • Measuring its SWM performance helps a city establish priorities for action.
    • The Wasteaware benchmark indicators measure both technical and governance aspects.
    • They have been developed over 5 years and tested in more than 50 cities on 6 continents.
    • They enable consistent comparison between cities and countries and monitoring of progress.
    Abstract: This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city’s performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat’s solid waste management in the World’s cities. The comprehensive analytical framework of a city’s solid waste management system is divided into two overlapping ‘triangles’ – one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised ‘Wasteaware’ set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both ‘hard’ physical components and ‘soft’ governance aspects, and in prioritising ‘next steps’ in developing a city’s solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators

  1. Fault tolerance techniques to assure data integrity in high-volume PACS image archives

    NASA Astrophysics Data System (ADS)

    He, Yutao; Huang, Lu J.; Valentino, Daniel J.; Wingate, W. Keith; Avizienis, Algirdas

    1995-05-01

    Picture archiving and communication systems (PACS) perform the systematic acquisition, archiving, and presentation of large quantities of radiological image and text data. In the UCLA Radiology PACS, for example, the volume of image data archived currently exceeds 2500 gigabytes. Furthermore, the distributed, heterogeneous PACS is expected to have near real-time response, be continuously available, and assure the integrity and privacy of patient data. The off-the-shelf subsystems that compose the current PACS cannot meet these expectations; therefore, fault tolerance techniques had to be incorporated into the system. This paper reports our first-step efforts toward this goal and is organized as follows: first, we discuss data integrity and identify fault classes in the PACS operational environment; then we describe the auditing and accounting schemes developed for error detection and analyze the operational data collected. Finally, we outline plans for future research.
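
    The paper's own auditing and accounting schemes are not reproduced here, but the basic error-detection idea they rely on can be sketched as a checksum audit: record a cryptographic digest of each archived image at ingest time and re-verify it later. The file and manifest names below are hypothetical.

        # Checksum-based audit for archive integrity: record a digest at ingest
        # time and re-verify it on audit. Illustrative only; this is not the
        # UCLA PACS auditing scheme, and the manifest name is made up.
        import hashlib, json, os

        MANIFEST = "archive_manifest.json"

        def digest(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(bufsize), b""):
                    h.update(block)
            return h.hexdigest()

        def ingest(paths):
            manifest = {p: digest(p) for p in paths}
            with open(MANIFEST, "w") as f:
                json.dump(manifest, f, indent=2)

        def audit():
            with open(MANIFEST) as f:
                manifest = json.load(f)
            for path, expected in manifest.items():
                if not os.path.exists(path):
                    print(f"MISSING   {path}")
                elif digest(path) != expected:
                    print(f"CORRUPTED {path}")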

  2. MAO NAS of Ukraine Plate Archives: Towards the WFPDB Integration

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.; Golovnya, V. V.; Yizhakevych, E. M.; Kizyun, L. N.; Pakuliak, L. K.; Shatokhina, S. V.; Tsvetkov, M. K.; Tsvetkova, K. P.; Sergeev, A. V.

    2006-04-01

    The plate archives of the Main Astronomical Observatory (Golosyiv, Kyiv) include about 85 000 plates taken for various astronomical projects in the period 1950-2005. Among them are more than 60 000 plates containing spectra of stellar, planetary, and active solar formations, and more than 20 000 direct plates of northern sky areas (mostly wide-field). The catalogues of these direct wide-field plates have been prepared in computer-readable form. They have now been reduced to the WFPDB format and included in the database.

  3. Students Teaching Texts to Students: Integrating LdL and Digital Archives

    ERIC Educational Resources Information Center

    Stymeist, David

    2015-01-01

    The arrival of the digital age has not only reshaped and refocused critical research in the humanities, but has provided real opportunities to innovate with pedagogy and classroom structure. This article describes the development of a new pedagogical model that integrates learning by teaching with student access to electronic archival resources.…

  4. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
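
    For orientation, the single-level term that an SLBW reconstruction sums over sampled resonances has the textbook Breit-Wigner form shown below (capture channel). It is given only as a reminder of the notation, not as the ENDF-6 processing equations used by the authors; the MLBW formalism additionally introduces level-level interference terms in the elastic channel.

        \sigma_{\gamma}(E) \;=\; \frac{\pi}{k^{2}}\, g_J\,
        \frac{\Gamma_{n}(E)\,\Gamma_{\gamma}}{\left(E - E_{r}\right)^{2} + \Gamma^{2}/4}

    Here k is the neutron wave number, g_J the spin statistical factor, E_r the resonance energy, and Γ_n, Γ_γ, and Γ the neutron, capture, and total widths.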

  5. Water level ingest, archive and processing system - an integral part of NOAA's tsunami database

    NASA Astrophysics Data System (ADS)

    McLean, S. J.; Mungov, G.; Dunbar, P. K.; Price, D. J.; Mccullough, H.

    2013-12-01

    The National Oceanic and Atmospheric Administration (NOAA), National Geophysical Data Center (NGDC) and collocated World Data Service for Geophysics (WDS) provide long-term archive, data management, and access to national and global tsunami data. Archive responsibilities include the NOAA Global Historical Tsunami event and runup database, damage photos, and other related hazards data. Beginning in 2008, NGDC was given the responsibility of archiving, processing, and distributing, in a coordinated and consistent manner, all tsunami and hazards-related water level data collected from NOAA observational networks. These data include the Deep-ocean Assessment and Reporting of Tsunami (DART) data provided by the National Data Buoy Center (NDBC), coastal-tide-gauge data from the National Ocean Service (NOS) network, and tide-gauge data from the regional networks of the two National Weather Service (NWS) Tsunami Warning Centers (TWCs). Taken together, this integrated archive supports the tsunami forecast, warning, research, mitigation, and education efforts of NOAA and the Nation. Because of the variety of the water level data, the automatic ingest system was redesigned, and the inventory, archive, and delivery capabilities were upgraded following modern digital data archiving practices. The data processing system was also upgraded and redesigned with a focus on operational data quality assessment. This poster focuses on data availability, highlighting the automation of all steps of data ingest, archiving, processing, and distribution. Examples are given from recent events such as Hurricane Sandy in October 2012, the February 6, 2013 Solomon Islands tsunami, and the June 13, 2013 meteotsunami along the U.S. East Coast.

  6. Picture archiving and communications systems for integrated healthcare information solutions

    NASA Astrophysics Data System (ADS)

    Goldburgh, Mitchell M.; Glicksman, Robert A.; Wilson, Dennis L.

    1997-05-01

    The rapid and dramatic shifts within the US healthcare industry have created unprecedented needs to implement changes in the delivery systems. These changes must address not only access to healthcare but also the costs of delivery and outcomes reporting. The resulting vision to address these needs has been called the Integrated Healthcare Solution, whose core is the Electronic Patient Record. The integration of information by itself is not the issue, nor will it address the challenges in front of healthcare providers. The process and business of healthcare delivery must adopt, apply, and expand its use of technology that can assist in re-engineering the tools for healthcare. Imaging is becoming a larger part of the practice of healthcare, both as a recorder of health status and as a defensive record for gatekeepers of healthcare. It is thus imperative that imaging specialists adopt technology that competitively integrates them into the process, reduces the risk, and positively affects the outcome.

  7. An integrated picture archiving and communications system-radiology information system in a radiological department.

    PubMed

    Wiltgen, M; Gell, G; Graif, E; Stubler, S; Kainz, A; Pitzler, R

    1993-02-01

    In this report we present an integrated picture archiving and communication system (PACS)-radiology information system (RIS) which runs as part of the daily routine in the Department of Radiology at the University of Graz. Although the PACS and the RIS have been developed independently, the two systems are interfaced to ensure a unified and consistent long-term archive. The configuration connects four computed tomography scanners (one of them situated at a distance of 1 km), a magnetic resonance imaging scanner, a digital subtraction angiography unit, an evaluation console, a diagnostic console, an image display console, an archive with two optical disk drives, and several RIS terminals. The configuration allows the routine archiving of all examinations on optical disks independent of reporting. The management of the optical disks is performed by the RIS. Images can be selected for retrieval via the RIS by using patient identification or medical criteria. A special software process (PACS-MONITOR) enables the user to survey and manage image communication, archiving, and retrieval, to get information about the status of the system at any time, and to handle the different procedures in the PACS. The system is active 24 hours a day. To make PACS operation as independent as possible of the permanent presence of a system manager (electronic data processing expert), a rule-based expert system (OPERAS; OPERating ASsistant) is used to localize and eliminate malfunctions that occur during routine work. The PACS-RIS reduces labor and speeds access to images within radiology and clinical departments. PMID:8439578

  8. Impedance spectroscopy for detection of mold in archives with an integrated reference measurement

    NASA Astrophysics Data System (ADS)

    Papireddy Vinayaka, P.; Van Den Driesche, S.; Janssen, S.; Frodl, M.; Blank, R.; Cipriani, F.; Lang, W.; Vellekoop, M. J.

    2015-06-01

    In this work, we present a new miniaturized culture-medium-based sensor system in which we apply an optical reference in an impedance measurement approach for the detection of mold in archives. The designed sensor comprises a chamber with pre-loaded culture medium that promotes the growth of archive mold species. Growth of mold is detected with integrated electrodes by measuring changes in the impedance of the culture medium caused by an increase in pH (from 5.5 to 8). Integration of the reference measurement helps in determining the sensitivity of the sensor. The colorimetric principle serves as a reference measurement that indicates a pH change, after which further pH shifts can be determined using the impedance measurement. In this context, some of the major archive mold species Eurotium amstelodami, Aspergillus penicillioides and Aspergillus restrictus have been successfully analyzed on-chip. Growth of Eurotium amstelodami shows a proportional impedance change of 10% per day (12 chips tested), with a sensitivity of 0.6 kΩ/pH unit.
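
    As a rough plausibility check (our arithmetic, not reported by the authors), the stated sensitivity and pH range imply a total impedance shift of roughly

        \Delta Z \approx (8.0 - 5.5)\ \text{pH units} \times 0.6\ \text{k}\Omega/\text{pH} \approx 1.5\ \text{k}\Omega

    accumulated over the course of mold growth.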

  9. Land cover data from Landsat single-date archive imagery: an integrated classification approach

    NASA Astrophysics Data System (ADS)

    Bajocco, Sofia; Ceccarelli, Tomaso; Rinaldo, Simone; De Angelis, Antonella; Salvati, Luca; Perini, Luigi

    2012-10-01

    The analysis of land cover dynamics provides insight into many environmental problems. However, there are few data sources from which consistent time series can be derived, remote sensing being one of the most valuable. Because of the multi-temporal and spatial coverage required, such analyses are usually based on large land cover datasets, which call for automated, objective, and repeatable procedures. The USGS Landsat archives provide free access to multispectral, high-resolution remotely sensed data starting from the mid-eighties; in many cases, however, only single-date images are available. This paper proposes an objective approach for generating land cover information from 30 m resolution, single-date Landsat archive imagery. A procedure was developed that integrates pixel-based and object-oriented classifiers and consists of the following basic steps: i) pre-processing of the satellite image, including radiance and reflectance calibration, texture analysis, and derivation of vegetation indices; ii) segmentation of the pre-processed image; iii) classification integrating both radiometric and textural properties. The integrated procedure was tested for an area in the Sardinia Region, Italy, and compared with a purely pixel-based one. Results demonstrated that a better overall accuracy, evaluated against the available land cover cartography, was obtained with the integrated approach (86%) than with the pixel-based classification (68%) at the first CORINE Land Cover level. The proposed methodology needs to be further tested to evaluate its transferability in time (constructing comparable land cover time series) and space (covering larger areas).
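
    As an illustration of the pre-processing step described above (vegetation index and texture derivation), a minimal NumPy/SciPy sketch is given below; the band arrays, window size, and function names are illustrative assumptions, not the authors' code.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index from reflectance bands."""
            return (nir - red) / (nir + red + eps)

        def local_variance(band, size=3):
            """Crude texture measure: variance in a size x size moving window."""
            mean = uniform_filter(band, size=size)
            mean_sq = uniform_filter(band * band, size=size)
            return mean_sq - mean * mean

        # toy reflectance arrays standing in for calibrated Landsat bands
        red = np.random.rand(100, 100).astype(np.float32)
        nir = np.random.rand(100, 100).astype(np.float32)

        # stacked features feed the segmentation and classification steps
        features = np.dstack([ndvi(nir, red), local_variance(red)])
        print(features.shape)  # (100, 100, 2)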

  10. An XMM-Newton Science Archive for next decade, and its integration into ESASky

    NASA Astrophysics Data System (ADS)

    Loiseau, N.; Baines, D.; Rodriguez, P.; Salgado, J.; Sarmiento, M.; Colomo, E.; Merin, B.; Giordano, F.; Racero, E.; Migliari, S.

    2016-06-01

    We will present a roadmap for the next decade of improvements to the XMM-Newton Science Archive (XSA), planned to provide ever faster and more user-friendly access to all XMM-Newton data. This plan includes the integration of the Upper Limit server, interactive visualization of EPIC and RGS spectra, and on-the-fly data analysis, among other advanced features. Within this philosophy, XSA is also being integrated into ESASky, the science-driven discovery portal for all ESA astronomy missions. A first public beta of the ESASky service was released at the end of 2015. It currently features an interface for exploration of the multi-wavelength sky and for single and/or multiple target searches of science-ready data. The system offers progressive multi-resolution all-sky projections of full mission datasets using a new generation of HEALPix projections called HiPS, developed at the CDS; detailed geometrical footprints to connect the all-sky mosaics to individual observations; and direct access to science-ready data at the underlying mission-specific science archives. New XMM-Newton EPIC and OM all-sky HiPS maps, catalogues, and links to the observations are available through ESASky, together with INTEGRAL, HST, Herschel, Planck, and other future data.

  11. Three Dimensional, Integrated Characterization and Archival System for Remote Facility Contaminant Characterization

    SciTech Connect

    Barry, R.E.; Gallman, P.; Jarvis, G.; Griffiths, P.

    1999-04-25

    The largest problem facing the Department of Energy's Office of Environmental Management (EM) is the cleanup of the Cold War legacy nuclear production plants that were built and operated from the mid-forties through the late eighties. EM is now responsible for the remediation of no less than 353 projects at 53 sites across the country, at an estimated cost of $147 billion over the next 72 years. One of the keys to accomplishing a thorough cleanup of any site is a rigorous but quick contaminant characterization capability. If the contaminants present in a facility can be mapped accurately, the cleanup can proceed with surgical precision, using appropriate techniques for each contaminant type and location. The three dimensional, integrated characterization and archival system (3D-ICAS) was developed for rapid, field-level identification, mapping, and archiving of contaminant data. The system consists of three subsystems: an integrated work and operating station, a 3-D coherent laser radar, and a contaminant analysis unit. Target contaminants that can be identified include chemical (currently organic only), radiological, and base materials (asbestos). In operation, two steps are required. First, the remotely operable 3-D laser radar maps an area of interest in the spatial domain. Second, the remotely operable contaminant analysis unit maps the area of interest in the chemical, radiological, and base material domains. The resultant information is formatted for display and archived using an integrated workstation. A 3-D model of the merged spatial and contaminant domains can be displayed along with a color-coded contaminant tag at each analysis point. In addition, all of the supporting detailed data are archived for subsequent QC checks. The 3D-ICAS system is capable of performing all contaminant characterization in a dwell time of 6 seconds. The radiological and chemical sensors operate at US Environmental Protection Agency regulatory levels. Base
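
    A minimal sketch of the kind of merged spatial/contaminant record such an archive might hold is shown below; the field names, units, and color thresholds are illustrative assumptions, not the actual 3D-ICAS schema.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class AnalysisPoint:
            x_m: float             # coordinates from the 3-D laser radar map
            y_m: float
            z_m: float
            organics_ppm: float    # contaminant analysis unit readings
            activity_bq_cm2: float
            base_material: str     # e.g. "concrete", "asbestos", "transite"

            def tag_color(self) -> str:
                """Color-code the point for display (thresholds are illustrative)."""
                if self.activity_bq_cm2 > 1.0 or self.organics_ppm > 10.0:
                    return "red"
                return "green"

        point = AnalysisPoint(1.2, 3.4, 0.0, organics_ppm=2.5,
                              activity_bq_cm2=0.1, base_material="concrete")
        print(json.dumps({**asdict(point), "tag": point.tag_color()}))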

  12. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different (MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.
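
    A common way to summarize this kind of code-versus-experiment comparison is the calculated-to-experimental (C/E) ratio; the sketch below shows the bookkeeping with made-up numbers, not SINBAD data.

        # toy comparison of two codes against a measured quantity per benchmark case
        measured = {"Ni_sphere": 1.00, "Al_sphere": 0.95, "FNS_LOX": 1.10}
        cog      = {"Ni_sphere": 1.02, "Al_sphere": 0.93, "FNS_LOX": 1.12}
        mcnp     = {"Ni_sphere": 1.01, "Al_sphere": 0.94, "FNS_LOX": 1.13}

        for case in measured:
            ce_cog = cog[case] / measured[case]
            ce_mcnp = mcnp[case] / measured[case]
            spread = 100.0 * (cog[case] - mcnp[case]) / mcnp[case]
            print(f"{case}: C/E(COG)={ce_cog:.3f}  C/E(MCNP)={ce_mcnp:.3f}  "
                  f"COG vs MCNP = {spread:+.1f}%")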

  13. Supporting users through integrated retrieval, processing, and distribution systems at the land processes distributed active archive center

    USGS Publications Warehouse

    Kalvelage, T.; Willems, Jennifer

    2003-01-01

    The design of the EOS Data and Information System (EOSDIS) to acquire, archive, manage, and distribute Earth observation data to the broadest possible user community is discussed. Several integrated retrieval, processing, and distribution capabilities are explained, the value of these functions to users is described, and potential future improvements are laid out. Users want the retrieval, processing, and archiving systems to be integrated so that they can get the data they want in the format and through the delivery mechanism of their choice.

  14. Benchmarking the OLGA lower-hybrid full-wave code for a future integration with ALOHA

    NASA Astrophysics Data System (ADS)

    Preinhaelter, J.; Hillairet, J.; Urban, J.

    2014-02-01

    The ALOHA [1] code is frequently used as a standard to solve the coupling of lower hybrid grills to the plasma. To remove its limitations (a linear density profile, a homogeneous magnetic field, and fully decoupled fast and slow waves in the determination of the plasma surface admittance), we exploit the recently developed efficient full-wave code OLGA [2]. There is a simple connection between these two codes: the plasma surface admittances used in ALOHA-2D can be expressed as the slowly varying parts of the coupling-element integrands in OLGA, and the ALOHA coupling elements are then linear combinations of OLGA coupling elements. We developed the AOLGA module (a subset of OLGA) for ALOHA. An extensive benchmark has been performed. ALOHA admittances differ from AOLGA results mainly for N∥ in the inaccessible region, but the coupling elements differ only slightly. We compare OLGA and ALOHA for a simple 10-waveguide grill operating at 3.7 GHz with the linear density profile as used in ALOHA. Hence we can isolate the effect of fast-slow wave coupling on grill efficiency. The effects are weak for parameters near optimum coupling and confirm the validity of the ALOHA results.
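
    The connection stated above can be written schematically (our notation; the papers' definitions may differ in detail): if K^{OLGA}_{mn} are the OLGA coupling elements, the ALOHA coupling elements are linear combinations

        K^{ALOHA}_{ij} = \sum_{m,n} c^{ij}_{mn} \, K^{OLGA}_{mn},

    with the ALOHA-2D plasma surface admittances recovered from the slowly varying parts of the OLGA coupling-element integrands.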

  15. Evaluation for 4S core nuclear design method through integration of benchmark data

    SciTech Connect

    Nagata, A.; Tsuboi, Y.; Moriki, Y.; Kawashima, M.

    2012-07-01

    The 4S is a small sodium-cooled fast reactor that is reflector-controlled for operation over a core lifetime of about 30 years. The nuclear design method was selected to treat neutron leakage with high accuracy. It consists of a continuous-energy Monte Carlo code, discrete ordinates transport codes, and JENDL-3.3. These two types of neutronic analysis codes are used for the design in a complementary manner. The accuracy of the codes has been evaluated by analysis of benchmark critical experiments and experimental reactor data. The measured data used for the evaluation are critical experiments from FCA XXIII (a physics mockup assembly of the 4S core), FCA XVI, FCA XIX, and ZPR, together with data from the experimental reactor JOYO MK-1. Evaluated characteristics are criticality, reflector reactivity worth, power distribution, absorber reactivity worth, and sodium void worth. A multi-component bias method was applied, especially to improve the accuracy of the sodium void reactivity worth. As a result, it has been confirmed that the 4S core nuclear design method provides good accuracy, and typical bias factors and their uncertainties were determined. (authors)
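
    For reference, a bias factor in this kind of validation is commonly built from the calculated-to-experimental ratio (a generic definition, not necessarily the authors' exact formulation):

        f = C / E,  \qquad  R_{design}^{corrected} = R_{design}^{calc} / f,

    and a multi-component bias method combines several such factors (e.g., one per reactivity component or core region) together with their propagated uncertainties.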

  16. Laser produced plasma sources for nanolithography—Recent integrated simulation and benchmarking

    SciTech Connect

    Hassanein, A.; Sizyuk, T.

    2013-05-15

    Photon sources for extreme ultraviolet lithography (EUVL) still face challenging problems in achieving high volume manufacturing in the semiconductor industry. The requirements for high EUV power, longer lifetime of the optical system and components, and efficient mechanisms for target delivery have focused investigators on the development and optimization of dual-pulse laser sources with a high repetition rate of small liquid tin droplets and on the use of multi-layer mirror optical systems for collecting EUV photons. We comprehensively simulated laser-produced plasma sources in a full 3D configuration using 10–50 μm tin droplet targets, both as single droplets and, for the first time, as distributed fragmented microdroplets of equivalent mass. The latter is intended to examine the effects of droplet fragmentation resulting from the first pulse and prior to the incident second (main) laser pulse. We studied the dependence of the EUV radiation output and of atomic and ionic debris generation on the target mass and size, the laser parameters, and the dual-pulse system configuration. Our modeling and simulation included all phases of laser target evolution: laser/droplet interaction, energy deposition, target vaporization, ionization, plasma hydrodynamic expansion, thermal and radiation energy redistribution, and EUV photon collection, as well as detailed mapping of the photon source size and location. We also simulated and predicted the potential damage to the optical mirror collection system from plasma thermal and energetic debris and the requirements for mitigation systems to reduce debris fluence. The debris effect on the mirror collection system is analyzed using our three-dimensional ITMC-DYN Monte Carlo package. Modeling results were benchmarked against our CMUXE laboratory experimental studies of EUV photon production and of debris and ion generation.

  17. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  18. Three-Dimensional Integrated Characterization and Archiving System (3D-ICAS). Phase 1

    SciTech Connect

    1994-07-01

    3D-ICAS is being developed to support Decontamination and Decommissioning operations for DOE addressing Research Area 6 (characterization) of the Program Research and Development Announcement. 3D-ICAS provides in-situ 3-dimensional characterization of contaminated DOE facilities. Its multisensor probe contains a GC/MS (gas chromatography/mass spectrometry using noncontact infrared heating) sensor for organics, a molecular vibrational sensor for base material identification, and a radionuclide sensor for radioactive contaminants. It will provide real-time quantitative measurements of volatile organics and radionuclides on bare materials (concrete, asbestos, transite); it will provide 3-D display of the fusion of all measurements; and it will archive the measurements for regulatory documentation. It consists of two robotic mobile platforms that operate in hazardous environments linked to an integrated workstation in a safe environment.

  19. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we
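
    The radiocarbon-derived turnover times mentioned above follow, at steady state, the standard first-order relation (a generic relation, not specific to the model described):

        dC/dt = I - C/\tau,  \qquad  \tau = C_{stock} / F_{out}  \text{ at steady state},

    so observed depth-resolved carbon stocks and their 14C ages jointly constrain the turnover time profile \tau(z).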

  20. THREE DIMENSIONAL INTEGRATED CHARACTERIZATION AND ARCHIVING SYSTEM (3D-ICAS)

    SciTech Connect

    George Jarvis

    2001-06-18

    The overall objective of this project is to develop an integrated system that remotely characterizes, maps, and archives measurement data of hazardous decontamination and decommissioning (D&D) areas. The system will generate a detailed 3-dimensional topography of the area as well as real-time quantitative measurements of volatile organics and radionuclides. The system will analyze substrate materials consisting of concrete, asbestos, and transite. The system will permanently archive the data measurements for regulatory and data integrity documentation. Exposure limits, rest breaks, and the donning and removal of protective garments generate waste in the form of contaminated protective garments and equipment; survey times are increased; and handling and transporting potentially hazardous materials incurs additional costs. Off-site laboratory analysis is expensive and time-consuming, often necessitating delay of further activities until results are received. The Three Dimensional Integrated Characterization and Archiving System (3D-ICAS) has been developed to alleviate some of these problems. 3D-ICAS provides a flexible system for physical, chemical, and nuclear measurements that reduces costs and improves data quality. Operationally, 3D-ICAS performs real-time determinations of hazardous and toxic contamination. A prototype demonstration unit became available for use in early 2000. The tasks in this phase included: (1) Mobility Platforms: integrate hardware onto mobility platforms, upgrade surface sensors, develop unit operations and protocol. (2) System Developments: evaluate metals detection capability using x-ray fluorescence technology. (3) IWOS Upgrades: upgrade the IWOS software and hardware for compatibility with the mobility platform. The system was modified, tested, and debugged during 1999 and 2000. The 3D-ICAS was shipped on 11 May 2001 to FIU-HCET for demonstration and validation of the design modifications. These modifications included simplifying the design from a two

  1. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Division at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the tests were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  2. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria for which they wish to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook has a structured format that helps the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity of performing multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A users' interface was designed by the OECD and DOE to allow interrogation of this database. The database and the corresponding users' interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form, and spectra, and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.
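
    A minimal sketch of the kind of multiple-criteria search such a relational database enables is given below; the schema and column names are invented for illustration and are not the actual DICE schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE benchmarks (
                         case_id TEXT, fuel TEXT, physical_form TEXT,
                         moderator TEXT, reflector TEXT, keff REAL)""")
        con.executemany("INSERT INTO benchmarks VALUES (?,?,?,?,?,?)", [
            ("HEU-MET-FAST-001",   "HEU", "metal",    "none",  "none",  1.0000),
            ("LEU-COMP-THERM-008", "LEU", "compound", "water", "water", 0.9998),
        ])

        # multiple-criteria search: fuel type AND moderator AND reflector
        rows = con.execute(
            """SELECT case_id, keff FROM benchmarks
               WHERE fuel = ? AND moderator = ? AND reflector = ?""",
            ("LEU", "water", "water")).fetchall()
        print(rows)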

  3. The cognitive processing of politics and politicians: archival studies of conceptual and integrative complexity.

    PubMed

    Suedfeld, Peter

    2010-12-01

    This article reviews over 30 years of research on the role of integrative complexity (IC) in politics. IC is a measure of the cognitive structure underlying information processing and decision making in a specific situation and time of interest to the researcher or policymaker. As such, it is a state counterpart of conceptual complexity, the trait (transsituationally and transtemporally stable) component of cognitive structure. In the beginning (the first article using the measure was published in 1976), most of the studies were by the author or his students (or both), notably Philip Tetlock; more recently, IC has attracted the attention of a growing number of political and social psychologists. The article traces the theoretical development of IC; describes how the variable is scored in archival or contemporary materials (speeches, interviews, memoirs, etc.); discusses possible influences on IC, such as stress, ideology, and official role; and presents findings on how measures of IC can be used to forecast political decisions (e.g., deciding between war and peace). Research on the role of IC in individual success and failure in military and political leaders is also described. PMID:21039528

  4. Virtual Globes and Glacier Research: Integrating research, collaboration, logistics, data archival, and outreach into a single tool

    NASA Astrophysics Data System (ADS)

    Nolan, M.

    2006-12-01

    Virtual Globes represent a paradigm shift in the way earth sciences are conducted. With these tools, nearly all aspects of earth science can be integrated, from field science to remote sensing, remote collaboration, logistical planning, data archival/retrieval, retrieval of papers in PDF form, and education and outreach. Here we present an example of how VGs can be fully exploited for field sciences, using research at McCall Glacier in Arctic Alaska.

  5. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    PubMed

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dosing strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins, or lipids, with good predictive capability in all three cases. The model was then applied to a plant-wide simulation study, which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. PMID:27088248
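
    Long chain fatty acid (LCFA) inhibition of the kind added to ADM1 here is often represented by a non-competitive factor applied to the uptake rates (a commonly used form, not necessarily the authors' exact expression):

        \rho_j = k_{m,j} \frac{S_j}{K_{S,j} + S_j} X_j \cdot I_{LCFA},
        \qquad I_{LCFA} = \frac{K_I}{K_I + S_{LCFA}},

    which progressively reduces the uptake rate \rho_j as the LCFA concentration S_{LCFA} rises.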

  6. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  7. DOE Integrated Safeguards and Security (DISS) historical document archival and retrieval analysis, requirements and recommendations

    SciTech Connect

    Guyer, H.B.; McChesney, C.A.

    1994-10-07

    The primary objective of HDAR is to create a repository of historical personnel security documents and to provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve, and manage documents currently contained in the personnel security folders in DOE Operations Office vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.

  8. Integrating nTMS Data into a Radiology Picture Archiving System.

    PubMed

    Mäkelä, Teemu; Vitikainen, Anne-Mari; Laakso, Aki; Mäkelä, Jyrki P

    2015-08-01

    Navigated transcranial magnetic stimulation (nTMS) is employed for eloquent brain area localization prior to intraoperative direct cortical electrical stimulation and neurosurgery. No commercial archiving or file transfer protocol existed for these studies. The aim of our project was to establish a standardized protocol for the transfer of nTMS results and medical assessments to the end users, in order to improve data security and facilitate presurgical planning. The existing infrastructure of the hospital's Radiology Department was used. Hospital information systems and networks were configured to allow communication and archiving of the study results, and in-house software was written for file manipulation and transfer. A graphical user interface with description suggestions and user-defined text legends enabled an easy and straightforward workflow for annotating and archiving the results. The software and configurations were implemented and have been applied in studies of ten patients. The creation of the study protocol required the involvement of various professionals and interdepartmental cooperation. The introduction of the protocol has ended the previously recurrent involvement of staff in the file transfer phase and has improved cost-effectiveness. PMID:25617092

  9. An overview on integrated data system for archiving and sharing marine geology and geophysical data in Korea Institute of Ocean Science & Technology (KIOST)

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa

    2016-04-01

    We established and have operated an integrated data system for managing, archiving, and sharing the marine geology and geophysical data around Korea produced by various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). First, to keep the data system consistent under continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading, and DB maintenance. The system comprises two databases, ARCHIVE DB and GIS DB. ARCHIVE DB stores archived data in the original forms and formats supplied by data providers, while GIS DB manages all other compiled, processed, and reproduced data and information for data services and GIS application services. Oracle 11g was adopted as the relational DBMS, and open-source GIS techniques were applied for the GIS services: OpenLayers for the user interface, GeoServer for the application server, and PostGIS with PostgreSQL for the GIS database. For convenient use of geophysical data in SEG-Y format, a viewer program was developed and embedded in the system. Users can search data through the GIS user interface and save the results as a report.

  10. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of the coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).
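
    The underlying problem is the one-group, plane-parallel transport equation with anisotropic scattering, written in the standard Legendre-expanded form (our notation):

        \mu \frac{\partial \psi}{\partial x}(x,\mu) + \sigma_t \psi(x,\mu)
          = \frac{\sigma_s}{2} \sum_{l=0}^{L} (2l+1) f_l P_l(\mu)
            \int_{-1}^{1} P_l(\mu') \psi(x,\mu') \, d\mu',

    subject to a prescribed angular flux incident on the slab surface; the Green's Function Method builds the heterogeneous-slab solution from infinite-medium Green's functions obtained by Fourier transform inversion.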

  11. Integrating the IA2 Astronomical Archive in the VO: The VO-Dance Engine

    NASA Astrophysics Data System (ADS)

    Molinaro, M.; Laurino, O.; Smareglia, R.

    2012-09-01

    Virtual Observatory (VO) protocols and standards are maturing, and the astronomical community expects astrophysical data to be easily reachable. This means data centers have to intensify their efforts to provide the data they manage not only through proprietary portals and services but also through interoperable resources developed on the basis of the IVOA (International Virtual Observatory Alliance) recommendations. Here we present the work and ideas developed at the IA2 (Italian Astronomical Archive) data center hosted by INAF-OATs (Italian Institute for Astrophysics - Trieste Astronomical Observatory) to reach this goal. The core is VO-Dance (written in Java), an application that translates the content of existing databases and archive structures into VO-compliant resources. This application, in turn, relies on a database (potentially DBMS independent) to store the translation-layer information for each resource and auxiliary content (UCDs, field names, authorizations, policies, etc.). The last component is an administrative interface (currently developed using the Django Python framework) that allows the data center administrators to set up and maintain resources. Because the deployment is platform independent and the database and administrative interface are highly customizable, the package, once stable and easily distributable, can also be used by individual astronomers or groups to set up their own resources from their public datasets.
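
    A minimal sketch of what a translation-layer record could look like is given below: a mapping from native archive columns to VO metadata such as UCDs. The names and values are illustrative assumptions, not VO-Dance internals.

        # one resource's translation layer: native DB columns -> VO metadata
        translation_layer = {
            "resource": "ia2_example_images",
            "columns": {
                "ra_deg":   {"vo_name": "s_ra",      "ucd": "pos.eq.ra",  "unit": "deg"},
                "dec_deg":  {"vo_name": "s_dec",     "ucd": "pos.eq.dec", "unit": "deg"},
                "exp_time": {"vo_name": "t_exptime", "ucd": "time.duration;obs.exposure",
                             "unit": "s"},
            },
            "policy": "public",
        }

        def to_vo_row(native_row: dict) -> dict:
            """Rename a native DB row into its VO-facing representation."""
            cols = translation_layer["columns"]
            return {cols[k]["vo_name"]: v for k, v in native_row.items() if k in cols}

        print(to_vo_row({"ra_deg": 150.1, "dec_deg": 2.2, "exp_time": 300.0}))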

  12. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta, and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms that make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the databases available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
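
    Typical performance measures for such a benchmarking platform include the Dice overlap coefficient and the relative volume error; the NumPy sketch below is illustrative and not the platform's actual protocol.

        import numpy as np

        def dice(seg: np.ndarray, gt: np.ndarray) -> float:
            """Dice coefficient between a binary segmentation and ground truth."""
            seg, gt = seg.astype(bool), gt.astype(bool)
            inter = np.logical_and(seg, gt).sum()
            return 2.0 * inter / (seg.sum() + gt.sum())

        def relative_volume_error(seg, gt) -> float:
            """Signed volume difference relative to the ground-truth volume."""
            return (seg.sum() - gt.sum()) / gt.sum()

        gt = np.zeros((64, 64, 64), dtype=np.uint8)
        gt[16:48, 16:48, 16:48] = 1
        seg = np.roll(gt, 2, axis=0)   # toy segmentation, slightly shifted
        print(f"Dice = {dice(seg, gt):.3f}, RVE = {relative_volume_error(seg, gt):+.3%}")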

  13. Harmonising and linking biomedical and clinical data across disparate data archives to enable integrative cross-biobank research

    PubMed Central

    Spjuth, Ola; Krestyaninova, Maria; Hastings, Janna; Shen, Huei-Yi; Heikkinen, Jani; Waldenberger, Melanie; Langhammer, Arnulf; Ladenvall, Claes; Esko, Tõnu; Persson, Mats-Åke; Heggland, Jon; Dietrich, Joern; Ose, Sandra; Gieger, Christian; Ried, Janina S; Peters, Annette; Fortier, Isabel; de Geus, Eco JC; Klovins, Janis; Zaharenko, Linda; Willemsen, Gonneke; Hottenga, Jouke-Jan; Litton, Jan-Eric; Karvanen, Juha; Boomsma, Dorret I; Groop, Leif; Rung, Johan; Palmgren, Juni; Pedersen, Nancy L; McCarthy, Mark I; van Duijn, Cornelia M; Hveem, Kristian; Metspalu, Andres; Ripatti, Samuli; Prokopenko, Inga; Harris, Jennifer R

    2016-01-01

    A wealth of biospecimen samples are stored in modern globally distributed biobanks. Biomedical researchers worldwide need to be able to combine the available resources to improve the power of large-scale studies. A prerequisite for this effort is to be able to search and access phenotypic, clinical and other information about samples that are currently stored at biobanks in an integrated manner. However, privacy issues together with heterogeneous information systems and the lack of agreed-upon vocabularies have made specimen searching across multiple biobanks extremely challenging. We describe three case studies where we have linked samples and sample descriptions in order to facilitate global searching of available samples for research. The use cases include the ENGAGE (European Network for Genetic and Genomic Epidemiology) consortium comprising at least 39 cohorts, the SUMMIT (surrogate markers for micro- and macro-vascular hard endpoints for innovative diabetes tools) consortium and a pilot for data integration between a Swedish clinical health registry and a biobank. We used the Sample avAILability (SAIL) method for data linking: first, created harmonised variables and then annotated and made searchable information on the number of specimens available in individual biobanks for various phenotypic categories. By operating on this categorised availability data we sidestep many obstacles related to privacy that arise when handling real values and show that harmonised and annotated records about data availability across disparate biomedical archives provide a key methodological advance in pre-analysis exchange of information between biobanks, that is, during the project planning phase. PMID:26306643

  14. Harmonising and linking biomedical and clinical data across disparate data archives to enable integrative cross-biobank research.

    PubMed

    Spjuth, Ola; Krestyaninova, Maria; Hastings, Janna; Shen, Huei-Yi; Heikkinen, Jani; Waldenberger, Melanie; Langhammer, Arnulf; Ladenvall, Claes; Esko, Tõnu; Persson, Mats-Åke; Heggland, Jon; Dietrich, Joern; Ose, Sandra; Gieger, Christian; Ried, Janina S; Peters, Annette; Fortier, Isabel; de Geus, Eco J C; Klovins, Janis; Zaharenko, Linda; Willemsen, Gonneke; Hottenga, Jouke-Jan; Litton, Jan-Eric; Karvanen, Juha; Boomsma, Dorret I; Groop, Leif; Rung, Johan; Palmgren, Juni; Pedersen, Nancy L; McCarthy, Mark I; van Duijn, Cornelia M; Hveem, Kristian; Metspalu, Andres; Ripatti, Samuli; Prokopenko, Inga; Harris, Jennifer R

    2016-04-01

    A wealth of biospecimen samples are stored in modern globally distributed biobanks. Biomedical researchers worldwide need to be able to combine the available resources to improve the power of large-scale studies. A prerequisite for this effort is to be able to search and access phenotypic, clinical and other information about samples that are currently stored at biobanks in an integrated manner. However, privacy issues together with heterogeneous information systems and the lack of agreed-upon vocabularies have made specimen searching across multiple biobanks extremely challenging. We describe three case studies where we have linked samples and sample descriptions in order to facilitate global searching of available samples for research. The use cases include the ENGAGE (European Network for Genetic and Genomic Epidemiology) consortium comprising at least 39 cohorts, the SUMMIT (surrogate markers for micro- and macro-vascular hard endpoints for innovative diabetes tools) consortium and a pilot for data integration between a Swedish clinical health registry and a biobank. We used the Sample avAILability (SAIL) method for data linking: first, created harmonised variables and then annotated and made searchable information on the number of specimens available in individual biobanks for various phenotypic categories. By operating on this categorised availability data we sidestep many obstacles related to privacy that arise when handling real values and show that harmonised and annotated records about data availability across disparate biomedical archives provide a key methodological advance in pre-analysis exchange of information between biobanks, that is, during the project planning phase. PMID:26306643

  15. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  16. CALIPSO Borehole Instrumentation Project at Soufriere Hills Volcano, Montserrat, BWI: Data Acquisition, Telemetry, Integration, and Archival Systems

    NASA Astrophysics Data System (ADS)

    Mattioli, G. S.; Linde, A. T.; Sacks, I. S.; Malin, P. E.; Shalev, E.; Elsworth, D.; Hidayat, D.; Voight, B.; Young, S. R.; Dunkley, P. N.; Herd, R.; Norton, G.

    2003-12-01

    The CALIPSO Project (Caribbean Andesite Lava Island-volcano Precision Seismo-geodetic Observatory) has greatly enhanced the monitoring and scientific infrastructure at the Soufriere Hills Volcano, Montserrat, with the recent installation of an integrated array of borehole and surface geophysical instrumentation at four sites. Each site was designed to be sufficiently hardened to withstand extreme meteorological events (e.g. hurricanes) and to require only minimal routine maintenance over an expected observatory lifespan of >30 y. The sensor package at each site includes: a single-component, very broad band, Sacks-Evertson strainmeter; a three-component seismometer (~Hz to 1 kHz); a Pinnacle Technologies series 5000 tiltmeter; and a surface Ashtech u-Z CGPS station with choke ring antenna, SCIGN mount, and radome. This instrument package is similar to that envisioned by the Plate Boundary Observatory for deployment on EarthScope target volcanoes in western North America, and thus the CALIPSO Project may be considered a prototype PBO installation with real field testing on a very active and dangerous volcano. Borehole sites were installed in series and data acquisition began immediately after the sensors were grouted into position at 200 m depth, with the first completed at Trants (5.8 km from the dome) in December 2002, then Air Studios (5.2 km), Geralds (9.4 km), and Olveston (7.0 km) in March 2003. Analog data from the strainmeter (50 Hz sync) and seismometer (200 Hz) were initially digitized and locally archived using RefTek 72A-07 data acquisition systems (DAS) on loan from the PASSCAL instrument pool. Data were downloaded manually to a laptop approximately every month from initial installation until August 2003, when new systems were installed. Approximately 0.2 Tb of raw data in SEGY format have already been acquired and are currently archived at UARK for analysis by the CALIPSO science team. The July 12th dome collapse and vulcanian explosion events were recorded at 3 of the 4

  17. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Deadly cyclone-related events occur almost every year in the region, and such extremes are expected to increase both in frequency and in magnitude around Southeast Asia over the course of global climate change. Our ability to confront such hazardous events is limited by the available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, the early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water adversely affects the regional economy by forfeiting water resources that still have value for power generation and for agricultural and industrial use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere introduces uncertainty into model predictions over time. Under these circumstances we present a novel approach to optimizing the conflicting objectives of preventing flood damage via a priori dam release while sustaining a sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value (GPV) product against observed rainfall, uncertainty in the model prediction is taken into account probabilistically and applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive an ensemble flood forecast. With dam status information taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and
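
    A minimal sketch of the ensemble idea described above is given below: candidate pre-release volumes are scored against each ensemble inflow forecast and the one with the lowest expected penalty is chosen. All numbers, the cost weights, and the simple storage balance are illustrative assumptions, and a plain grid search stands in for the shuffled complex evolution algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        capacity, current = 100.0, 80.0                    # reservoir storage (arbitrary units)
        ensemble_inflow = rng.gamma(2.0, 15.0, size=50)    # ensemble of forecast storm inflows

        def expected_penalty(release: float) -> float:
            """Mean cost of spilling (flood risk proxy) plus cost of lost storage."""
            storage = current - release + ensemble_inflow
            spill = np.maximum(storage - capacity, 0.0)          # water that must be spilled
            shortage = np.maximum(0.5 * capacity - storage, 0.0) # value of water given up
            return float(np.mean(10.0 * spill + 1.0 * shortage))

        candidates = np.linspace(0.0, 60.0, 61)
        best = min(candidates, key=expected_penalty)
        print(f"recommended a priori release: {best:.1f} units")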

  19. Supporting users through integrated retrieval, processing, and distribution systems at the Land Processes Distributed Active Archive Center

    USGS Publications Warehouse

    Kalvelage, Thomas A.; Willems, Jennifer

    2005-01-01

    The LP DAAC is the primary archive for the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data; it is the only facility in the United States that archives, processes, and distributes data from the Advanced Spaceborne Thermal Emission/Reflection Radiometer (ASTER) on NASA's Terra spacecraft; and it is responsible for the archive and distribution of “land products” generated from data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra and Aqua satellites.

  20. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  1. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  2. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    SciTech Connect

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of
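
    For context, the reactivity worth quoted in dollars above follows the usual reactor-physics definitions (standard relations, not specific to this report):

        \rho = \frac{k_{eff} - k_{ref}}{k_{eff} \, k_{ref}},
        \qquad \rho(\$) = \frac{\rho}{\beta_{eff}},

    so the stated 1$ limit corresponds to a reactivity insertion equal to the effective delayed neutron fraction of the ATR-C core.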

  3. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    SciTech Connect

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  4. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  5. The NASA Exoplanet Archive

    NASA Astrophysics Data System (ADS)

    Ramirez, Solange; Akeson, R. L.; Ciardi, D.; Kane, S. R.; Plavchan, P.; von Braun, K.; NASA Exoplanet Archive Team

    2013-01-01

    The NASA Exoplanet Archive is an online service that compiles and correlates astronomical information on extrasolar planets and their host stars. The data in the archive include exoplanet parameters (such as orbits, masses, and radii), associated data (such as published radial velocity curves, photometric light curves, images, and spectra), and stellar parameters (such as magnitudes, positions, and temperatures). All the archived data are linked to the original literature reference. The archive provides tools to work with these data, including interactive tables (with plotting capabilities), an interactive light curve viewer, a periodogram service, a transit and ephemeris calculator, and an application program interface. The NASA Exoplanet Archive is the U.S. portal to the public CoRoT mission data for both the Exoplanet and Asteroseismology data sets. The NASA Exoplanet Archive also serves data related to Kepler Objects of Interest (Planet Candidates and the Kepler False Positives, KOI) in an integrated and interactive table containing stellar and transit parameters. In support of the Kepler Extended Mission, the NASA Exoplanet Archive will host transit modeling parameters, centroid results, several statistical values, and summary and detailed reports for all transit-like events identified by the Kepler Pipeline. To access this information, visit us at: http://exoplanetarchive.ipac.caltech.edu

  6. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.

  7. Sensor to User - NASA/EOS Data for Coastal Zone Management Applications Developed from Integrated Analyses: Verification, Validation and Benchmark Report

    NASA Technical Reports Server (NTRS)

    Hall, Callie; Arnone, Robert

    2006-01-01

    The NASA Applied Sciences Program seeks to transfer NASA data, models, and knowledge into the hands of end-users by forming links with partner agencies and associated decision support tools (DSTs). Through the NASA REASoN (Research, Education and Applications Solutions Network) Cooperative Agreement, the Oceanography Division of the Naval Research Laboratory (NRLSSC) is developing new products through the integration of data from NASA Earth-Sun System assets with coastal ocean forecast models and other available data to enhance coastal management in the Gulf of Mexico. The recipient federal agency for this research effort is the National Oceanic and Atmospheric Administration (NOAA). The contents of this report detail the effort to further the goals of the NASA Applied Sciences Program by demonstrating the use of NASA satellite products combined with data-assimilating ocean models to provide near real-time information to maritime users and coastal managers of the Gulf of Mexico. This effort provides new and improved capabilities for monitoring, assessing, and predicting the coastal environment. Coastal managers can exploit these capabilities through enhanced DSTs at federal, state and local agencies. The project addresses three major issues facing coastal managers: 1) Harmful Algal Blooms (HABs); 2) hypoxia; and 3) freshwater fluxes to the coastal ocean. A suite of ocean products capable of describing Ocean Weather is assembled on a daily basis as the foundation for this semi-operational multiyear effort. This continuous real-time capability brings decision makers a new ability to monitor both normal and anomalous coastal ocean conditions with a steady flow of satellite and ocean model conditions. Furthermore, as the baseline data sets are used more extensively and the customer list increases, customer feedback is obtained and additional customized products are developed and provided to decision makers. Continual customer feedback and response with new improved

  8. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  9. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on, the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to Benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional Benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  10. The Planetary Archive

    NASA Astrophysics Data System (ADS)

    Penteado, Paulo F.; Trilling, David; Szalay, Alexander; Budavári, Tamás; Fuentes, César

    2014-11-01

    We are building the first system that will allow efficient data mining in the astronomical archives for observations of Solar System bodies. While the Virtual Observatory has enabled data-intensive research making use of large collections of observations across multiple archives, Planetary Science has largely been denied this opportunity: most astronomical data services are built based on sky positions, and moving objects are often filtered out. To identify serendipitous observations of Solar System objects, we ingest the archive metadata. The coverage of each image in an archive is a volume in a 3D space (RA, Dec, time), which we can represent efficiently through a hierarchical triangular mesh (HTM) for the spatial dimensions, plus a contiguous time interval. In this space, an asteroid occupies a curve, which we determine by integrating its orbit into the past. Thus when an asteroid trajectory intercepts the volume of an archived image, we have a possible observation of that body. Our pipeline then looks in the archive's catalog for a source with the corresponding coordinates, to retrieve its photometry. All these matches are stored in a database, which can be queried by object identifier. This database consists of archived observations of known Solar System objects. This means that it grows not only from the ingestion of new images, but also from the growth in the number of known objects. As new bodies are discovered, our pipeline can find archived observations where they could have been recorded, providing colors for these newly-found objects. This growth becomes more relevant with the new generation of wide-field surveys, particularly LSST. We also present one use case of our prototype archive: after ingesting the metadata for SDSS, 2MASS and GALEX, we were able to identify serendipitous observations of Solar System bodies in these 3 archives. Cross-matching these occurrences provided us with colors from the UV to the IR, a much wider spectral range than that
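
    As a rough illustration of the cross-match step described above, the sketch below tests whether points along an asteroid's integrated ephemeris fall inside an image's time interval and (here, circular) sky footprint. It is a simplified stand-in: the actual pipeline indexes footprints with a hierarchical triangular mesh, and all records shown are hypothetical.

```python
# Simplified stand-in for the archive cross-match described above. The real
# pipeline indexes image footprints with a hierarchical triangular mesh (HTM);
# here a plain angular-separation test against circular footprints illustrates
# the same idea. All image and ephemeris records below are hypothetical.
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_s = (math.sin(d1) * math.sin(d2)
             + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_s))))

def possible_observations(images, ephemeris, radius_deg=1.0):
    """Yield (image, ephemeris point) pairs where the object may appear.

    images:    dicts with 'ra', 'dec' (deg) and 't_start', 't_end' (MJD)
    ephemeris: dicts with 'ra', 'dec' (deg) and 't' (MJD) for one asteroid
    """
    for img in images:
        for pt in ephemeris:
            if (img["t_start"] <= pt["t"] <= img["t_end"]
                    and ang_sep_deg(img["ra"], img["dec"],
                                    pt["ra"], pt["dec"]) <= radius_deg):
                yield img, pt

images = [{"ra": 150.1, "dec": 2.2, "t_start": 55000.00, "t_end": 55000.01}]
ephem = [{"ra": 150.4, "dec": 2.0, "t": 55000.005}]
print(list(possible_observations(images, ephem)))   # one candidate match
```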

  11. Taming the "Beast": An Archival Management System Based on EAD

    ERIC Educational Resources Information Center

    Levine, Jennie A.; Evans, Jennifer; Kumar, Amit

    2006-01-01

    In April 2005, the University of Maryland Libraries launched "ArchivesUM" (www.lib.umd.edu/archivesum), an online database of finding aids for manuscript and archival collections using Encoded Archival Description (EAD). "ArchivesUM," however, is only the publicly available end-product of a much larger project-an integrated system that ties…

  12. An integrated multi-medial approach to cultural heritage conservation and documentation: from remotely-sensed lidar imaging to historical archive data

    NASA Astrophysics Data System (ADS)

    Raimondi, Valentina; Palombi, Lorenzo; Morelli, Annalisa; Chimenti, Massimo; Penoni, Sara; Dercks, Ute; Andreotti, Alessia; Bartolozzi, Giovanni; Bini, Marco; Bonaduce, Ilaria; Bracci, Susanna; Cantisani, Emma; Colombini, M. Perla; Cucci, Costanza; Fenelli, Laura; Galeotti, Monica; Malesci, Irene; Malquori, Alessandra; Massa, Emmanuela; Montanelli, Marco; Olmi, Roberto; Picollo, Marcello; Pierelli, Louis D.; Pinna, Daniela; Riminesi, Cristiano; Rutigliano, Sara; Sacchi, Barbara; Stella, Sergio; Tonini, Gabriella

    2015-10-01

    Fluorescence LIDAR imaging has already been proposed in several studies as a valuable technique for the remote diagnostics and documentation of monumental surfaces, with main applications referring to the detection and classification of biodeteriogens, the characterization of lithotypes, the detection and characterization of protective coatings, and also of some types of pigments. However, the conservation and documentation of cultural heritage is an application field where a highly multi-disciplinary, integrated approach is typically required. In this respect, the fluorescence LIDAR technique can be particularly useful to provide an overall assessment of the whole investigated surface, which can be profitably used to identify those specific areas in which further analytical measurements or sampling for laboratory analysis are needed. This paper presents some representative examples of the research carried out in the frame of the PRIMARTE project, with particular reference to the LIDAR data and their significance in conjunction with the other applied techniques. One of the major objectives of the project, actually, was the development of an integrated methodology for the combined use of data by using diverse techniques: from fluorescence LIDAR remote sensing to UV fluorescence and IR imaging, from IR thermography, georadar, 3D electric tomography to microwave reflectometry, from analytical techniques (FORS, FT-IR, GC-MS) to high resolution photo-documentation and historical archive studies. This method was applied to a 'pilot site', a chapel dating back to the fourteenth century, situated at the 'Le Campora' site in the vicinity of Florence. All data have been integrated in a multi-medial tool for archiving, management, exploitation and dissemination purposes.

  13. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  14. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  15. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
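
    The paragraph above recommends code-verification benchmarks built on manufactured solutions. A minimal, self-contained illustration of that idea (not taken from the cited report) is sketched below: choose u(x) = sin(πx), derive the source term for −u″ = f, solve with a second-order scheme, and check that the observed order of accuracy approaches 2.

```python
# Minimal sketch of a code-verification benchmark using a manufactured solution:
# u(x) = sin(pi x) gives f = pi^2 sin(pi x) for -u'' = f on [0, 1] with
# u(0) = u(1) = 0. Solve with second-order finite differences and confirm the
# expected convergence rate. Purely illustrative; not from the cited report.
import numpy as np

def solve_poisson_1d(n):
    """Solve -u'' = f on [0,1], homogeneous Dirichlet BCs, n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)                  # manufactured source term
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f)

errors = []
for n in (20, 40, 80):
    x, u = solve_poisson_1d(n)
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("observed orders of accuracy:", orders)   # expected to approach 2
```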

  16. Archiving Derrida

    ERIC Educational Resources Information Center

    Morris, Marla

    2003-01-01

    Derrida's archive, broadly speaking, is brilliantly mad, for he digs exegetically into the most difficult textual material and combines the most unlikely texts--from Socrates to Freud, from postcards to encyclopedias, from madness(es) to the archive, from primal scenes to death. In this paper, the author would like to do a brief study of the…

  17. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
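
    The sketch below illustrates the general idea only, not the authors' exact hybrid BMD implementation: when a study reports just the mean of exposed/control score ratios for a group of matched pairs, the likelihood of that summary under candidate dose-response parameters can be approximated by simulating many studies of the same size. All model forms and numbers below are assumptions.

```python
# Illustrative sketch (not the cited hybrid-BMD method): approximate the
# likelihood of a reported summary statistic by Monte Carlo simulation of the
# study design under candidate parameters, then evaluate the density of the
# simulated statistic at the observed value.
import numpy as np

rng = np.random.default_rng(0)

def simulated_mean_ratio(dose, beta, sigma, n, reps=20000):
    """Simulate the group-mean exposed/control ratio for n matched pairs."""
    control = rng.normal(100.0, sigma, size=(reps, n))            # hypothetical scores
    exposed = rng.normal(100.0 * (1.0 - beta * dose), sigma, size=(reps, n))
    return (exposed / control).mean(axis=1)

def approx_log_likelihood(observed_mean, dose, beta, sigma, n):
    """Normal approximation to the density of the simulated summary statistic."""
    sims = simulated_mean_ratio(dose, beta, sigma, n)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * ((observed_mean - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Hypothetical reported summary: mean ratio 0.95 from 25 matched pairs at dose 30
print(approx_log_likelihood(0.95, dose=30.0, beta=0.002, sigma=10.0, n=25))
```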

  18. A performance geodynamo benchmark

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using the spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. We consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark; in the present study, we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with much finer spatial resolutions to investigate computational capability (e

  19. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
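
    The first-tier screening step described above reduces to a simple comparison of measured media concentrations against the NOAEL-based benchmarks; a minimal sketch with hypothetical placeholder values follows.

```python
# Minimal sketch of the first-tier screening step described above: compare
# measured media concentrations against NOAEL-based benchmarks and retain any
# exceedance as a contaminant of potential concern (COPC). All benchmark and
# concentration values below are hypothetical placeholders, not values from the
# cited report.

noael_benchmarks_mg_per_l = {   # hypothetical water benchmarks for one receptor
    "cadmium": 0.01,
    "zinc": 1.2,
    "pcb-1254": 0.0002,
}

measured_mg_per_l = {"cadmium": 0.004, "zinc": 3.5, "pcb-1254": 0.00015}

copcs = [chem for chem, conc in measured_mg_per_l.items()
         if conc > noael_benchmarks_mg_per_l[chem]]
print("retain for further assessment:", copcs)   # -> ['zinc']
```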

  20. Seasonal Distributions and Migrations of Northwest Atlantic Swordfish: Inferences from Integration of Pop-Up Satellite Archival Tagging Studies

    PubMed Central

    Neilson, John D.; Loefer, Josh; Prince, Eric D.; Royer, François; Calmettes, Beatriz; Gaspar, Philippe; Lopez, Rémy; Andrushchenko, Irene

    2014-01-01

    Data sets from three laboratories conducting studies of movements and migrations of Atlantic swordfish (Xiphias gladius) using pop-up satellite archival tags were pooled, and processed using a common methodology. From 78 available deployments, 38 were selected for detailed examination based on deployment duration. The points of deployment ranged from southern Newfoundland to the Straits of Florida. The aggregate data comprise the most comprehensive information describing migrations of swordfish in the Atlantic. Challenges in using data from different tag manufacturers are discussed. The relative utility of geolocations obtained with light is compared with results derived from temperature information for this deep-diving species. The results show that fish tagged off North America remain in the western Atlantic throughout their deployments. This is inconsistent with the model of stock structure used in assessments conducted by the International Commission for the Conservation of Atlantic Tunas, which assumes that fish mix freely throughout the North Atlantic. PMID:25401964

  1. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  2. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  3. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (the Jezebel plutonium critical assembly), and the resulting k-effective values have been compared with those of the KENO and MCNP codes.
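
    Such code-to-experiment comparisons are commonly summarized as calculated-to-experimental (C/E) ratios of k-effective; the sketch below shows the arithmetic with placeholder values rather than the actual TWODANT, KENO, or MCNP results.

```python
# Sketch of how a code-to-experiment k-effective comparison is often tabulated:
# the calculated-to-experimental (C/E) ratio and its deviation in pcm for each
# benchmark case. Values below are placeholders, not results from the study.

experiments = {             # hypothetical k-eff values per critical configuration
    "jezebel-1": {"experiment": 1.0000, "TWODANT": 0.9987, "MCNP": 1.0002},
    "jezebel-2": {"experiment": 1.0000, "TWODANT": 1.0015, "MCNP": 0.9995},
}

for case, k in experiments.items():
    for code in ("TWODANT", "MCNP"):
        ce = k[code] / k["experiment"]
        print(f"{case:10s} {code:8s} C/E = {ce:.4f}  ({(ce - 1) * 1e5:+.0f} pcm)")
```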

  5. Data archiving

    NASA Technical Reports Server (NTRS)

    Pitts, David

    1991-01-01

    The viewgraphs of a discussion on data archiving presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop is included. The mass storage system at the National Center for Atmospheric Research (NCAR) is described. Topics covered in the presentation include product goals, data library systems (DLS), client system commands, networks, archival devices, DLS features, client application systems, multiple mass storage devices, and system growth.

  6. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  7. Integration of temporal subtraction and nodule detection system for digital chest radiographs into picture archiving and communication system (PACS): four-year experience.

    PubMed

    Sakai, Shuji; Yabuuchi, Hidetake; Matsuo, Yoshio; Okafuji, Takashi; Kamitani, Takeshi; Honda, Hiroshi; Yamamoto, Keiji; Fujiwara, Keiichi; Sugiyama, Naoki; Doi, Kunio

    2008-03-01

    Since May 2002, temporal subtraction and nodule detection systems for digital chest radiographs have been integrated into our hospital's picture archiving and communication systems (PACS). Image data of digital chest radiographs were stored in PACS with the digital image and communication in medicine (DICOM) protocol. Temporal subtraction and nodule detection images were produced automatically in an exclusive server and delivered with current and previous images to the work stations. The problems that we faced and the solutions that we arrived at were analyzed. We encountered four major problems. The first problem, as a result of the storage of the original images' data with the upside-down, reverse, or lying-down positioning on portable chest radiographs, was solved by postponing the original data storage for 30 min. The second problem, the variable matrix sizes of chest radiographs obtained with flat-panel detectors (FPDs), was solved by improving the computer algorithm to produce consistent temporal subtraction images. The third problem, the production of temporal subtraction images of low quality, could not be solved fundamentally when the original images were obtained with different modalities. The fourth problem, an excessive false-positive rate on the nodule detection system, was solved by adjusting this system to chest radiographs obtained in our hospital. Integration of the temporal subtraction and nodule detection system into our hospital's PACS was customized successfully; this experience may be helpful to other hospitals. PMID:17333415

  8. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  9. Strategy of DIN-PACS benchmark testing

    NASA Astrophysics Data System (ADS)

    Norton, Gary S.; Lyche, David K.; Richardson, Nancy E.; Thomas, Jerry A.; Romlein, John R.; Cawthon, Michael A.; Lawrence, David P.; Shelton, Philip D.; Parr, Laurence F.; Richardson, Ronald R., Jr.; Johnson, Steven L.

    1998-07-01

    The Digital Imaging Network -- Picture Archive and Communication System (DIN-PACS) procurement is the Department of Defense's (DoD) effort to bring military medical treatment facilities into the twenty-first century with nearly filmless digital radiology departments. The DIN-PACS procurement is unique from most of the previous PACS acquisitions in that the Request for Proposals (RFP) required extensive benchmark testing prior to contract award. The strategy for benchmark testing was a reflection of the DoD's previous PACS and teleradiology experiences. The DIN-PACS Technical Evaluation Panel (TEP) consisted of DoD and civilian radiology professionals with unique clinical and technical PACS expertise. The TEP considered nine items, key functional requirements to the DIN-PACS acquisition: (1) DICOM Conformance, (2) System Storage and Archive, (3) Workstation Performance, (4) Network Performance, (5) Radiology Information System (RIS) functionality, (6) Hospital Information System (HIS)/RIS Interface, (7) Teleradiology, (8) Quality Control, and (9) System Reliability. The development of a benchmark test to properly evaluate these key requirements would require the TEP to make technical, operational, and functional decisions that had not been part of a previous PACS acquisition. Developing test procedures and scenarios that simulated inputs from radiology modalities and outputs to soft copy workstations, film processors, and film printers would be a major undertaking. The goals of the TEP were to fairly assess each vendor's proposed system and to provide an accurate evaluation of each system's capabilities to the source selection authority, so the DoD could purchase a PACS that met the requirements in the RFP.

  10. The FTIO Benchmark

    NASA Technical Reports Server (NTRS)

    Fagerstrom, Frederick C.; Kuszmaul, Christopher L.; Woo, Alex C. (Technical Monitor)

    1999-01-01

    We introduce a new benchmark for measuring the performance of parallel input/output. This benchmark has flexible initialization, size, and scaling properties that allow it to satisfy seven criteria for practical parallel I/O benchmarks. We obtained performance results while running on an SGI Origin2000 computer with various numbers of processors: with 4 processors the performance was 68.9 Mflop/s with 0.52 of the time spent on I/O; with 8 processors the performance was 139.3 Mflop/s with 0.50 of the time spent on I/O; with 16 processors the performance was 173.6 Mflop/s with 0.43 of the time spent on I/O; and with 32 processors the performance was 259.1 Mflop/s with 0.47 of the time spent on I/O.

  11. Benchmarking. It's the future.

    PubMed

    Fazzi, Robert A; Agoglia, Robert V; Harlow, Lynn

    2002-11-01

    You can't go to a state conference, read a home care publication or log on to an Internet listserv ... without hearing or reading someone ... talk about benchmarking. What are your average case mix weights? How many visits are your nurses averaging per day? What is your average caseload for full time nurses in the field? What is your profit or loss per episode? The benchmark systems now available in home care potentially can serve as an early warning and partial protection for agencies. Agencies can collect data, analyze the outcomes, and through comparative benchmarking, determine where they are competitive and where they need to improve. These systems clearly provide agencies with the opportunity to be more proactive. PMID:12436898

  12. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
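
    For context, the conventional randomized-benchmarking analysis that such work builds on fits the average sequence fidelity to the decay model F(m) = A·p^m + B and converts p to an average error rate. The sketch below fits synthetic data with SciPy; it does not reproduce the paper's accelerated Bayesian approach, and all numbers are assumed.

```python
# Sketch of the conventional randomized-benchmarking analysis that the paper
# improves upon: fit the average sequence fidelity to F(m) = A * p**m + B and
# convert the decay parameter p to an average error rate. The data are
# synthetic; the accelerated Bayesian method of the paper is not reproduced.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(1)
m = np.arange(1, 201, 10)                         # sequence lengths
true_A, true_p, true_B = 0.5, 0.995, 0.5
F = rb_decay(m, true_A, true_p, true_B) + rng.normal(0, 0.003, m.size)

(A, p, B), _ = curve_fit(rb_decay, m, F, p0=(0.5, 0.99, 0.5))
d = 2                                             # single-qubit Hilbert-space dimension
avg_error = (d - 1) / d * (1 - p)                 # standard RB error-rate formula
print(f"estimated p = {p:.4f}, average gate error ~ {avg_error:.2e}")
```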

  13. Moving the Archivist Closer to the Creator: Implementing Integrated Archival Policies for Born Digital Photography at Colleges and Universities

    ERIC Educational Resources Information Center

    Keough, Brian; Wolfe, Mark

    2012-01-01

    This article discusses integrated approaches to the management and preservation of born digital photography. It examines the changing practices among photographers, and the needed relationships between the photographers using digital technology and the archivists responsible for acquiring their born digital images. Special consideration is given…

  14. Radiation Embrittlement Archive Project

    SciTech Connect

    Klasky, Hilda B; Bass, Bennett Richard; Williams, Paul T; Phillips, Rick; Erickson, Marjorie A; Kirk, Mark T; Stevens, Gary L

    2013-01-01

    The Radiation Embrittlement Archive Project (REAP), which is being conducted by the Probabilistic Integrity Safety Assessment (PISA) Program at Oak Ridge National Laboratory under funding from the U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Regulatory Research, aims to provide an archival source of information about the effect of neutron radiation on the properties of reactor pressure vessel (RPV) steels. Specifically, this project is an effort to create an Internet-accessible RPV steel embrittlement database. The project's website, https://reap.ornl.gov, provides information in two forms: (1) a document archive with surveillance capsule reports and related technical reports, in PDF format, for the 104 commercial nuclear power plants (NPPs) in the United States, with similar reports from other countries; and (2) a relational database archive with detailed information extracted from the reports. The REAP project focuses on data collected from surveillance capsule programs for light-water moderated, nuclear power reactor vessels operated in the United States, including data on Charpy V-notch energy testing results, tensile properties, composition, exposure temperatures, neutron flux (rate of irradiation damage), and fluence (fast neutron fluence, a cumulative measure of irradiation for E > 1 MeV). Additionally, REAP contains data from surveillance programs conducted in other countries. REAP is presently being extended to focus on embrittlement data analysis as well. This paper summarizes the current status of the REAP database and highlights opportunities to access the data and to participate in the project.

  15. Archiving TNG Data

    NASA Astrophysics Data System (ADS)

    Pasian, Fabio

    The TNG (Telescopio Nazionale Galileo), a 3.5 meter telescope derived from ESO's NTT which will see first light in La Palma during 1996, will be one of the first cases where operations will be carried out following an end-to-end data management scheme. An archive of both technical and scientific data will be produced directly at the telescope as a natural extension of the data handling chain. This is possible thanks to the total integration of the data management facilities with the telescope control system. In this paper, the archive system at the TNG is described in terms of archiving facilities, production of hard media and exportable database tables, on-line technical, calibration and transit archives, interaction with the quick-look utilities for the different instruments, and data access and retrieval mechanisms. The interfaces of the system with other TNG subsystems are discussed, and first results obtained testing a prototype implementation with a simulated data flow are shown.

  16. Software Archive Related Issues

    NASA Technical Reports Server (NTRS)

    Angelini, Lorella

    2008-01-01

    With the archive opening of the major X-ray and gamma-ray missions, the school is intended to provide information on the resources available in the data archive and the public software. This talk reviews the archive content, the data formats for the major active missions (Chandra, XMM-Newton, Swift, RXTE, Integral and Suzaku), and the available software for each of these missions. It will explain the FITS format in general and the specific layout for the most popular missions, explaining the role of keywords and how they fit into the multimission standard approach embraced by the high-energy community. Specifically, it reviews: the different data levels and the software applicable to each; the popular/standard methods of analysis for high-level products such as spectra, timing and images; the role of calibration in the multimission approach; and how to navigate the archive query databases. It will also present how the school is organized and how the information provided will be relevant to each of the afternoon science projects that will be proposed to the students and led by a project leader.
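
    As a small illustration of inspecting the FITS layout discussed above, the sketch below uses astropy.io.fits (the choice of tool and the file name are assumptions, not prescribed by the talk) to list the HDUs of an archived file and read a few standard header keywords.

```python
# Minimal sketch of inspecting a FITS file's layout and keywords. The file name
# is a placeholder for any archived event file or image; astropy is assumed to
# be available.
from astropy.io import fits

with fits.open("archived_observation.fits") as hdul:
    hdul.info()                              # list HDUs: primary, events, GTIs, ...
    primary = hdul[0].header
    # Standard keywords carry the multimission metadata mentioned in the talk.
    for key in ("TELESCOP", "INSTRUME", "OBJECT", "DATE-OBS"):
        print(key, "=", primary.get(key, "not present"))
```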

  17. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

  18. Issues surrounding PACS archiving to external, third-party DICOM archives.

    PubMed

    Langer, Steve

    2009-03-01

    In larger health care imaging institutions, it is becoming increasingly obvious that separate image archives for every department are not cost effective or scalable. The solution is to have each department's picture archiving and communication system (PACS) keep only a local cache and archive to an enterprise archive that drives a universal clinical viewer. It sounds simple, but how many PACS can truly work with a third-party Integrating the Healthcare Enterprise (IHE)-compliant image archive? The answer is somewhat disappointing. PMID:18449605

  19. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks. We also mention NAS's future plans for the NPB.

  20. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  1. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  2. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training is changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  3. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  4. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  5. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  6. Elastic-Plastic J-Integral Solutions or Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
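
    The sketch below shows only the mechanics of interpolating over such a solution space, using SciPy on a hypothetical coarse grid of normalized J values; the 600-model database and the authors' interpolation methodology are not reproduced here.

```python
# Sketch of the interpolation mechanics only: a hypothetical coarse grid of a
# normalized J-integral quantity over (a/c, a/B, E/ys, n), interpolated with
# SciPy's RegularGridInterpolator. The actual tabulated solutions from the
# appendices are not reproduced; the table values below are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

a_over_c = np.array([0.2, 0.6, 1.0])
a_over_B = np.array([0.2, 0.5, 0.8])
E_over_ys = np.array([100.0, 500.0, 1000.0])
n_hard = np.array([3.0, 10.0, 20.0])

# Placeholder table of a normalized J quantity on the 3x3x3x3 grid.
rng = np.random.default_rng(42)
table = 1.0 + rng.random((3, 3, 3, 3))

interp = RegularGridInterpolator((a_over_c, a_over_B, E_over_ys, n_hard), table)
query = np.array([[0.4, 0.35, 300.0, 7.0]])   # an intermediate crack/material case
print(interp(query))                           # interpolated normalized J estimate
```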

  7. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  8. California State Archives.

    ERIC Educational Resources Information Center

    Rea, Jay W.

    The first paper on the California State Archives treats the administrative status, legal basis of the archives program, and organization of the archives program. The problem areas in this State's archival program are discussed at length. The second paper gives a crude sketch of the legal and administrative history of the California State Archives,…

  9. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  10. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  11. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  12. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  13. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
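
    The rank layout described above is easy to check with a few lines (this is not the benchmark's source code): num_cores ranks sit on the node under test, followed by num_nbors neighbor ranks per core rank, for num_cores * (1 + num_nbors) ranks in total.

```python
# Quick check of the rank layout described above (not the benchmark's actual
# source): the first num_cores ranks live on the node under test, and each core
# rank then gets a consecutive block of num_nbors neighbor ranks.

def rank_layout(num_cores, num_nbors):
    core_ranks = list(range(num_cores))
    neighbors = {c: list(range(num_cores + c * num_nbors,
                               num_cores + (c + 1) * num_nbors))
                 for c in core_ranks}
    total = num_cores * (1 + num_nbors)
    return core_ranks, neighbors, total

cores, nbrs, total = rank_layout(num_cores=8, num_nbors=4)
print(total)          # 8 + 8 * 4 = 40 ranks
print(nbrs[0])        # neighbors of core rank 0: [8, 9, 10, 11]
```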

  14. Sequoia Messaging Rate Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.

  15. MPI Multicore Linktest Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  16. Benchmarking the billing office.

    PubMed

    Woodcock, Elizabeth W; Williams, A Scott; Browne, Robert C; King, Gerald

    2002-09-01

    Benchmarking data related to human and financial resources in the billing process allows an organization to allocate its resources more effectively. Analyzing human resources used in the billing process helps determine cost-effective staffing. The deployment of human resources in a billing office affects timeliness of payment and ability to maximize revenue potential. Analyzing financial resources helps an organization allocate those resources more effectively. PMID:12235973

  17. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.
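
    A Python analogue of the benchmark's default Laplace-type test problem is sketched below using the pyamg package (an assumption on my part; AMG2013 itself is C code built on hypre's BoomerAMG) to set up a 2-D Poisson operator and solve it with classical algebraic multigrid.

```python
# Python analogue of the benchmark's default Laplace-type test problem, solved
# with algebraic multigrid via the pyamg package (an assumption -- AMG2013
# itself is C code derived from hypre's BoomerAMG solver).
import numpy as np
import pyamg

A = pyamg.gallery.poisson((200, 200), format="csr")   # 2-D Laplace operator
b = np.random.default_rng(0).random(A.shape[0])       # arbitrary right-hand side

ml = pyamg.ruge_stuben_solver(A)                       # classical (Ruge-Stuben) AMG
residuals = []
x = ml.solve(b, tol=1e-8, residuals=residuals)

print(ml)                                              # multigrid hierarchy summary
print("iterations:", len(residuals) - 1,
      "final relative residual:", residuals[-1] / residuals[0])
```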

  18. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  19. From archives to picture archiving and communications systems.

    PubMed

    Verhelle, F; Van den Broeck, R; Osteaux, M

    1995-12-01

    Keeping organised and consistent film archives is a well-known problem in the radiological world. With the introduction of digital modalities (CT, MR, ...) the idea of archiving the image data in a new way was born. The aim is to keep the information in digital form from acquisition to destination, e.g. archives, viewing station, teleradiology, a task that was not as easy as some people believed, owing to limited technical possibilities and the lack of standards for medical image data. These factors made it difficult to integrate components of different origins into a digital Picture Archiving and Communication environment. How should we attempt to integrate the analogue examinations? It is unreasonable to exclude the conventional XR-examination, which accounts for more than 70% of the total production. We believe that there will be a migration to light-stimulable phosphor plates, but these are not yet user friendly and certainly not cost effective. We have similar problems of immature technology as we had for the digital modalities. In a first stage the bridge between the two worlds can be crossed by means of converters (laser scanner, CCD camera). PACS will become a reality in the future as almost all examinations will be digitized. We are now in a transition period with its inconveniences, but we will gain a lot soon. The migration from piles of films through a computer-assisted radiological archiving system to a fully digital environment is sketched in a historical survey. PMID:8576029

  20. Collection of Neutronic VVER Reactor Benchmarks.

    Energy Science and Technology Software Center (ESTSC)

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  1. The new European Hubble archive

    NASA Astrophysics Data System (ADS)

    De Marchi, Guido; Arevalo, Maria; Merin, Bruno

    2016-01-01

    The European Hubble Archive (hereafter eHST), hosted at ESA's European Space Astronomy Centre, was released for public use in October 2015. The eHST is now fully integrated with the other ESA science archives to ensure long-term preservation of the Hubble data, consisting of more than 1 million observations from 10 different scientific instruments. The public HST data, the Hubble Legacy Archive, and the high-level science data products are now all available to scientists through a single, carefully designed and user-friendly web interface. In this talk, I will show how the eHST can help boost archival research, including how to search for sources in the field of view thanks to precise footprints projected onto the sky, how to obtain enhanced previews of imaging data and interactive spectral plots, and how to directly link observations with already published papers. To maximise the scientific exploitation of Hubble's data, the eHST offers connectivity to virtual observatory tools, easily integrates with the recently released Hubble Source Catalog, and is fully accessible through ESA's archives multi-mission interface.

  2. Archiving tools for EOS

    NASA Astrophysics Data System (ADS)

    Sindrilaru, Elvin-Alin; Peters, Andreas-Joachim; Duellmann, Dirk

    2015-12-01

    Archiving data to tape is a critical operation for any storage system, especially for the EOS system at CERN, which holds production data for all major LHC experiments. Each collaboration has an allocated quota it can use at any given time; therefore, a mechanism for archiving "stale" data is needed so that storage space is reclaimed for online analysis operations. The archiving tool that we propose for EOS aims to provide a robust client interface for moving data between EOS and CASTOR (a tape-backed storage system) while enforcing best practices when it comes to data integrity and verification. All data transfers are done using a third-party copy mechanism which ensures point-to-point communication between the source and destination, thus providing maximum aggregate throughput. Using the ZMQ message-passing paradigm and a process-based approach enabled us to achieve optimal utilisation of the resources and a stateless architecture which can easily be tuned during operation. The modular design and the implementation in a high-level language like Python have enabled us to easily extend the code base to address new demands like offering full and incremental backup capabilities.

  3. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance specific domain concepts to an implementation and producing complex technology and platform specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSL). This allows generation of a final implementation automatically from high level models. The modeling and task automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM based approach to invent a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high level model. DSLBench is implemented using Microsoft Domain Specific Language toolkit. It is integrated with the Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .Net and C#.

  4. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  5. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
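
    As an illustration of the kind of whole-facility metric such a guide tracks (the guide's own metric definitions live in its spreadsheet templates; PUE is assumed here purely as an example and is not quoted from the guide), a minimal sketch:

        # Hypothetical example metric: power usage effectiveness (PUE), i.e.
        # total facility energy divided by IT equipment energy (1.0 is ideal).
        def power_usage_effectiveness(total_facility_kwh, it_equipment_kwh):
            if it_equipment_kwh <= 0:
                raise ValueError("IT equipment energy must be positive")
            return total_facility_kwh / it_equipment_kwh

        # Example: 1.8 GWh total facility energy vs 1.2 GWh of IT load in a year
        print(round(power_usage_effectiveness(1_800_000, 1_200_000), 2))  # 1.5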

  6. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements such as Middlebury. However, indoor data sets are mainly acquired with structured light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  7. Algebraic Multigrid Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  8. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FETs) are being scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts of such devices are reviewed. They include tunneling, graphene-based, and spintronic devices, among others. The methodology to estimate the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Results of benchmarking are used to identify the more promising concepts and to map pathways for improvement of beyond-CMOS computing.

  9. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
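
    XMarq's 25 queries are not listed in the abstract; the toy sketch below only illustrates the kinds of basic operations it names (scan, aggregation, join, index access), using an in-memory SQLite database with TPC-H-like table names that are assumptions for this example.

        import sqlite3, time

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE orders   (o_orderkey INTEGER PRIMARY KEY, o_totalprice REAL);
            CREATE TABLE lineitem (l_orderkey INTEGER, l_quantity REAL);
            CREATE INDEX idx_lineitem_okey ON lineitem(l_orderkey);
        """)
        con.executemany("INSERT INTO orders VALUES (?, ?)",
                        [(i, 100.0 + i) for i in range(10_000)])
        con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                        [(i % 10_000, float(i % 7)) for i in range(50_000)])

        queries = {
            "scan":        "SELECT COUNT(*) FROM lineitem",
            "aggregation": "SELECT l_orderkey, SUM(l_quantity) FROM lineitem GROUP BY l_orderkey",
            "join":        "SELECT COUNT(*) FROM orders o JOIN lineitem l ON o.o_orderkey = l.l_orderkey",
            "index":       "SELECT * FROM lineitem WHERE l_orderkey = 4242",
        }
        for name, sql in queries.items():           # time each basic operation
            t0 = time.perf_counter()
            con.execute(sql).fetchall()
            print(f"{name:12s} {time.perf_counter() - t0:.4f} s")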

  10. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations require that the design of new fuel cycles for nuclear power installations be supported by a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes of the MCU type or others. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work is practically finished on forming a list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  11. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic ROC curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks. PMID:21129820

  12. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  13. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  14. Archives in Pakistan

    ERIC Educational Resources Information Center

    Haider, Syed Jalaluddin

    2004-01-01

    This article traces the origins and development of archives in Pakistan. The focus is on the National Archives of Pakistan, but also includes a discussion of the archival collections at the provincial and district levels. This study further examines the state of training facilities available to Pakistani archivists. Archival development has been…

  15. Reference Services in Archives.

    ERIC Educational Resources Information Center

    Whalen, Lucille; And Others

    1986-01-01

    This 16-article issue focuses on history, policy, services, users, organization, evaluation, and automation of the archival reference process. Collections at academic research libraries, a technical university, Board of Education, business archives, a bank, labor and urban archives, a manuscript repository, religious archives, and regional history…

  16. HEASARC Software Archive

    NASA Technical Reports Server (NTRS)

    White, Nicholas (Technical Monitor); Murray, Stephen S.

    2003-01-01

    multiple world coordinate systems, three dimensional event file binning, image smoothing, region groups and tags, the ability to save images in a number of image formats (such as JPEG, TIFF, PNG, FITS), improvements in support for integrating external analysis tools, and support for the virtual observatory. In particular, a full-featured web browser has been implemented within DS9. This provides support for full access to HEASARC archive sites such as SKYVIEW and W3BROWSE, in addition to other astronomical archive sites such as MAST, CHANDRA, ADS, NED, SIMBAD, IRAS, NVRO, SAO TDC, and FIRST. From within DS9, the archives can be searched, and FITS images, plots, spectra, and journal abstracts can be referenced, downloaded and displayed. The web browser provides the basis for the built-in help facility. All DS9 documentation, including the reference manual, FAQ, Known Features, and contact information, is now available to the user without the need for external display applications. New versions of DS9 may be downloaded and installed using this facility. Two important features used in the analysis of high energy astronomical data have been implemented in the past year. The first is support for binning photon event data in three dimensions. By binning the third dimension in time or energy, users are easily able to detect variable x-ray sources and identify other physical properties of their data. Second, a number of fast smoothing algorithms have been implemented in DS9, which allow users to smooth their data in real time. Algorithms for boxcar, tophat, and Gaussian smoothing are supported.

  17. The ``One Archive'' for JWST

    NASA Astrophysics Data System (ADS)

    Greene, G.; Kyprianou, M.; Levay, K.; Sienkewicz, M.; Donaldson, T.; Dower, T.; Swam, M.; Bushouse, H.; Greenfield, P.; Kidwell, R.; Wolfe, D.; Gardner, L.; Nieto-Santisteban, M.; Swade, D.; McLean, B.; Abney, F.; Alexov, A.; Binegar, S.; Aloisi, A.; Slowinski, S.; Gousoulin, J.

    2015-09-01

    The next generation for the Space Telescope Science Institute data management system is gearing up to provide a suite of archive system services supporting the operation of the James Webb Space Telescope. We are now completing the initial stage of integration and testing for the preliminary ground system builds of the JWST Science Operations Center which includes multiple components of the Data Management Subsystem (DMS). The vision for astronomical science and research with the JWST archive introduces both solutions to formal mission requirements and innovation derived from our existing mission systems along with the collective shared experience of our global user community. We are building upon the success of the Hubble Space Telescope archive systems, standards developed by the International Virtual Observatory Alliance, and collaborations with our archive data center partners. In proceeding forward, the “one archive” architectural model presented here is designed to balance the objectives for this new and exciting mission. The STScI JWST archive will deliver high quality calibrated science data products, support multi-mission data discovery and analysis, and provide an infrastructure which supports bridges to highly valued community tools and services.

  18. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  19. Benchmarking. A Guide for Educators.

    ERIC Educational Resources Information Center

    Tucker, Sue

    This book offers strategies for enhancing a school's teaching and learning by using benchmarking, a team-research and data-driven process for increasing school effectiveness. Benchmarking enables professionals to study and know their systems and continually improve their practices. The book is designed to lead a team step by step through the…

  20. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  1. FireHose Streaming Benchmarks

    Energy Science and Technology Software Center (ESTSC)

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
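
    A minimal sketch of the two-part structure described above follows; the datum format and the anomaly rule are invented for illustration, whereas the real suite defines both precisely.

        import random

        def generator(n_datums, anomaly_rate=0.001):
            """Yield (key, value) datums; a small fraction are planted anomalies."""
            for i in range(n_datums):
                value = 999_999 if random.random() < anomaly_rate else random.randint(0, 1000)
                yield (f"key{i % 500}", value)

        def analytic(stream, threshold=100_000):
            """Return the keys of datums whose value exceeds the threshold."""
            return [key for key, value in stream if value > threshold]

        anomalies = analytic(generator(100_000))
        print(f"flagged {len(anomalies)} anomalous datums")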

  2. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  3. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. PMID:22237134

  4. PSO-based multiobjective optimization with dynamic population size and adaptive local archives.

    PubMed

    Leong, Wen-Fung; Yen, Gary G

    2008-10-01

    Recently, various multiobjective particle swarm optimization (MOPSO) algorithms have been developed to efficiently and effectively solve multiobjective optimization problems. However, existing MOPSO designs generally adopt the notion of "estimating" a fixed population size that is sufficient to explore the search space without incurring excessive computational complexity. To address this issue, this paper proposes the integration of a dynamic population strategy within the multiple-swarm MOPSO. The proposed algorithm is named dynamic population multiple-swarm MOPSO. An additional feature, adaptive local archives, is designed to improve the diversity within each swarm. Performance metrics and benchmark test functions are used to examine the performance of the proposed algorithm compared with that of five selected MOPSOs and two selected multiobjective evolutionary algorithms. In addition, the computational cost of the proposed algorithm is quantified and compared with that of the selected MOPSOs. The proposed algorithm shows competitive results with improved diversity and convergence and demands less computational cost. PMID:18784011

  5. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design

    PubMed Central

    Pache, Roland A.; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J.; Smith, Colin A.; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a “best practice” set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  6. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  7. European distributed seismological data archives infrastructure: EIDA

    NASA Astrophysics Data System (ADS)

    Clinton, John; Hanka, Winfried; Mazza, Salvatore; Pederson, Helle; Sleeman, Reinoud; Stammler, Klaus; Strollo, Angelo

    2014-05-01

    The European Integrated waveform Data Archive (EIDA) is a distributed Data Center system within ORFEUS that (a) securely archives seismic waveform data and related metadata gathered by European research infrastructures, and (b) provides transparent access to the archives for the geosciences research communities. EIDA was founded in 2013 by the ORFEUS Data Center, GFZ, RESIF, ETH, INGV and BGR to ensure the sustainability of a distributed archive system, the implementation of standards (e.g. FDSN StationXML, FDSN webservices), and the coordination of new developments. Under the mandate of the ORFEUS Board of Directors and Executive Committee, the founding group is responsible for steering and maintaining the technical developments and organization of the European distributed seismic waveform data archive and its integration within broader multidisciplinary frameworks like EPOS. EIDA currently offers uniform data access to unrestricted data from 8 European archives (www.orfeus-eu.org/eida), linked by the Arclink protocol, hosting data from 75 permanent networks (1800+ stations) and 33 temporary networks (1200+ stations). Moreover, each archive may also provide unique, restricted datasets. A web interface, developed at GFZ, offers interactive access to different catalogues (EMSC, GFZ, USGS) and EIDA waveform data. Clients and toolboxes like arclink_fetch and ObsPy can connect directly to any EIDA node to collect data. Current developments are directed to the implementation of quality parameters and strong motion parameters.
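
    For example, waveform data can be pulled from an EIDA node with ObsPy through the FDSN web services mentioned above; in this sketch the chosen node ("ETH"), network, station, channel, and time window are placeholders, so data availability is not guaranteed.

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("ETH")                      # one of the EIDA nodes
        t0 = UTCDateTime("2014-01-01T00:00:00")
        stream = client.get_waveforms(network="CH", station="DAVOX", location="*",
                                      channel="HHZ", starttime=t0, endtime=t0 + 600)
        print(stream)                               # summary of the retrieved traces
        stream.plot()                               # quick look at the 10-minute window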

  8. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PMID:25314367
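
    In the same spirit (though with synthetic numbers rather than the study's 147,328 correlations), empirical benchmarks can be derived as tertile cut points of a distribution of absolute correlations:

        import numpy as np

        rng = np.random.default_rng(0)
        abs_r = np.abs(rng.beta(1.5, 6.0, size=10_000))   # synthetic, skewed |r| values

        small, medium = np.percentile(abs_r, [33.3, 66.7])
        print(f"empirical small/medium boundary: r = {small:.2f}")
        print(f"empirical medium/large boundary: r = {medium:.2f}")
        # Compare with Cohen's conventional 0.10 / 0.30 / 0.50 anchors.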

  9. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096
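
    As a rough sketch of the approach (not the authors' actual test suite), the same script can be run on bare metal and inside a virtual machine and the timings compared; only the integer and floating point metrics are shown here, and the loop sizes are arbitrary.

        import time

        def time_loop(fn, n=5_000_000):
            t0 = time.perf_counter()
            fn(n)
            return time.perf_counter() - t0

        def int_work(n):                 # integer throughput proxy
            acc = 0
            for i in range(n):
                acc += i * 3 // 2
            return acc

        def float_work(n):               # floating point throughput proxy
            acc = 0.0
            for i in range(n):
                acc += (i * 1.0000001) / 3.0
            return acc

        print(f"integer loop: {time_loop(int_work):.2f} s")
        print(f"float loop  : {time_loop(float_work):.2f} s")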

  10. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  11. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
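
    A tiny numeric illustration of the point (query times are synthetic): speeding up only the shortest query barely moves the arithmetic mean but collapses the geometric mean.

        from math import prod

        def arithmetic_mean(xs): return sum(xs) / len(xs)
        def geometric_mean(xs):  return prod(xs) ** (1 / len(xs))

        baseline = [100.0, 10.0, 1.0]    # seconds for three queries
        tuned    = [100.0, 10.0, 0.01]   # only the shortest query was "tuned"

        for name, mean in [("arithmetic", arithmetic_mean), ("geometric", geometric_mean)]:
            print(f"{name:10s} {mean(baseline):7.2f} -> {mean(tuned):7.2f}")
        # arithmetic  37.00 -> 36.67  (barely moves)
        # geometric   10.00 ->  2.15  (drops sharply), which is why the choice of
        # metric can steer tuning effort toward atypical queries.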

  12. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and

  13. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  14. Data-Intensive Benchmarking Suite

    Energy Science and Technology Software Center (ESTSC)

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
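
    The basic graph-searching kernel in such a suite is breadth-first search; the toy sketch below (with an invented five-edge graph, far smaller than the suite's real inputs) shows the serial version of that kernel.

        from collections import deque

        def bfs(adj, source):
            """Return hop distances from source for every reachable node."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj.get(u, []):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
        print(bfs(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}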

  15. The Role of Data Archives in Synoptic Solar Physics

    NASA Astrophysics Data System (ADS)

    Reardon, Kevin

    The detailed study of solar cycle variations requires analysis of recorded datasets spanning many years of observations, that is, a data archive. The use of digital data, combined with powerful database server software, gives such archives new capabilities to provide, quickly and flexibly, selected pieces of information to scientists. Use of standardized protocols will allow multiple databases, independently maintained, to be seamlessly joined, allowing complex searches spanning multiple archives. These data archives also benefit from being developed in parallel with the telescope itself, which helps to assure data integrity and to provide close integration between the telescope and archive. Development of archives that can guarantee long-term data availability and strong compatibility with other projects makes solar-cycle studies easier to plan and realize.

  16. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
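
    The merging step described above amounts to a weighted sum: multiply each abstract-machine operation count (the program characterization) by the measured time per operation (the machine characterization). A sketch with hypothetical operation names and timings:

        machine_profile = {        # seconds per abstract-machine operation (measured)
            "flop_add": 2.0e-9,
            "flop_mul": 3.0e-9,
            "mem_load": 5.0e-9,
            "branch":   1.0e-9,
        }
        program_profile = {        # operation counts for one benchmark program
            "flop_add": 4.0e9,
            "flop_mul": 3.5e9,
            "mem_load": 6.0e9,
            "branch":   1.0e9,
        }

        estimated_seconds = sum(machine_profile[op] * count
                                for op, count in program_profile.items())
        print(f"estimated execution time: {estimated_seconds:.1f} s")   # 49.5 s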

  17. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  18. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  19. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  20. Researching Television News Archives.

    ERIC Educational Resources Information Center

    Wilhoit, Frances Goins

    To demonstrate the uses and efficiency of major television news archives, a study was conducted to describe major archival programs and to compare the Vanderbilt University Television News Archives and the CBS News Index. Network coverage of an annual news event, the 1983 State of the Union address, is traced through entries in both. The findings…

  1. My Dream Archive

    ERIC Educational Resources Information Center

    Phelps, Christopher

    2007-01-01

    In this article, the author shares his experience as he traveled from island to island with a single objective--to reach the archives. He found out that not all archives are the same. In recent months, his daydreaming in various facilities has yielded a recurrent question on what would constitute the Ideal Archive. What follows, in no particular…

  2. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  3. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzberg, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.

  4. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks. PMID:26548140

  5. Soil archives of a Fluvisol: Subsurface analysis and soil history of the medieval city centre of Vlaardingen, the Netherlands - an integral approach

    NASA Astrophysics Data System (ADS)

    Kluiving, Sjoerd; De Ridder, Tim; Van Dasselaar, Marcel; Roozen, Stan; Prins, Maarten; Van Mourik, Jan

    2016-04-01

    In Medieval times the city of Vlaardingen (the Netherlands) was strategically located at the confluence of three rivers, the Meuse, the Merwede and the Vlaarding. A church of the early 8th century was already located here. In a short period of time Vlaardingen developed into an international trading place, the most important place in the former county of Holland. Starting from the 11th century the river Meuse threatened to flood the settlement. These floods have been registered in the archives of the Fluvisol and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of this vulnerable soil archive, an extensive interdisciplinary research effort (76 mechanical drill holes, grain size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) started in 2011 to gain knowledge of the sedimentological and pedological subsurface of the mound as well as of the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil descriptive, micromorphological and geochemical (XRF) analysis. The soil sequence of 5 meters thickness exhibits a complex mix of 'natural' as well as 'anthropogenic' layering and initial soil formation that makes it possible to distinguish relatively stable periods from periods with active sedimentation. In this paper the results of this large-scale project are demonstrated in a number of cross-sections with interrelated geological, pedological and archaeological stratification. The distinction between natural and anthropogenic layering is made on the basis of the occurrence of the chemical elements phosphorus and potassium. A series of four stratigraphic / sedimentary units records the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, we assume that the medieval landscape was drowned while it was inhabited in the 12th century AD. After a

  6. Web-based medical image archive system

    NASA Astrophysics Data System (ADS)

    Suh, Edward B.; Warach, Steven; Cheung, Huey; Wang, Shaohua A.; Tangiral, Phanidral; Luby, Marie; Martino, Robert L.

    2002-05-01

    This paper presents a Web-based medical image archive system in three-tier, client-server architecture for the storage and retrieval of medical image data, as well as patient information and clinical data. The Web-based medical image archive system was designed to meet the need of the National Institute of Neurological Disorders and Stroke for a central image repository to address questions of stroke pathophysiology and imaging biomarkers in stroke clinical trials by analyzing images obtained from a large number of clinical trials conducted by government, academic and pharmaceutical industry researchers. In the database management-tier, we designed the image storage hierarchy to accommodate large binary image data files that the database software can access in parallel. In the middle-tier, a commercial Enterprise Java Bean server and secure Web server manages user access to the image database system. User-friendly Web-interfaces and applet tools are provided in the client-tier for easy access to the image archive system over the Internet. Benchmark test results show that our three-tier image archive system yields fast system response time for uploading, downloading, and querying the image database.

  7. ESA Science Archives and associated VO activities

    NASA Astrophysics Data System (ADS)

    Arviset, Christophe; Baines, Deborah; Barbarisi, Isa; Castellanos, Javier; Cheek, Neil; Costa, Hugo; Fajersztejn, Nicolas; Gonzalez, Juan; Fernandez, Monica; Laruelo, Andrea; Leon, Ignacio; Ortiz, Inaki; Osuna, Pedro; Salgado, Jesus; Tapiador, Daniel

    ESA's European Space Astronomy Centre (ESAC), near Madrid, Spain, hosts most of ESA space-based missions' scientific archives, in planetary science (Mars Express, Venus Express, Rosetta, Huygens, Giotto, Smart-1, all in the ESA Planetary Science Archive), in astronomy (XMM-Newton, Herschel, ISO, Integral, Exosat, Planck) and in solar physics (Soho). All these science archives are operated by a dedicated Science Archives and Virtual Observatory Team (SAT) at ESAC, enabling common and efficient design, development, operations and maintenance of the archive software systems. This also ensures long-term preservation and availability of these science archives, as a sustainable service to the science community. ESA space science data can be accessed through powerful and user-friendly user interfaces, as well as from a machine-scriptable interface and through VO interfaces. Virtual Observatory activities are also fully part of ESA's archiving strategy, and ESA is a very active partner in VO initiatives in Europe through Euro-VO AIDA and EuroPlanet and worldwide through the IVOA (International Virtual Observatory Alliance) and the IPDA (International Planetary Data Alliance).

  8. Introduction: Consider the Archive.

    PubMed

    Yale, Elizabeth

    2016-03-01

    In recent years, historians of archives have paid increasingly careful attention to the development of state, colonial, religious, and corporate archives in the early modern period, arguing that power (of various kinds) was mediated and extended through material writing practices in and around archives. The history of early modern science, likewise, has tracked the production of scientific knowledge through the inscription and circulation of written records within and between laboratories, libraries, homes, and public spaces, such as coffeehouses and bookshops. This Focus section interrogates these two bodies of scholarship against each other. The contributors ask how archival digitization is transforming historical practice; how awareness of archival histories can help us to reconceptualize our work as historians of science; how an archive's layered purposes, built up over centuries of record keeping, can shape the historical narratives we write; and how scientific knowledge emerging from archives gained authority and authenticity. PMID:27197412

  9. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  10. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  11. Archive and records management-Fiscal year 2010 offline archive media trade study

    USGS Publications Warehouse

    Bodoh, Tom; Boettcher, Ken; Gacke, Ken; Greenhagen, Cheryl; Engelbrecht, Al

    2010-01-01

    This document is a trade study comparing offline digital archive storage technologies. The document compares and assesses several technologies and recommends which technologies could be deployed as the next generation standard for the U.S. Geological Survey (USGS). Archives must regularly migrate to the next generation of digital archive technology, and the technology selected must maintain data integrity until the next migration. This document is the fiscal year 2010 (FY10) revision of a study completed in FY01 and revised in FY03, FY04, FY06, and FY08.

  12. Soil archives of a Fluvisol: subsurface analysis and soil history of the medieval city centre of Vlaardingen, the Netherlands - an integral approach

    NASA Astrophysics Data System (ADS)

    Kluiving, Sjoerd; de Ridder, Tim; van Dasselaar, Marcel; Roozen, Stan; Prins, Maarten

    2016-06-01

    The medieval city of Vlaardingen (the Netherlands) was strategically located on the confluence of three rivers, the Maas, the Merwede, and the Vlaarding. A church of the early 8th century AD was already located here. In a short period of time, Vlaardingen developed in the 11th century AD into an international trading place and into one of the most important places in the former county of Holland. Starting from the 11th century AD, the river Maas repeatedly threatened to flood the settlement. The flood dynamics were registered in Fluvisol archives and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of these vulnerable soil archives, an extensive interdisciplinary research effort (76 mechanical drill holes, grain size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) started in 2011 to gain knowledge on the sedimentological and pedological subsurface of the settlement mound as well as on the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil description, micromorphological, and geochemical (XRF - X-ray fluorescence) analysis. The soil sequence of 5 m thickness exhibits a complex mix of "natural" as well as "anthropogenic" layering and initial soil formation that enables us to make a distinction between relatively stable periods and periods with active sedimentation. In this paper the results of this interdisciplinary project are demonstrated in a number of cross-sections with interrelated geological, pedological, and archaeological stratification. A distinction between natural and anthropogenic layering is made on the basis of the occurrence of the chemical elements phosphorus and potassium. A series of four stratigraphic and sedimentary units records the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, in

  13. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  14. VizieR Online Data Catalog: Gaia FGK benchmark stars: abundances (Jofre+, 2015)

    NASA Astrophysics Data System (ADS)

    Jofre, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Masseron, T.; Nordlander, T.; Chemin, L.; Worley, C. C.; van Eck, S.; Hourihane, A.; Gilmore, G.; Adibekyan, V.; Bergemann, M.; Cantat-Gaudin, T.; Delgado-Mena, E.; Gonzalez Hernandez, J. I.; Guiglion, G.; Lardo, C.; de Laverny, P.; Lind, K.; Magrini, L.; Mikolaitis, S.; Montes, D.; Pancino, E.; Recio-Blanco, A.; Sordo, R.; Sousa, S.; Tabernero, H. M.; Vallenari, A.

    2015-07-01

    As in our previous work on the subject, we built a library of high- resolution spectra of the GBS, using our own observations on the NARVAL spectrograph at Pic du Midi in addition to archived data. The abundance of alpha and iron peak elements of the Gaia FGK benchmark stars is determined by combining 8 methods. The Tables indicate the elemental abundances determined for each star, element, line and method. (36 data files).

  15. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935

  16. Benchmarking the Sandia Pulsed Reactor III cavity neutron spectrum for electronic parts calibration and testing

    SciTech Connect

    Kelly, J.G.; Griffin, P.J.; Fan, W.C.

    1993-08-01

    The SPR III bare cavity spectrum and integral parameters have been determined with 24 measured spectrum sensor responses and an independent, detailed, MCNP transport calculation. This environment qualifies as a benchmark field for electronic parts testing.

  17. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program, and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project. (authors)

  18. Sustainable value assessment of farms using frontier efficiency benchmarks.

    PubMed

    Van Passel, Steven; Van Huylenbroeck, Guido; Lauwers, Ludwig; Mathijs, Erik

    2009-07-01

    Appropriate assessment of firm sustainability facilitates actor-driven processes towards sustainable development. The methodology in this paper builds further on two proven methodologies for the assessment of sustainability performance: it combines the sustainable value approach with frontier efficiency benchmarks. The sustainable value methodology tries to relate firm performance to the use of different resources. This approach assesses contributions to corporate sustainability by comparing firm resource productivity with the resource productivity of a benchmark, and this for all resources considered. The efficiency is calculated by estimating the production frontier indicating the maximum feasible production possibilities. In this research, the sustainable value approach is combined with efficiency analysis methods to benchmark sustainability assessment. In this way, the production theoretical underpinnings of efficiency analysis enrich the sustainable value approach. The methodology is presented using two different functional forms: the Cobb-Douglas and the translog functional forms. The simplicity of the Cobb-Douglas functional form as benchmark is very attractive but it lacks flexibility. The translog functional form is more flexible but has the disadvantage that it requires a lot of data to avoid estimation problems. Using frontier methods for deriving firm specific benchmarks has the advantage that the particular situation of each company is taken into account when assessing sustainability. Finally, we showed that the methodology can be used as an integrative sustainability assessment tool for policy measures. PMID:19553001
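
    For orientation, the two functional forms named in the abstract are standard choices in frontier efficiency analysis. The sketch below uses generic notation (inputs x_k, output y, noise v, inefficiency u) that is assumed here rather than taken from the paper:

```latex
% Cobb-Douglas frontier: log-linear, parsimonious, but restrictive
\ln y_i = \beta_0 + \sum_{k=1}^{K} \beta_k \ln x_{ki} + v_i - u_i

% Translog frontier: adds second-order terms, flexible, but data-hungry
\ln y_i = \beta_0 + \sum_{k=1}^{K} \beta_k \ln x_{ki}
        + \tfrac{1}{2} \sum_{k=1}^{K} \sum_{l=1}^{K} \beta_{kl} \ln x_{ki}\,\ln x_{li}
        + v_i - u_i
```

    The estimated frontier supplies the benchmark resource productivities against which each farm's sustainable value contribution is assessed; the quadratic terms of the translog form are what drive its larger data requirements.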

  19. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  20. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  1. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  2. PyMPI Dynamic Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the dynamic linking and loading (DLL) requirements of Python-based scientific applications. This benchmark was developed to add a workload to our testing environment, a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, adding C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subjected to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. The ability to produce and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as the OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  3. Real-Time Benchmark Suite

    Energy Science and Technology Software Center (ESTSC)

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.

  4. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
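
    As a small illustration of the kind of metric such a suite reports (a generic sketch, not code from the paper; the dictionary inputs and the 10 bp tolerance are assumptions), the accuracy of a mapper on synthetic reads with known origins can be scored as follows:

```python
# Minimal sketch: accuracy of a read mapper on synthetic reads whose true
# origins are known. Inputs are plain dicts; a real benchmark would parse
# SAM/BAM output and the read simulator's truth file instead.

def mapping_accuracy(truth, mapped, tolerance=10):
    """Score reads placed within `tolerance` bp of their true position.

    truth  : dict read_id -> (chrom, true_position)
    mapped : dict read_id -> (chrom, mapped_position); unmapped reads absent
    """
    correct = 0
    for read_id, (chrom, true_pos) in truth.items():
        hit = mapped.get(read_id)
        if hit is not None and hit[0] == chrom and abs(hit[1] - true_pos) <= tolerance:
            correct += 1
    recall = correct / len(truth)                        # correct / all simulated reads
    precision = correct / len(mapped) if mapped else 0.0  # correct / all reported alignments
    return recall, precision


if __name__ == "__main__":
    truth = {"r1": ("chr1", 100), "r2": ("chr1", 5000), "r3": ("chr2", 42)}
    mapped = {"r1": ("chr1", 103), "r2": ("chr1", 9999)}  # r3 unmapped, r2 misplaced
    print(mapping_accuracy(truth, mapped))  # (0.333..., 0.5)
```

    A throughput test would instead time the mapper invocation itself; accuracy and throughput of this kind are the sorts of measurements the paper compares across the nine tools.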

  5. The GTC Public Archive

    NASA Astrophysics Data System (ADS)

    Alacid, J. Manuel; Solano, Enrique

    2015-12-01

    The Gran Telescopio Canarias (GTC) archive is operational since November 2011. The archive, maintained by the Data Archive Unit at CAB in the framework of the Spanish Virtual Observatory project, provides access to both raw and science ready data and has been designed in compliance with the standards defined by the International Virtual Observatory Alliance (IVOA) to guarantee a high level of data accessibility and handling. In this presentation I will describe the main capabilities the GTC archive offers to the community, in terms of functionalities and data collections, to carry out an efficient scientific exploitation of GTC data.

  6. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    SciTech Connect

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. This report is an update of three prior reports (Jones et al
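
    The screening rule described above is simple to state precisely; the sketch below is a generic illustration (all chemical names and numeric values are placeholders, not values from the report):

```python
# Minimal sketch of benchmark screening: a chemical is retained for further
# assessment if its measured concentration (or, for non-detects, the reported
# detection limit) exceeds the lowest available benchmark for that medium.

def screen_contaminants(measurements, benchmarks):
    """Return the chemicals that warrant further assessment.

    measurements : dict chemical -> measured concentration or detection limit
    benchmarks   : dict chemical -> list of benchmark values (same units)
    """
    retained = []
    for chemical, concentration in measurements.items():
        bench_values = benchmarks.get(chemical)
        if not bench_values:
            retained.append(chemical)   # no benchmark available: cannot screen out
            continue
        if concentration > min(bench_values):
            retained.append(chemical)   # exceeds the lower benchmark
    return retained


if __name__ == "__main__":
    # Placeholder values for illustration only.
    measurements = {"cadmium": 1.5, "phenanthrene": 0.02, "zinc": 90.0}
    benchmarks = {"cadmium": [0.6, 1.2], "phenanthrene": [0.087], "zinc": [120.0, 150.0]}
    print(screen_contaminants(measurements, benchmarks))  # ['cadmium']
```

    When several benchmark sets are available for a chemical, one conservative way to combine them is to repeat the comparison for each set and retain the chemical if any lower benchmark is exceeded.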

  7. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  8. The Growth of Benchmarking in Higher Education.

    ERIC Educational Resources Information Center

    Schofield, Allan

    2000-01-01

    Benchmarking is used in higher education to improve performance by comparison with other institutions. Types used include internal, external competitive, external collaborative, external transindustry, and implicit. Methods include ideal type (or gold) standard, activity-based benchmarking, vertical and horizontal benchmarking, and comparative…

  9. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  10. Gaia FGK benchmark stars: new candidates at low metallicities

    NASA Astrophysics Data System (ADS)

    Hawkins, K.; Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Casagrande, L.; Gilmore, G.; Lind, K.; Magrini, L.; Masseron, T.; Pancino, E.; Randich, S.; Worley, C. C.

    2016-07-01

    Context. We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recent proposed set of Gaia FGK benchmark stars has up to five metal-poor stars but no recommended stars within -2.0 < [Fe/H] < -1.0 dex. However, this metallicity regime is critical to calibrate properly. Aims: In this paper, we aim to add candidate Gaia benchmark stars inside of this metal-poor gap. We began with a sample of 21 metal-poor stars which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. Methods: The procedure used to determine the stellar parameters was similar to the previous works in this series for consistency. The difference was to homogeneously determine the angular diameter and effective temperature (Teff) of all of our stars using the Infrared Flux Method utilizing multi-band photometry. The surface gravity (log g) was determined through fitting stellar evolutionary tracks. The [Fe/H] was determined using four different spectroscopic methods fixing the Teff and log g from the values determined independent of spectroscopy. Results: We discuss, star-by-star, the quality of each parameter including how it compares to literature, how it compares to a spectroscopic run where all parameters are free, and whether Fe i ionisation-excitation balance is achieved. Conclusions: From the 10 stars, we recommend a sample of five new metal-poor benchmark candidate stars which have consistent Teff, log g, and [Fe/H] determined through several means. These stars, which are within -1.3 < [Fe/H] < -1.0, can be used for calibration and validation purpose of stellar parameter and abundance pipelines and should be of highest

  11. Cancer imaging archive available

    Cancer.gov

    NCI’s Cancer Imaging Program has inaugurated The Cancer Imaging Archive (TCIA), a web-accessible and unique clinical imaging archive linked to The Cancer Genome Atlas (TCGA) tissue repository. It contains a large proportion of original, pre-surgical MRIs from cases that have been genomically characterized in TCGA.

  12. [Church Archives; Selected Papers.

    ERIC Educational Resources Information Center

    Abraham, Terry; And Others

    Papers presented at the Institute which were concerned with the keeping of church archives are entitled: "St. Mary's Episcopal Church, Eugene, Oregon;" "Central Lutheran Church, Eugene, Oregon: A History;" "Mormon Church Archives: An Overview;" "Sacramental Records of St. Mary's Catholic Church, Eugene, Oregon;" "Chronology of St. Mary's Catholic Church,…

  13. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  14. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  15. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  16. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  17. MPI Multicore Torus Communication Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings, the latter to survey the aggregate bandwidths that can be achieved with varying node mappings.
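
    TorusTest itself is not reproduced here; the following is a minimal sketch of the underlying measurement, assuming mpi4py and NumPy are available and pairing even/odd ranks rather than mapping tasks onto real torus coordinates:

```python
# Minimal sketch of a link-bandwidth measurement with mpi4py: pairs of ranks
# exchange a fixed-size buffer and report the achieved bandwidth. This pairs
# rank 2i with rank 2i+1; a torus benchmark such as TorusTest would instead
# map tasks to torus coordinates and exercise all six links of a node.
# Run with e.g.: mpiexec -n 4 python bandwidth_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 8 * 1024 * 1024           # 8 MiB message
reps = 20
sendbuf = np.ones(nbytes, dtype=np.uint8)
recvbuf = np.empty(nbytes, dtype=np.uint8)
partner = rank + 1 if rank % 2 == 0 else rank - 1

if partner < comm.Get_size():
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        comm.Sendrecv(sendbuf, dest=partner, recvbuf=recvbuf, source=partner)
    elapsed = MPI.Wtime() - t0
    # Each iteration moves nbytes in each direction between the pair.
    bandwidth = 2 * reps * nbytes / elapsed / 1e9
    print(f"rank {rank} <-> rank {partner}: {bandwidth:.2f} GB/s aggregate")
```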

  18. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

  19. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with predictions from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  20. An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.

    PubMed

    Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song

    2015-09-01

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster through optimizing all the sub-problems simultaneously, two issues are not fully addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among sub-problems is lacking. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775
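
    The following is a schematic sketch of the double-archive idea as described in the abstract, not the authors' algorithm; the test problem, variation operators, weights, and archive sizes are placeholder choices:

```python
# Schematic sketch of a two-level archive for multiobjective search, loosely
# following the abstract (global archive plus per-sub-problem sub-archives,
# with self- and cross-reproduction). All numerical choices are placeholders.
import random

def evaluate(x):
    # Toy bi-objective minimization problem on [0, 1]: f1 = x, f2 = (1 - x)^2.
    return (x, (1.0 - x) ** 2)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def update_global(archive, candidate, cap=50):
    objs = evaluate(candidate)
    if any(dominates(evaluate(a), objs) for a in archive):
        return                                           # candidate is dominated
    archive[:] = [a for a in archive if not dominates(objs, evaluate(a))]
    archive.append(candidate)
    if len(archive) > cap:
        archive.pop(random.randrange(len(archive)))      # crude truncation

def update_sub(sub_archive, candidate, weight, cap=5):
    sub_archive.append(candidate)
    # Keep the best few individuals for this sub-problem's weighted objective.
    sub_archive.sort(key=lambda x: sum(w * f for w, f in zip(weight, evaluate(x))))
    del sub_archive[cap:]

def mutate(x, sigma=0.1):
    return min(1.0, max(0.0, x + random.gauss(0.0, sigma)))

weights = [(w, 1.0 - w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]   # decomposition
global_archive = [random.random() for _ in range(10)]
sub_archives = [[random.random() for _ in range(3)] for _ in weights]

for generation in range(100):
    # Self-reproduction within the global archive.
    offspring = [mutate(random.choice(global_archive)) for _ in range(10)]
    # Cross-reproduction between the global archive and each sub-archive.
    for sub in sub_archives:
        parent = 0.5 * (random.choice(global_archive) + random.choice(sub))
        offspring.append(mutate(parent))
    # Both archive levels are updated with the reproduced individuals.
    for child in offspring:
        update_global(global_archive, child)
        for sub, weight in zip(sub_archives, weights):
            update_sub(sub, child, weight)

print(sorted(evaluate(x) for x in global_archive)[:5])
```

    Even in this toy form the intent of the structure is visible: the global archive keeps non-dominated solutions for convergence toward the Pareto front, while the weighted sub-archives preserve diversity across the decomposed sub-problems.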

  1. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  2. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context for research projects. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs. All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to

  3. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  4. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    NASA Technical Reports Server (NTRS)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
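
    For context, the summation, Boltzmann averaging, and energy integration described above correspond to a standard expression from rate theory (quoted here as a textbook relation, not from the paper itself):

```latex
N_{J=0}(E) \;=\; \sum_{n,\,n'} P^{J=0}_{n' \leftarrow n}(E),
\qquad
k_{J=0}(T) \;=\; \frac{1}{h\, Q_r(T)}
\int_{0}^{\infty} N_{J=0}(E)\, e^{-E/k_B T}\, \mathrm{d}E ,
```

    where the P^{J=0} are the state-to-state reaction probabilities at zero total angular momentum, N_{J=0}(E) is the resulting cumulative reaction probability, and Q_r(T) is the reactant partition function per unit volume. The full thermal rate coefficient additionally sums such contributions over total angular momentum J with a (2J+1) degeneracy factor; the Boltzmann averaging and energy integration are carried out numerically, as the abstract describes.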

  5. The Golosyiv plate archive digitisation

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.; Sergeev, A. V.; Pakuliak, L. K.; Yatsenko, A. I.

    2007-08-01

    The plate archive of the Main Astronomical Observatory of the National Academy of Sciences of Ukraine (Golosyiv, Kyiv) includes about 85 000 plates which have been taken in various observational projects during 1950-2005. Among them are about 25 000 direct plates of northern sky areas and more than 600 000 plates containing spectra of stellar, planetary and active solar formations. Direct plates have a limiting magnitude of 14.0-16.0 mag. Since 2002 we have been organising the storage, safeguarding, cataloguing and digitization of the plate archive. The very first task was to create an automated system for the detection of astronomical objects and phenomena, the search for optical counterparts in the directions of gamma-ray bursts, research on long-period, flare and other variable stars, the search for and rediscovery of asteroids, comets and other Solar System bodies to improve the elements of their orbits, informational support of CCD observations and space projects, etc. To provide higher efficiency for this work we have prepared computer-readable catalogues and a database for 250 000 direct wide-field plates. The catalogues have now been adapted to the Wide Field Plate Database (WFPDB) format and integrated into this world database. The next step will be the adaptation of our catalogues, database and images to the standards of the IVOA. Some magnitude and positional accuracy estimations for Golosyiv archive plates have been done. The photometric characteristics of the images of NGC 6913 cluster stars on two plates of Golosyiv's double wide-angle astrograph have been determined. Very good conformity of the photometric characteristics, with external accuracies of 0.13 and 0.15 mag, has been found. The positional accuracy was investigated with an A3+ format flatbed scanner (Microtek ScanMaker 9800XL TMA). It shows that the scanner has no detectable systematic errors on the X-axis, and errors of ± 15 μm on the Y-axis. The final positional errors are about ± 2 μm (

  6. O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world.

    PubMed

    Inchingolo, Paolo; Beltrame, Marco; Bosazzi, Pierpaolo; Cicuta, Davide; Faustini, Giorgio; Mininel, Stefano; Poli, Andrea; Vatta, Federica

    2006-01-01

    After many years of study, development and experimentation of open PACS and Image workstation solutions including management of medical data and signals (DPACS project), the research and development at the University of Trieste have recently been directed towards Java-based, IHE compliant and multi-purpose servers and clients. In this paper an original Image-Data Manager/Archiver (O3-DPACS) and a universal Image-Data Display (HDW2) are described. O3-DPACS is also part of a new project called Open Three (O3) Consortium, promoting Open Source adoption in e-health at European and world-wide levels. This project aims to give a contribution to the development of e-health through the study of Healthcare Information Systems and the contemporary proposal of new concepts, designs and solutions for the management of health data in an integrated environment: hospitals, Regional Health Information Organizations and citizens (home-care, mobile-care and ambient assisted living). PMID:17055700

  7. The Herschel Science Archive

    NASA Astrophysics Data System (ADS)

    Verdugo, Eva

    2015-12-01

    The Herschel mission required a Science Archive able to serve data to very different users: its own Data Analysis Software (both the Pipeline and Interactive Analysis), the consortia of the different instruments, and the scientific community. At the same time, the KP consortia were committed to delivering to the Herschel Science Centre the processed products corresponding to the data obtained as part of their Science Demonstration Phase, and the Herschel Archive should include the capability to store and deliver them. I will explain how the current Herschel Science Archive is designed to cover all these requirements.

  8. Databases and Archiving for CryoEM.

    PubMed

    Patwardhan, A; Lawson, C L

    2016-01-01

    CryoEM in structural biology is currently served by three public archives: EMDB for 3DEM reconstructions, PDB for models built from 3DEM reconstructions, and EMPIAR for the raw 2D image data used to obtain the 3DEM reconstructions. These archives play a vital role for both the structural community and the wider biological community in making the data accessible so that results may be reused, reassessed, and integrated with other structural and bioinformatics resources. The important role of the archives is underpinned by the fact that many journals mandate the deposition of data to PDB and EMDB on publication. The field is currently undergoing transformative changes where on the one hand high-resolution structures are becoming a routine occurrence while on the other hand electron tomography is enabling the study of macromolecules in the cellular context. Concomitantly the archives are evolving to best serve their stakeholder communities. In this chapter, we describe the current state of the archives, resources available for depositing, accessing, searching, visualizing and validating data, on-going community-wide initiatives and opportunities, and challenges for the future. PMID:27572735

  9. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101

  10. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    PubMed

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
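
    A rough sketch of the bag-of-visual-words integration described in these abstracts is given below. It is not the authors' implementation; it assumes OpenCV (SIFT is built in, while SURF requires an opencv-contrib build with the non-free modules enabled) and scikit-learn for the vocabulary clustering, and the vocabulary size k=100 is an arbitrary choice:

```python
# Rough sketch of "visual words integration": build separate visual vocabularies
# for two local descriptor types and concatenate the per-image histograms.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract(detector, image_paths):
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = detector.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 1)))
    return per_image

def vocabulary(per_image_desc, k):
    all_desc = np.vstack([d for d in per_image_desc if len(d)])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def histogram(desc, kmeans, k):
    hist = np.zeros(k)
    if len(desc):
        for word in kmeans.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def integrated_signatures(image_paths, k=100):
    sift = cv2.SIFT_create()
    surf = cv2.xfeatures2d.SURF_create()   # needs opencv-contrib with non-free modules
    sift_desc = extract(sift, image_paths)
    surf_desc = extract(surf, image_paths)
    sift_vocab = vocabulary(sift_desc, k)
    surf_vocab = vocabulary(surf_desc, k)
    # The "integration": concatenate the two bag-of-words histograms per image.
    return [np.concatenate([histogram(s, sift_vocab, k), histogram(u, surf_vocab, k)])
            for s, u in zip(sift_desc, surf_desc)]

# Retrieval then ranks database images by distance between these signatures,
# e.g. np.linalg.norm(query_sig - db_sig) for each database image.
```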

  11. Benchmarking and testing the "Sea Level Equation

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and
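
    For orientation only, the integral equation referred to above is often quoted schematically in the Farrell-and-Clark form below; the notation follows common GIA usage and is a hedged reminder, not taken from this abstract or from the benchmark specification.

    ```latex
    % Schematic Sea Level Equation (after Farrell & Clark, 1976); reminder only.
    S(\omega,t) \;=\; \frac{\rho_i}{\gamma}\, G_s \otimes_i I
                \;+\; \frac{\rho_w}{\gamma}\, G_s \otimes_o S
                \;+\; S^{E}(t),
    % S: relative sea-level change,  I: ice-thickness history,
    % G_s: sea-level Green's function (geoid change minus vertical displacement),
    % \otimes_i, \otimes_o: space-time convolutions over ice- and ocean-covered regions,
    % S^E: spatially uniform term enforcing conservation of meltwater mass.
    ```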

  12. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  13. Impact of a computer-aided detection (CAD) system integrated into a picture archiving and communication system (PACS) on reader sensitivity and efficiency for the detection of lung nodules in thoracic CT exams.

    PubMed

    Bogoni, Luca; Ko, Jane P; Alpert, Jeffrey; Anand, Vikram; Fantauzzi, John; Florin, Charles H; Koo, Chi Wan; Mason, Derek; Rom, William; Shiau, Maria; Salganicoff, Marcos; Naidich, David P

    2012-12-01

    The objective of this study is to assess the impact on nodule detection and efficiency using a computer-aided detection (CAD) device seamlessly integrated into a commercially available picture archiving and communication system (PACS). Forty-eight consecutive low-dose thoracic computed tomography studies were retrospectively included from an ongoing multi-institutional screening study. CAD results were sent to PACS as a separate image series for each study. Five fellowship-trained thoracic radiologists interpreted each case first on contiguous 5 mm sections, then evaluated the CAD output series (with CAD marks on corresponding axial sections). The standard of reference was based on three-reader agreement with expert adjudication. The time to interpret CAD marking was automatically recorded. A total of 134 true-positive nodules measuring 3 mm and larger were included in our study, with 85 ≥4 mm and 50 ≥5 mm in size. Readers' detection improved significantly in each size category when using CAD, respectively, from 44 to 57 % for ≥3 mm, 48 to 61 % for ≥4 mm, and 44 to 60 % for ≥5 mm. CAD stand-alone sensitivity was 65, 68, and 66 % for nodules ≥3, ≥4, and ≥5 mm, respectively, with CAD significantly increasing the false positives for two readers only. The average time to interpret and annotate a CAD mark was 15.1 s, after localizing it in the original image series. The integration of CAD into PACS increases reader sensitivity with minimal impact on interpretation time and supports such implementation into daily clinical practice. PMID:22710985

  14. Latin American Archives.

    ERIC Educational Resources Information Center

    Belsunce, Cesar A. Garcia

    1983-01-01

    Examination of the situation of archives in four Latin American countries--Argentina, Brazil, Colombia, and Costa Rica--highlights national systems, buildings, staff, processing of documents, accessibility and services to the public and publications and extension services. (EJS)

  15. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  16. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  17. Cassini Archive Tracking System

    NASA Technical Reports Server (NTRS)

    Conner, Diane; Sayfi, Elias; Tinio, Adrian

    2006-01-01

    The Cassini Archive Tracking System (CATS) is a computer program that enables tracking of scientific data transfers from originators to the Planetary Data System (PDS) archives. Without CATS, there is no systematic means of locating products in the archive process or ensuring their completeness. By keeping a database of transfer communications and status, CATS enables the Cassini Project and the PDS to efficiently and accurately report on archive status. More importantly, problem areas are easily identified through customized reports that can be generated on the fly from any Web-enabled computer. A Web-browser interface and clearly defined authorization scheme provide safe distributed access to the system, where users can perform functions such as creating customized reports, recording a transfer, and responding to a transfer. CATS ensures that Cassini provides complete science archives to the PDS on schedule and that those archives are made available to the science community through the PDS. The three-tier architecture is loosely coupled and designed for simple adaptation to multimission use. Written in the Java programming language, it is portable and can be run on any Java-enabled Web server.

  18. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
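
    Pynamic itself generates and links large numbers of C extension libraries; purely as an illustration of the import-stress idea, the toy sketch below writes many small Python modules and times a single process importing them all. File names, the module count, and the timing approach are arbitrary choices, not part of the actual benchmark.

    ```python
    # Toy illustration of dynamic-loading stress (not Pynamic itself).
    import importlib, os, sys, tempfile, time

    def make_modules(n, directory):
        # Generate n trivial modules on disk.
        for i in range(n):
            with open(os.path.join(directory, f"pyn_mod_{i}.py"), "w") as f:
                f.write(f"VALUE = {i}\n")

    def time_imports(n, directory):
        # Time how long it takes to import all generated modules.
        sys.path.insert(0, directory)
        start = time.perf_counter()
        for i in range(n):
            importlib.import_module(f"pyn_mod_{i}")
        return time.perf_counter() - start

    if __name__ == "__main__":
        with tempfile.TemporaryDirectory() as d:
            make_modules(2000, d)
            print(f"imported 2000 modules in {time_imports(2000, d):.2f} s")
    ```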

  19. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
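
    As an example of the kind of information-theory-based image metric mentioned above, the sketch below computes the mutual information between two images from their joint grey-level histogram; a simple fusion score can then sum the MI of the fused image with each source band. This is a generic illustration, not the specific metrics used in the study.

    ```python
    # Mutual information between two images from a joint grey-level histogram.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                       # joint probability
        px = pxy.sum(axis=1, keepdims=True)             # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)             # marginal of img_b
        nz = pxy > 0                                    # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # A simple fusion score sums the MI of the fused image with each input band:
    # score = mutual_information(band1, fused) + mutual_information(band2, fused)
    ```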

  20. TRENDS: Compendium of Benchmark Objects

    NASA Astrophysics Data System (ADS)

    Gonzales, Erica J.; Crepp, Justin R.; Bechter, Eric; Johnson, John A.; Montet, Benjamin T.; Howard, Andrew; Marcy, Geoffrey W.; Isaacson, Howard T.

    2016-01-01

    The physical properties of faint stellar and substellar objects are highly uncertain. For example, the masses of brown dwarfs are usually inferred using theoretical models, which are age dependent and have yet to be properly tested. With the goal of identifying new benchmark objects through observations with NIRC2 at Keck, we have carried out a comprehensive adaptive-optics survey as part of the TRENDS (TaRgetting bENchmark-objects with Doppler Spectroscopy) high-contrast imaging program. TRENDS targets nearby (d < 100 pc), Sun-like stars showing long-term radial velocity accelerations. We present the discovery of 28 confirmed, co-moving companions as well as 19 strong candidate companions to F-, G-, and K-stars with well-determined parallaxes and metallicities. Benchmark objects of this nature lend themselves to a three-dimensional orbit determination that will ultimately yield a precise dynamical mass. Unambiguous mass measurements of very low mass companions, which straddle the hydrogen-burning boundary, will allow our compendium of objects to serve as excellent testbeds to substantiate theoretical evolutionary and atmospheric models in regimes where they currently break down (low temperature, low mass, and old age).

  1. Characterizing universal gate sets via dihedral benchmarking

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π/8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.
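
    For context, randomized-benchmarking-style experiments (including dihedral variants) are typically reduced by fitting the sequence-averaged survival probability to an exponential decay in the sequence length. The hedged sketch below shows only that generic fitting step, with illustrative numbers rather than data or parameters from the paper.

    ```python
    # Generic randomized-benchmarking decay fit: F(m) = A * p**m + B.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(m, A, p, B):
        return A * p**m + B

    m = np.array([2, 4, 8, 16, 32, 64, 128])                          # sequence lengths
    survival = np.array([0.97, 0.95, 0.92, 0.86, 0.77, 0.65, 0.52])   # illustrative values

    (A, p, B), _ = curve_fit(decay, m, survival, p0=(0.5, 0.99, 0.5))
    d = 2                          # single-qubit Hilbert-space dimension
    r = (d - 1) / d * (1 - p)      # standard conversion of the decay to an average error rate
    print(f"p = {p:.4f}, inferred average error rate r = {r:.2e}")
    ```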

  2. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92% and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
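
    The ABC™ calculation is usually described as: rank providers by a Bayesian-adjusted performance fraction, pool the top providers until they cover the chosen share of patients (15% in this study), and take the pooled rate as the benchmark. The sketch below shows that recipe with illustrative data and an adjustment of the commonly cited (x+1)/(n+2) form; it is not code from, or validated against, the paper.

    ```python
    # Rough sketch of an Achievable Benchmark of Care (ABC)-style calculation.
    def abc_benchmark(providers, patient_share=0.15):
        """providers: list of (numerator, denominator) pairs, one per hospital."""
        ranked = sorted(providers,
                        key=lambda xn: (xn[0] + 1) / (xn[1] + 2),  # adjusted performance fraction
                        reverse=True)
        total_patients = sum(n for _, n in providers)
        pooled_x = pooled_n = 0
        for x, n in ranked:                      # pool top performers...
            pooled_x += x
            pooled_n += n
            if pooled_n >= patient_share * total_patients:
                break                            # ...until they cover the required patient share
        return pooled_x / pooled_n               # pooled rate is the benchmark

    print(abc_benchmark([(90, 100), (40, 50), (300, 400), (120, 200)]))
    ```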

  3. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
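
    A minimal sketch of the fixed-time idea described above: every machine gets the same time budget and is scored by how far it progresses through a scalable task whose answer improves with resolution. The task here (a midpoint-rule estimate of pi) and the interval are illustrative stand-ins, not the patent's task set.

    ```python
    # Fixed-time benchmarking sketch: score = resolution reached within the interval.
    import time

    def fixed_time_benchmark(interval_s=10.0):
        deadline = time.perf_counter() + interval_s
        n, estimate = 1, 0.0
        while time.perf_counter() < deadline:
            n *= 2                       # double the resolution each pass
            h = 1.0 / n
            # midpoint-rule estimate of the integral of 4/(1+x^2) on [0, 1] (= pi)
            estimate = h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))
        return n, estimate               # n reached in time is the benchmark rating

    print(fixed_time_benchmark(2.0))
    ```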

  4. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  5. The GIRAFFE Archive: 1D and 3D Spectra

    NASA Astrophysics Data System (ADS)

    Royer, F.; Jégouzo, I.; Tajahmady, F.; Normand, J.; Chilingarian, I.

    2013-10-01

    The GIRAFFE Archive (http://giraffe-archive.obspm.fr) contains the reduced spectra observed with the intermediate and high resolution multi-fiber spectrograph installed at VLT/UT2 (ESO). In its multi-object configuration and the different integral field unit configurations, GIRAFFE produces 1D spectra and 3D spectra. We present here the status of the archive and the different functionalities to select and download both 1D and 3D data products, as well as the present content. The two collections are available in the VO: the 1D spectra (summed in the case of integral field observations) and the 3D field observations. These latter products can be explored using the VO Paris Euro3D Client (http://voplus.obspm.fr/ chil/Euro3D).

  6. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context for research projects. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs. All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to

  7. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  8. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  9. Model Predictions to the 2005 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin; Park, Joon-Soo

    2006-03-01

    The World Federation of NDE Centers (WFNDEC) has addressed the 2005 ultrasonic benchmark problems including linear scanning of the side drilled hole (SDH) specimen with oblique incidence with an emphasis on further study on SV-wave responses of the SDH versus angles around 60 degrees and responses of a circular crack. To solve these problems, we adopted the multi-Gaussian beam model as beam models and the Kirchhoff approximation and the separation of variables method as far-field scattering models. By integration of the beam and scattering models and the system efficiency factor obtained from the given reference experimental setups provided by Center for Nondestructive Evaluation into our ultrasonic measurement models, we predicted the responses of the SDH and the circular cracks (pill-box crack like flaws). This paper summarizes our models and predicted results for the 2005 ultrasonic benchmark problems.

  10. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  11. Benchmarking Nonlinear Turbulence Simulations on Alcator C-Mod

    SciTech Connect

    M.H. Redi; C.L. Fiore; W. Dorland; M.J. Greenwald; G.W. Hammett; K. Hill; D. McCune; D.R. Mikkelsen; G. Rewoldt; J.E. Rice

    2004-06-22

    Linear simulations of plasma microturbulence are used with recent radial profiles of toroidal velocity from similar plasmas to consider nonlinear microturbulence simulations and observed transport analysis on Alcator C-Mod. We focus on internal transport barrier (ITB) formation in fully equilibrated H-mode plasmas with nearly flat velocity profiles. Velocity profile data, transport analysis and linear growth rates are combined to integrate data and simulation, and explore the effects of toroidal velocity on benchmarking simulations. Areas of interest for future nonlinear simulations are identified. A good gyrokinetic benchmark is found in the plasma core, without extensive nonlinear simulations. RF-heated C-Mod H-mode experiments, which exhibit an ITB, have been studied with the massively parallel code GS2 towards validation of gyrokinetic microturbulence models. New, linear, gyrokinetic calculations are reported and discussed in connection with transport analysis near the ITB trigger time of shot No.1001220016.

  12. PROCEDURES FOR THE DERIVATION OF EQUILIBRIUM PARTITIONING SEDIMENT BENCHMARKS (ESBS) FOR THE PROTECTION OF BENTHIC ORGANISMS: ENDRIN

    EPA Science Inventory

    Under the Clean Water Act, EPA and the States develop programs for protecting the chemical, physical, and biological integrity of the nation's waters. To support these programs, efforts are conducted to develop and publish equilibrium partitioning sediment benchmarks (ESBs) for ...

  13. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  14. The Isothermal Dendritic Growth Experiment Archive

    NASA Astrophysics Data System (ADS)

    Koss, Matthew

    2009-03-01

    The growth of dendrites is governed by the interplay between two simple and familiar processes---the irreversible diffusion of energy, and the reversible work done in the formation of new surface area. To advance our understanding of these processes, NASA sponsored a project that flew on the Space Shuttle Columbia in 1994, 1996, and 1997 to record and analyze benchmark data in an apparent-microgravity "laboratory." In this laboratory, energy transfer by gravity-driven convection was essentially eliminated and one could test independently, for the first time, both components of dendritic growth theory. The analysis of this data shows that although the diffusion of energy can be properly accounted for, the results from interfacial physics appear to be in disagreement and alternate models should receive increased attention. Unfortunately, currently and for the foreseeable future, there is no access or financial support to develop and conduct additional experiments of this type. However, the benchmark data of 35mm photonegatives, video, and all supporting instrument data are now available at the IDGE Archive at the College of the Holy Cross. This data may still have considerable relevance to researchers working specifically with dendritic growth, and more generally those working in the synthesis, growth & processing of materials, multiscale computational modeling, pattern formation, and systems far from equilibrium.

  15. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  16. Sequenced Benchmarks for Geography and History

    ERIC Educational Resources Information Center

    Kendall, John S.; Richardson, Amy T.; Ryan, Susan E.

    2005-01-01

    This report is one in a series of reference documents designed to assist those who are directly involved in the revision and improvement of content standards, as well as teachers who use standards and benchmarks to guide everyday instruction. Reports in the series provide information about how benchmarks might best appear in a sequence of…

  17. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  18. Reading Robin Kelsey's "Archive Style" across the Archival Divide

    ERIC Educational Resources Information Center

    Schwartz, Joan M.

    2008-01-01

    This article presents the author's comments on Robin Kelsey's "Archive Style: Photographs and Illustrations for U.S. Surveys, 1850-1890," a book about government documents that happen to be visual materials. The word "archive" now has intellectual cachet in the academic world, but its currency has little to do with the "real world of archives" as…

  19. Training in the Archives: Archival Research as Professional Development

    ERIC Educational Resources Information Center

    Buehl, Jonathan; Chute, Tamar; Fields, Anne

    2012-01-01

    This article describes the rationale and efficacy of a graduate-level teaching module providing loosely structured practice with real archives. Introducing early career scholars to archival methods changed their beliefs about knowledge, research, teaching, and their discipline(s). This case study suggests that archives can be productive training…

  20. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  1. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models by using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  2. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    SciTech Connect

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-06-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from the ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements by use of the radiation transport codes ANISN and DORT. Also, direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, e.g., iron, due to its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented.

  3. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  4. The Ethics of Archival Research

    ERIC Educational Resources Information Center

    McKee, Heidi A.; Porter, James E.

    2012-01-01

    What are the key ethical issues involved in conducting archival research? Based on examination of cases and interviews with leading archival researchers in composition, this article discusses several ethical questions and offers a heuristic to guide ethical decision making. Key to this process is recognizing the person-ness of archival materials.…

  5. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
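
    To make the data-driven flow concrete, here is a heavily simplified polling loop in the same spirit: watch an FTP directory, fetch files not seen before, and place them in a local archive directory. The host name, paths, and polling interval are placeholders, and the real S4PA adds subscription, metadata publication, preprocessing plug-ins, and integrity checking on top of this.

    ```python
    # Simplified data-driven archive loop (illustrative only, not S4PA code).
    import ftplib, os, time

    def poll_once(host, remote_dir, archive_dir, seen):
        with ftplib.FTP(host) as ftp:
            ftp.login()                                  # anonymous login
            ftp.cwd(remote_dir)
            for name in ftp.nlst():                      # list remote files
                if name in seen:
                    continue                             # already archived
                with open(os.path.join(archive_dir, name), "wb") as f:
                    ftp.retrbinary(f"RETR {name}", f.write)
                seen.add(name)

    seen = set()
    while True:
        poll_once("ftp.example.org", "/incoming", "/data/archive", seen)
        time.sleep(300)                                  # check for new data every 5 minutes
    ```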

  6. LBT Distributed Archive: Status and Features

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Smareglia, R.; Thompson, D.; Grede, G.

    2011-07-01

    After the first release of the LBT Distributed Archive, this successful collaboration is continuing within the LBT corporation. The IA2 (Italian Center for Astronomical Archive) team had updated the LBT DA with new features in order to facilitate user data retrieval while abiding by VO standards. To facilitate the integration of data from any new instruments, we have migrated to a new database, developed new data distribution software, and enhanced features in the LBT User Interface. The DBMS engine has been changed to MySQL. Consequently, the data handling software now uses java thread technology to update and synchronize the main storage archives on Mt. Graham and in Tucson, as well as archives in Trieste and Heidelberg, with all metadata and proprietary data. The LBT UI has been updated with additional features allowing users to search by instrument and some of the more important characteristics of the images. Finally, instead of a simple cone search service over all LBT image data, new instrument specific SIAP and cone search services have been developed. They will be published in the IVOA framework later this fall.

  7. Video data annotation, archiving, and access

    NASA Astrophysics Data System (ADS)

    Wilkin, D.; Connor, J.; Stout, N. J.; Walz, K.; Schlining, K.; Graybeal, J.

    2002-12-01

    Scientifically useful, high-quality video data can be challenging to integrate with other data, and to analyze and archive for use in ocean science. The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment to record over 300 remotely operated vehicle dives per year. Over the past 14 years, 13,000 videotapes have been archived and maintained as a centralized institutional resource. MBARI has developed a set of software applications to annotate and access video data. Users can identify the location of video sequences using a data query component; complex queries can be made by constraining temporal, spatial, or physical parameters (e.g., season, location, or depth). The applications reference a knowledge base of over 3,000 biological, geological and technical terms, providing consistent hierarchical information about objects and associated descriptions for annotating video at sea or on shore. The annotation, knowledge base, and query components together provide a comprehensive video archive software system that can be applied to a variety of scientific disciplines. Also in development, using the XML data format, is an interactive reference interface to explore MBARI's deep-sea knowledge base. When complete, the full software system will be disseminated to the research community via the web or CD, to help meet the challenges inherent in archiving video data.

  8. Challenges in simulation automation and archival.

    SciTech Connect

    Blacker, Teddy Dean

    2010-09-01

    The challenges of simulation streamlining and automation continue. The need for analysis verification, reviews, quality assurance, pedigree, and archiving are strong. These automation and archival needs can alternate between competing and complementing when determining how to improve the analysis environment and process. The needs compete for priority, resource allocation, and business practice importance. Likewise, implementation strategies of both automation and archival can swing between rather local work groups to more global corporate initiatives. Questions abound about needed connectivity (and the extent of this connectivity) to various CAD systems, product data management (PDM) systems, test data repositories and various information management implementations. This is a complex set of constraints. This presentation will bring focus to this complex environment through sharing experiences. The experiences are those gleaned over years of effort at Sandia to make reasonable sense out of the decisions to be made. It will include a discussion of integration and development of home grown tools for both automation and archival. It will also include an overview of efforts to understand local requirements, compare in-house tools to commercial offerings against those requirements, and options for future progress. Hopefully, sharing this rich set of experiences may prove useful to others struggling to make progress in their own environments.

  9. Twitter Stream Archiver

    SciTech Connect

    Steed, Chad Allen

    2014-07-01

    The Twitter Archiver system allows a user to enter their Twitter developer account credentials (obtained separately from the Twitter developer website) and read from the freely available Twitter sample stream. The Twitter sample stream provides a random sample of the overall volume of tweets that are contributed by users to the system. The Twitter Archiver system consumes the stream and serializes the information to text files at some predefined interval. A separate utility reads the text files and creates a searchable index using the open source Apache Lucene text indexing system.
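
    The serialize-then-index pattern can be sketched as below, with the Twitter client replaced by a generic iterable of tweet dictionaries (real access requires developer credentials and a streaming client library). Each interval starts a new JSON-lines file that a separate indexing utility, such as one built on Lucene, can later consume; names and the rollover interval are illustrative.

    ```python
    # Interval-based serialization of a record stream to text files (illustrative sketch).
    import json, time

    def archive_stream(stream, out_prefix, rollover_s=3600):
        start = time.time()
        out = open(f"{out_prefix}_{int(start)}.jsonl", "w")
        for record in stream:                      # each record is one tweet as a dict
            out.write(json.dumps(record) + "\n")
            if time.time() - start >= rollover_s:  # start a new file each interval
                out.close()
                start = time.time()
                out = open(f"{out_prefix}_{int(start)}.jsonl", "w")
        out.close()
    ```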

  11. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  12. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

    The effective I/O bandwidth benchmark (b_eff_io) covers two goals: (1) to achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines "first write", "rewrite" and "read" access, strided (individual and shared pointers) and segmented collective patterns on one file per application and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff) that characterizes the message passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel Posix-I/O.
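
    As a rough illustration of one of the access patterns timed above (each process writing its own contiguous segment of a single shared file, collectively), the mpi4py sketch below measures the achieved write bandwidth. The buffer size and file name are arbitrary, and the real b_eff_io cycles through many more patterns and buffer lengths.

    ```python
    # Segmented collective write to one shared file, timed across all ranks (illustrative).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    buf = np.full(1 << 20, rank % 256, dtype=np.uint8)        # 1 MiB per rank

    fh = MPI.File.Open(comm, "beff_io_test.dat",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    comm.Barrier()
    t0 = MPI.Wtime()
    fh.Write_at_all(rank * buf.nbytes, buf)                   # each rank writes its own segment
    comm.Barrier()
    elapsed = MPI.Wtime() - t0
    fh.Close()

    if rank == 0:
        total_mib = size * buf.nbytes / 2**20
        print(f"{total_mib:.0f} MiB in {elapsed:.3f} s -> {total_mib / elapsed:.1f} MiB/s")
    ```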

  13. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
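
    For clarity, the trichotomy used in the prospective validation can be written as a small helper; the cut points come from the abstract, while the labels and any clinical interpretation remain with the paper.

    ```python
    # Classify a Timed 25-Foot Walk result into the study's three benchmark bands.
    def t25fw_category(seconds):
        if seconds < 6:
            return "<6 s"
        elif seconds < 8:
            return "6-7.99 s"
        return ">=8 s"

    print([t25fw_category(t) for t in (4.8, 6.5, 9.2)])
    ```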

  14. Analytical Radiation Transport Benchmarks for The Next Century

    SciTech Connect

    B.D. Ganapol

    2005-01-19

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is called the Green's Function Method. In slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is "mined" to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.

  15. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  16. ASIS healthcare security benchmarking study.

    PubMed

    2001-01-01

    Effective security has aligned itself into the everyday operations of a healthcare organization. This is evident in every regional market segment, regardless of size, location, and provider clinical expertise or organizational growth. This research addresses key security issues from an acute care provider to freestanding facilities, from rural hospitals and community hospitals to large urban teaching hospitals. Security issues and concerns are identified and addressed daily by senior and middle management. As provider campuses become larger and more diverse, the hospitals surveyed have identified critical changes and improvements that are proposed or pending. Mitigating liabilities and improving patient, visitor, and/or employee safety are consequential to the performance and viability of all healthcare providers. Healthcare organizations have identified the requirement to compete for patient volume and revenue. The facility that can deliver high-quality healthcare in a comfortable, safe, secure, and efficient atmosphere will have a significant competitive advantage over a facility where patient or visitor security and safety is deficient. Continuing changes in healthcare organizations' operating structure and healthcare geographic layout mean changes in leadership and direction. These changes have led to higher levels of corporate responsibility. As a result, each organization participating in this benchmark study has added value and will derive value for the overall benefit of the healthcare providers throughout the nation. This study provides a better understanding of how the fundamental security needs of security in healthcare organizations are being addressed and its solutions identified and implemented. PMID:11602980

  17. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is time required for sound wave to traverse one end of nozzle to other end).

  18. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. PMID:27304891
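
    Schematically, the comparison methodology amounts to drawing test MDPs from a prior distribution, running each agent on each draw, and aggregating the collected return. The sketch below shows that loop with placeholder `sample_mdp` and agent objects; it is not the library's actual API.

    ```python
    # Schematic agent comparison over a distribution of MDPs (placeholder interfaces).
    import numpy as np

    def run_episode(agent, mdp, horizon):
        state, total = mdp.reset(), 0.0
        for _ in range(horizon):
            state, reward, done = mdp.step(agent.act(state))
            agent.observe(state, reward)
            total += reward
            if done:
                break
        return total

    def compare(agents, sample_mdp, n_draws=100, horizon=200, rng=None):
        rng = rng or np.random.default_rng(0)
        scores = {name: [] for name in agents}
        for _ in range(n_draws):
            mdp = sample_mdp(rng)                  # one test problem drawn from the prior
            for name, agent in agents.items():
                scores[name].append(run_episode(agent, mdp, horizon))
        return {name: (np.mean(s), np.std(s)) for name, s in scores.items()}
    ```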

  19. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  20. Mapping the Archives: 3

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2013-01-01

    With this issue, "Research in Drama Education" (RiDE) continues its occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment includes summaries of one or more collections of significant material in the field. Over time, this will build into a…

  1. Mapping the Archives: 2

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2012-01-01

    With this issue, "RiDE" continues its new occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment includes summaries of one or more collections of significant material in the field. Over time this will build into a readily accessible directory of…

  2. Noroviruses in Archival Samples

    PubMed Central

    Skraber, Sylvain; Italiaander, Ronald; Lodder, Willemijn J.

    2005-01-01

    Application of recent techniques to detect current pathogens in archival effluent samples collected and concentrated in 1987 led to the characterization of norovirus GGII.6 Seacroft, unrecognized until 1990 in a clinical sample. Retrospective studies will likely increase our knowledge about waterborne transmission of emerging pathogens. PMID:15757575

  3. Aspects of Electronic Archives.

    ERIC Educational Resources Information Center

    Blake, Monica

    1986-01-01

    Reviews the current status of electronic archiving, especially in Great Britain and the United States, including current use of various electronic storage media; advantages and utilizations of optical disk technology; trends toward full-text databases and increased videotex use; growing quantity of electronic information; and problems in archiving…

  4. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  5. NASA Data Archive Evaluation

    NASA Technical Reports Server (NTRS)

    Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted

    1997-01-01

    The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer experienced researchers, computer experienced non-researchers, non-computer experienced researchers, and non-computer experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, etc. Other aspects were content-related. In doing these assessments, answers given to different questions were sometimes combined. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. After the general assessment, there will be a more extensive summary of the replies received from the test subjects.

  6. Mapping the Archives 1

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2012-01-01

    With this issue, "RiDE" begins a new occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment will include summaries of several collections of significant material in the field. Over time this will build into a readily accessible annotated directory…

  7. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
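
    The data-flow-graph idea can be pictured with a toy scheduler; this is not the official NGB specification or its verification test, and the tasks and graph shape below are invented. Each node wraps a task, consumes the outputs of its predecessors, and the turnaround time of the whole graph is reported.

```python
# Sketch: run a small data-flow graph of tasks and report the turnaround time.
import time
from collections import defaultdict, deque

class Node:
    def __init__(self, name, task):
        self.name, self.task = name, task

def run_graph(nodes, edges, seed_input):
    """Run nodes in topological order; edges is a list of (src, dst) names."""
    preds = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for src, dst in edges:
        preds[dst].append(src)
        indeg[dst] += 1
    ready = deque(n for n, d in indeg.items() if d == 0)
    outputs = {}
    start = time.perf_counter()
    while ready:
        name = ready.popleft()
        inputs = [outputs[p] for p in preds[name]] or [seed_input]
        outputs[name] = nodes[name].task(inputs)
        for src, dst in edges:              # release successors whose inputs are ready
            if src == name:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return outputs, time.perf_counter() - start

# Toy "benchmark" tasks standing in for modified NPB kernels
smooth = lambda xs: sum(xs) / len(xs) + 0.1
square = lambda xs: sum(xs) ** 2

nodes = {"A": Node("A", smooth), "B": Node("B", square), "C": Node("C", smooth)}
edges = [("A", "B"), ("A", "C")]
out, elapsed = run_graph(nodes, edges, seed_input=1.0)
print(out, f"turnaround {elapsed * 1e3:.2f} ms")
```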

  8. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  9. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings and how they can save building analysts valuable time. Fully documented and implemented for use with the EnergyPlus energy simulation program, the benchmark models are publicly available, and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  10. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.
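
    A minimal, scalable throughput probe in the spirit of these guidelines (the block size, data volume, and method are assumptions, not the proposed suite): write a configurable amount of data in fixed-size blocks, force it to stable storage, and report the sustained bandwidth.

```python
# Sketch: measure sustained write throughput for a configurable data volume.
import os
import time

def sustained_write(path, total_bytes, block_bytes=4 * 1024 * 1024):
    block = os.urandom(block_bytes)
    written = 0
    start = time.perf_counter()
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(block)
            written += block_bytes
        f.flush()
        os.fsync(f.fileno())              # include time to reach stable storage
    elapsed = time.perf_counter() - start
    return written / elapsed / 1e6        # MB/s

if __name__ == "__main__":
    mb_per_s = sustained_write("bench.dat", total_bytes=256 * 1024 * 1024)
    print(f"sustained write: {mb_per_s:.1f} MB/s")
    os.remove("bench.dat")
```

    Scaling the benchmark then amounts to growing total_bytes (and the file count) toward the capacity of the system under test, while keeping the measurement procedure unchanged.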

  11. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
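
    For illustration only, the sketch below performs the kind of roll-up a decision-support workload would issue against warehouse data; the element and attribute names are invented, and Python stands in for the benchmark's actual XQuery workload.

```python
# Sketch: OLAP-style roll-up (total amount per part) over toy XML fact data.
import xml.etree.ElementTree as ET

xml_facts = """
<sales>
  <fact part="P1" region="EU" amount="120.0"/>
  <fact part="P1" region="US" amount="80.0"/>
  <fact part="P2" region="EU" amount="45.5"/>
</sales>
"""
root = ET.fromstring(xml_facts)

totals = {}
for fact in root.findall("fact"):
    part = fact.get("part")
    totals[part] = totals.get(part, 0.0) + float(fact.get("amount"))

for part, total in sorted(totals.items()):
    print(part, total)
```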

  12. The Impact of Computerization on Archival Finding Aids: A RAMP Study.

    ERIC Educational Resources Information Center

    Kitching, Christopher

    This report is based on a questionnaire sent to 32 selected National Archives and on interviews with archivists from eight countries. Geared to the needs of developing countries, the report covers: (1) the impact of computerization on finding aids; (2) advantages and problems of computerization, including enhanced archival control, integration of…

  13. The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology

    ERIC Educational Resources Information Center

    Bountouri, Lina; Gergatsoulis, Manolis

    2011-01-01

    In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…

  14. Archive of observations of periodic comet Crommelin made during its 1983-84 apparition

    NASA Technical Reports Server (NTRS)

    Sekanina, Z. (Editor); Aronsson, M.

    1985-01-01

    This is an archive of 680 reduced observations of Periodic Comet Crommelin made during its 1984 apparition. The archive integrates reports by members of the eight networks of the International Halley Watch (IHW) and presents the results of a trial run designed to test the preparedness of the IHW organization for the current apparition of Periodic Comet Halley.

  15. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set out to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  16. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and, meanwhile, highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
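
    One possible shape for such a scoring system is sketched below; the normalization, the exponential scoring function, and the weights are illustrative assumptions, not the framework's definitions.

```python
# Sketch: per-variable normalized RMSE converted to a 0-1 score, then weighted.
import numpy as np

def variable_score(model, obs):
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    nrmse = rmse / (np.std(obs) + 1e-12)     # normalize by observed variability
    return float(np.exp(-nrmse))             # 1 = perfect, tends to 0 as mismatch grows

def overall_score(pairs, weights):
    scores = {v: variable_score(m, o) for v, (m, o) in pairs.items()}
    total = sum(weights[v] * s for v, s in scores.items()) / sum(weights.values())
    return scores, total

rng = np.random.default_rng(1)
obs_gpp = rng.normal(5.0, 1.0, 120)          # e.g. monthly GPP "observations"
obs_et = rng.normal(2.0, 0.5, 120)           # e.g. monthly evapotranspiration
pairs = {
    "GPP": (obs_gpp + rng.normal(0, 0.5, 120), obs_gpp),
    "ET": (obs_et + rng.normal(0, 0.3, 120), obs_et),
}
scores, total = overall_score(pairs, weights={"GPP": 2.0, "ET": 1.0})
print(scores, "combined:", round(total, 3))
```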

  17. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  18. Benchmarking in healthcare organizations: an introduction.

    PubMed

    Anderson-Miles, E

    1994-09-01

    Business survival is increasingly difficult in the contemporary world. In order to survive, organizations need a commitment to excellence and a means of measuring that commitment and its results. Benchmarking provides one method for doing this. As the author describes, benchmarking is a performance improvement method that has been used for centuries. Recently, it has begun to be used in the healthcare industry where it has the potential to improve significantly the efficiency, cost-effectiveness, and quality of healthcare services. PMID:10146064

  19. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  20. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
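
    The prediction step amounts to a weighted sum: the estimated run time is the sum, over source-language operations, of how often the program executes each operation multiplied by how long that operation takes on the target machine. The operation set, counts, and timings below are invented for illustration.

```python
# Sketch: predicted run time = dot product of operation counts and per-op timings.
machine_ns_per_op = {          # machine characterization (nanoseconds per operation)
    "fadd": 6.0, "fmul": 7.5, "fdiv": 30.0, "load": 4.0, "store": 4.5, "branch": 2.0,
}
program_op_counts = {          # program characterization (dynamic operation counts)
    "fadd": 4.0e8, "fmul": 3.2e8, "fdiv": 1.0e7,
    "load": 9.0e8, "store": 3.0e8, "branch": 2.5e8,
}

predicted_seconds = sum(
    program_op_counts[op] * machine_ns_per_op[op] for op in program_op_counts
) * 1e-9
print(f"predicted run time: {predicted_seconds:.2f} s")
```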

  1. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  2. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
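
    A hypothetical sketch of metric computation over RDF annotations is shown below; the namespace, predicates, and SPARQL query are invented and do not reproduce the project's actual ontology or queries.

```python
# Sketch: count true positives with SPARQL over gold vs. system RDF annotations.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/ann#")
g = Graph()

def add_annotation(g, source, doc, mutation):
    node = URIRef(f"http://example.org/ann#{source}-{doc}-{mutation}")
    g.add((node, EX.fromSystem, EX[source]))
    g.add((node, EX.document, EX[doc]))
    g.add((node, EX.mutation, EX[mutation]))

# Gold-standard vs. system annotations (toy data)
for doc, mut in [("d1", "V600E"), ("d2", "G12D")]:
    add_annotation(g, "gold", doc, mut)
for doc, mut in [("d1", "V600E"), ("d2", "G13D")]:
    add_annotation(g, "system", doc, mut)

# SPARQL: system annotations that agree with a gold annotation (true positives)
tp_query = """
PREFIX ex: <http://example.org/ann#>
SELECT (COUNT(DISTINCT ?s) AS ?tp) WHERE {
  ?s ex:fromSystem ex:system ; ex:document ?d ; ex:mutation ?m .
  ?gold ex:fromSystem ex:gold ; ex:document ?d ; ex:mutation ?m .
}
"""
tp = int(next(iter(g.query(tp_query)))[0])
n_system = len(list(g.subjects(EX.fromSystem, EX.system)))
print("precision:", tp / n_system)
```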

  3. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  4. SIOExplorer: Opening Archives for Education

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Staudigl, H.; Johnson, C.; Helly, J.; Day, D.

    2003-04-01

    The SIOExplorer project began with a desire to organize the data archives of the Scripps Institution of Oceanography, which include the observations from 822 cruises over 50 years. Most of the data volume comes from 244 multibeam seafloor swath mapping cruises since 1982. Rather than just create an online archive or a website, the decision was made to build a fully searchable digital library, and to include related historical images and documents from the SIO Archives in the SIO Library. It soon became apparent that much of the material would be appealing to students of all ages, as well as the general public. Access to several global databases was added, along with the seamount catalog and geochemical resources of www.earthref.org. SIOExplorer has now become a part of the National Science Digital Library (www.nsdl.org) and can be accessed directly at http://SIOExplorer.ucsd.edu. From the beginning, it was obvious that a scalable Information Technology architecture would be needed. Data and documents from three separate organizations would need to be integrated initially, with more to follow in subsequent years. Each organization had its own data standards and formats. Almost no metadata existed. With millions of files and approximately 1 terabyte of data, we realized that a team approach would be required, combining the expertise of SIO, the UCSD Libraries and the San Diego Supercomputer Center. General purpose tools have now been developed to automate collection development, create and manage metadata, and geographically search the library. Each digital object in the library has an associated metadata structure, which includes a Dublin Core block along with domain-specific blocks, as needed. Objects can be searched geospatially, temporally, by keyword, and by expert-level. For example, expert-level classification makes it possible to screen out research-grade contents, revealing material appropriate for the selected grade, such as K-6. Now that the library has

  5. Sci-Tech Archives and Manuscript Collections.

    ERIC Educational Resources Information Center

    Mount, Ellis, Ed.

    1989-01-01

    Selected collections of scientific and technical archives and manuscripts described in eight articles include the Edison Archives; American Museum of Natural History Library; MIT (Massachusetts Institute of Technology) Institute Archives and Special Collections; National Archives; Dard Hunter Paper Museum; American Zoo and Aquarium Archives; and…

  6. Benchmarking the QUAD4/TRIA3 element

    NASA Astrophysics Data System (ADS)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-09-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending and shear phenomena. They are also very new elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effect on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparison with those available in the literature and obtained from other programs like MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of QUAD4 and TRIA3 elements is the offset capability which allows the midsurface of the plate to be noncoincident with the surface of the grid points. None of the previous elements, with the exception of bar (beam), has this capability. The offset capability played a crucial role in the design of QUAD4 and TRIA3 elements. It allowed modeling layered composites, laminated plates and sandwich plates with the metal and composite face sheets. Even though the basic implementation of the offset capability is found to be sound in the previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For the purpose of simplicity, references in this paper to the QUAD4 element will also include the TRIA3 element.

  7. Isprs Benchmark for Multi-Platform Photogrammetry

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms' ability to perform automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS), as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  8. Benchmarking the QUAD4/TRIA3 element

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-01-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending and shear phenomena. They are also very new elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effect on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparison with those available in the literature and obtained from other programs like MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of QUAD4 and TRIA3 elements is the offset capability which allows the midsurface of the plate to be noncoincident with the surface of the grid points. None of the previous elements, with the exception of bar (beam), has this capability. The offset capability played a crucial role in the design of QUAD4 and TRIA3 elements. It allowed modeling layered composites, laminated plates and sandwich plates with the metal and composite face sheets. Even though the basic implementation of the offset capability is found to be sound in the previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For the purpose of simplicity, references in this paper to the QUAD4 element will also include the TRIA3 element.

  9. Nanomagnet Logic: Architectures, design, and benchmarking

    NASA Astrophysics Data System (ADS)

    Kurtz, Steven J.

    Nanomagnet Logic (NML) is an emerging technology being studied as a possible replacement or supplementary device for Complementary Metal-Oxide-Semiconductor (CMOS) Field-Effect Transistors (FET) by the year 2020. NML devices offer numerous potential advantages including: low energy operation, steady state non-volatility, radiation hardness and a clear path to fabrication and integration with CMOS. However, maintaining both low-energy operation and non-volatility while scaling from the device to the architectural level is non-trivial as (i) nearest neighbor interactions within NML circuits complicate the modeling of ensemble nanomagnet behavior and (ii) the energy intensive clock structures required for re-evaluation and NML's relatively high latency challenge its ability to offer system-level performance wins against other emerging nanotechnologies. Thus, further research efforts are required to model more complex circuits while also identifying circuit design techniques that balance low-energy operation with steady state non-volatility. In addition, further work is needed to design and model low-power on-chip clocks while simultaneously identifying application spaces where NML systems (including clock overhead) offer sufficient energy savings to merit their inclusion in future processors. This dissertation presents research advancing the understanding and modeling of NML at all levels including devices, circuits, and line clock structures while also benchmarking NML against both scaled CMOS and tunneling FETs (TFET) devices. This is accomplished through the development of design tools and methodologies for (i) quantifying both energy and stability in NML circuits and (ii) evaluating line-clocked NML system performance. The application of these newly developed tools improves the understanding of ideal design criteria (i.e., magnet size, clock wire geometry, etc.) for NML architectures. Finally, the system-level performance evaluation tool offers the ability to

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  11. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  12. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  13. WFPDB: European Plate Archives

    NASA Astrophysics Data System (ADS)

    Tsvetkov, Milcho

    2007-08-01

    The Wide-Field Plate Database (WFPDB) gives an inventory of all wide-field (>~ 1 sq. deg) photographic observations archived in astronomical institutions around the world, thereby facilitating and stimulating their use and preservation as a valuable source of information for future investigations in astronomy. At present the WFPDB manages plate-index information for 25% of all existing plates, providing on-line access from Sofia (http://www.skyarchive.org/search) and in CDS, Strasbourg. Here we present the further development of the WFPDB as an instrument for searching for long-term brightness variations of sky objects, with emphasis on the European photographic plate collections (of the existing 2 million wide-field plates, more than 55% are in Europe: Germany, Russia, Ukraine, Italy, Czech Republic, etc.). We discuss examples of digitization (with flatbed scanners) of the European plate archives in Sonneberg, Pulkovo, Asiago, Byurakan, Bamberg, etc., and virtual links of the WFPDB with the European AVO, ADS, and IBVS.

  14. Gaia: Processing to Archive

    NASA Astrophysics Data System (ADS)

    O'Mullane, W.; Lammers, U.; Hernandez, J.

    2011-07-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2012. Its main objective is to perform a stellar census of the 10^9 brightest objects in our galaxy (completeness to V=20 mag), from which an astrometric catalog of μas-level accuracy will be constructed. We briefly update the reader on the status of the Astrometric Global Iterative Solution (AGIS) for Gaia. The results of AGIS feed into the Main Database (MDB), which is also described here. All results from Gaia processing are in fact stored in the MDB, which is governed by a strict Interface Control Document (ICD). We describe the Distributed Data Model tool developed for Gaia, the Data Dictionary. Finally we mention public access to Gaia data in the archive. We present current plans and thinking on the archive from the ESA/DPAC perspective.

  15. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  16. Performance Measures, Benchmarking and Value.

    ERIC Educational Resources Information Center

    McGregor, Felicity

    This paper discusses performance measurement in university libraries, based on examples from the University of Wollongong (UoW) in Australia. The introduction highlights the integration of information literacy into the curriculum and the outcomes of a 1998 UoW student satisfaction survey. The first section considers performance indicators in…

  17. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7

  18. The Yohkoh Legacy Archive

    NASA Astrophysics Data System (ADS)

    Acton, L. W.; Takeda, A.; McKenzie, D. E.

    2008-12-01

    Yohkoh was a Japan/US/UK mission for the study of high energy processes on the sun. Scientific operation extended from September 1991 until 14 December 2001, nearly an entire solar activity cycle. Observations included full-disk soft and hard x-ray imaging, hard x-ray spectroscopy, and high resolution flare spectroscopy in S XV, Ca XIX, Fe XXV and Fe XXVI from the Bent Crystal Spectrometer (BCS). The Yohkoh Legacy Archive (YLA) brings together all Yohkoh observational data along with extensive documentation required for a full understanding of instrumentation, mission operations, and data reduction and correction. Extensive meta-data aid the user in efficiently accessing the data base. Creation of the YLA has been the work of 8 years; the top objective has been to present the extensive Yohkoh database in a form fully usable for scientists or students who are unfamiliar with Yohkoh instrumentation. The YLA may be accessed at http://solar.physics.montana.edu/ylegacy or through the Virtual Solar Observatory (VSO), although the VSO capability is still under development. Data from the Yohkoh hard x-ray instruments and BCS are presented in flare list formats. The Soft X-ray Telescope (SXT) images are available in quantitative and movie formats. This long, uniform, archive of SXT images is especially useful for solar cycle studies as well as high resolution soft x-ray flare studies. Examples of YLA data products and research enabled by the archive will be presented.

  19. Critical experiment data archiving

    SciTech Connect

    Koponen, B.L. ); Clayton, E.D.; Doherty, A.L. )

    1991-01-01

    Critical experiment facilities produced a large number of important data during the past 45 years; however, many useful data remain unpublished. The unpublished material exists in the form of experimenters' logbooks, notes, photographs, material descriptions, and so on. These data could be important for computer code validation, understanding the physics of criticality, facility design, or setting process limits. In the past, criticality specialists have been able to obtain unpublished details by direct contact with the experimenters. Obviously, this will not be possible indefinitely. Most of the US critical experiment facilities are now closed, and the experimenters are moving to other jobs, retiring, or otherwise becoming unavailable for this informal assistance. Also, the records are in danger of being discarded or lost during facility closures, cleanup activities, or in storage. A project was begun in 1989 to ensure that important unpublished data from critical experiment facilities in the United States are archived and made available as a resource of the US Department of Energy's (DOE's) Nuclear Criticality Information System (NCIS). The objective of this paper is to summarize the project accomplishments to date and bring these activities to the attention of those who might be aware of the location of source information needed for archiving and could assist in getting the materials included in the archive.

  20. Lohse's historic plate archive

    NASA Astrophysics Data System (ADS)

    Tsvetkov, M.; Tsvetkova, K.; Richter, G.; Scholz, G.; Böhm, P.

    The description and analysis of Oswald Lohse's astrophotographic plates, collected at the Astrophysical Observatory Potsdam in the period 1879 - 1889, are presented. 67 plates of the archive, taken with the largest instrument of the observatory at that time - the refractor (D = 0.30 m, F = 5.40 m, scale = 38''/mm) - and with the second heliographic objective (D = 0.13 m, F = 1.36 m, scale = 152''/mm), survived two world wars in relatively good condition. The plate emulsions are from different manufacturers from the early days of astrophotography (Gädicke, Schleussner, Beernaert, etc.). The plates are usually 9x12 cm in size, which corresponds to fields of 1.2 deg and 5 deg respectively for the two instruments mentioned above. The average limiting magnitude is 13.0 (pg). Besides the plates taken for technical experiments (work on photographic processes, testing of new instruments and observing methods), the scientific observations followed programs for studies of planet surfaces, bright stars, some double stars, stellar clusters and nebulous objects. Lohse's archive is included in the Wide Field Plate Database (http://www.skyarchive.org) as the oldest systematic one, covering the fields of Orion (M42/43), the Pleiades, h & chi Persei, M37, M3, M11, M13, M92, M31, etc. Ten archive plates were digitized with the PDS 2020 GM+ microdensitometer of Münster University.

  1. Experts discuss how benchmarking improves the healthcare industry. Roundtable discussion.

    PubMed

    Capozzalo, G L; Hlywak, J W; Kenny, B; Krivenko, C A

    1994-09-01

    Healthcare Financial Management engaged four benchmarking experts in a discussion about benchmarking and its role in the healthcare industry. The experts agree that benchmarking by itself does not create change unless it is part of a larger continuous quality improvement program; that benchmarking works best when senior management supports it enthusiastically and when the "appropriate" people are involved; and that benchmarking, when implemented correctly, is one of the best tools available to help healthcare organizations improve their internal processes. PMID:10146069

  2. The COROT Archive at LAEFF

    NASA Astrophysics Data System (ADS)

    Velasco, Almudena; Gutiérrez, Raúl; Solano, Enrique; García-Torres, Miguel; López, Mauro; Sarro, Luis Manuel

    We describe here the main capabilities of the COROT archive. The archive (http://sdc.laeff.inta.es/corotfa/jsp/searchform.jsp), managed at LAEFF in the framework of the Spanish Virtual Observatory (http://svo.laeff.inta.es), has been developed following the standards and requirements defined by IVOA (http://www.ivoa.net). The COROT archive at LAEFF will be publicly available by the end of 2008.

  3. School Culture Benchmarks: Bridges and Barriers to Successful Bullying Prevention Program Implementation

    ERIC Educational Resources Information Center

    Coyle, H. Elizabeth

    2008-01-01

    A substantial body of research indicates that positive school culture benchmarks are integrally tied to the success of school reform and change in general. Additionally, an emerging body of research suggests a similar role for school culture in effective implementation of school violence prevention and intervention efforts. However, little…

  4. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  5. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. PMID:23999329
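
    As a worked example of one of the risk-adjustment methods named above, indirect standardization compares the observed number of infections with the number expected if benchmark stratum-specific rates applied to the local exposure; all rates and device-day counts below are invented.

```python
# Sketch: standardized infection ratio (SIR) = observed / expected infections.
benchmark_rate_per_1000 = {"ICU": 2.5, "ward": 1.0}      # reference rates per 1000 device-days
local_device_days = {"ICU": 4000, "ward": 12000}         # local central-line days per stratum
observed_infections = 18

expected = sum(
    benchmark_rate_per_1000[unit] * local_device_days[unit] / 1000.0
    for unit in local_device_days
)
sir = observed_infections / expected
print(f"expected = {expected:.1f}, SIR = {sir:.2f}")     # SIR > 1 means worse than benchmark
```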

  6. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  7. SODA: Smart Objects, Dumb Archives

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maly, Kurt; Zubair, Mohammad; Shen, Stewart N. T.

    2004-01-01

    We present the Smart Object, Dumb Archive (SODA) model for digital libraries (DLs). The SODA model transfers functionality traditionally associated with archives to the archived objects themselves. We are exploiting this shift of responsibility to facilitate other DL goals, such as interoperability, object intelligence and mobility, and heterogeneity. Objects in a SODA DL negotiate presentation of content and handle their own terms and conditions. In this paper we present implementations of our smart objects, buckets, and our dumb archive (DA). We discuss the status of buckets and DA and how they are used in a variety of DL projects.

  8. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  9. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  10. Purple Level - 1 Milestone Review Committee I/O and Archive Follow-up Demonstration

    SciTech Connect

    Gary, M R

    2006-12-05

    On July 7th 2006, the Purple Level-1 Review Committee convened and was presented with evidence of the completion of Level-2 Milestone 461 (Deploy First Phase of I/O Infrastructure for Purple) which was performed in direct support of the Purple Level-1 milestone. This evidence included a short presentation and the formal documentation of milestone No.461 (see UCRL-TR-217288). Following the meeting, the Committee asked for the following additional evidence: (1) Set a speed measurement/goal/target assuming a number of files that the user needs to get into the archives. Then redo the benchmark using whatever tool(s) the labs prefer (HTAR, for example). Document how long the process takes. (2) Develop a test to read files back to confirm that what the user gets out of the archive is what the user put into the archive. This evidence has been collected and is presented here.
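
    The read-back confirmation requested in item (2) amounts to a checksum round trip. A minimal sketch, assuming ordinary filesystem access rather than the HPSS/HTAR tooling actually used for the milestone:

```python
# Minimal sketch of the read-back check described above: hash each file before it
# is written to the archive, retrieve it later, and confirm the hashes match.
# Paths and the retrieve step are placeholders; the actual milestone used HPSS
# tools such as HTAR, which are not modeled here.
import hashlib
from pathlib import Path

def file_digest(path, algo="md5", chunk=1 << 20):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_round_trip(original_dir, retrieved_dir):
    """Return a list of files whose retrieved copy does not match the original."""
    mismatches = []
    for original in Path(original_dir).rglob("*"):
        if original.is_file():
            retrieved = Path(retrieved_dir) / original.relative_to(original_dir)
            if not retrieved.exists() or file_digest(original) != file_digest(retrieved):
                mismatches.append(str(original))
    return mismatches
```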

  11. The Archival Back Burner: Manuscript Collections and the National Archives

    ERIC Educational Resources Information Center

    Purcell, Aaron D.

    2004-01-01

    Greater access to archival materials remains a significant challenge to archivists, librarians, and researchers. In addition to official records documenting governmental activities and agencies, the National Archives and Records Administration (NARA) has significant collections of donated personal papers. Some are processed, some are in the…

  12. Analyzing Archival Intelligence: A Collaboration between Library Instruction and Archives

    ERIC Educational Resources Information Center

    Hensley, Merinda Kaye; Murphy, Benjamin P.; Swain, Ellen D.

    2014-01-01

    Although recent archival scholarship promotes the use of primary sources for developing students' analytical research skills, few studies focus on standards or protocols for teaching or assessing archival instruction. Librarians have designed and tested standards and learning assessment strategies for library instruction, and archivists would do…

  13. Social Media and Archives: A Survey of Archive Users

    ERIC Educational Resources Information Center

    Washburn, Bruce; Eckert, Ellen; Proffitt, Merrilee

    2013-01-01

    In April and May of 2012, the Online Computer Library Center (OCLC) Research conducted a survey of users of archives to learn more about their habits and preferences. In particular, they focused on the roles that social media, recommendations, reviews, and other forms of user-contributed annotation play in archival research. OCLC surveyed faculty,…

  14. HEASARC Software Archive

    NASA Technical Reports Server (NTRS)

    White, Nicholas (Technical Monitor); Murray, Stephen S.

    2003-01-01

    (1) Chandra Archive: SAO has maintained the interfaces through which HEASARC gains access to the Chandra Data Archive. At HEASARC's request, we have implemented an anonymous ftp copy of a major part of the public archive and we keep that archive up-to-date. SAO has participated in the ADEC interoperability working group, establishing guidelines or interoperability standards and prototyping such interfaces. We have provided an NVO-based prototype interface, intending to serve the HEASARC-led NVO demo project. HEASARC's Astrobrowse interface was maintained and updated. In addition, we have participated in design discussions surrounding HEASARC's Caldb project. We have attended the HEASARC Users Group meeting and presented CDA status and developments. (2) Chandra CALDB: SAO has maintained and expanded the Chandra CALDB by including four new data file types, defining the corresponding CALDB keyword/identification structures. We have provided CALDB upgrades for the public (CIAO) and for Standard Data Processing. Approximately 40 new files have been added to the CALDB in these version releases. There have been in the past year ten of these CALDB upgrades, each with unique index configurations. In addition, with the inputs from software, archive, and calibration scientists, as well as CIAO/SDP software developers, we have defined a generalized expansion of the existing CALDB interface and indexing structure. The purpose of this is to make the CALDB more generally applicable and useful in new and future missions that will be supported archivally by HEASARC. The generalized interface will identify additional configurational keywords and permit more extensive calibration parameter and boundary condition specifications for unique file selection. HEASARC scientists and developers from SAO and GSFC have become involved in this work, which is expected to produce a new interface for general use within the current year. (3) DS9: One of the decisions that came from last year

  15. Benchmark field study of deep neutron penetration

    NASA Astrophysics Data System (ADS)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  16. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K.; Gold, R.; Roberts, J.H.; Preston, C.C.

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  17. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.
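
    The coarse-grain structure described above can be caricatured in a few lines: zones advance independently each time step and then exchange boundary values. The toy 1-D diffusion sketch below is only an illustration of that pattern, not the NPB-MZ reference code.

```python
# Toy sketch of the multi-zone strategy: each zone advances its interior
# independently for a time step, then neighboring zones exchange boundary
# values. This illustrates the coarse-grain structure only; it is not the
# NPB-MZ reference implementation.
import numpy as np

def advance_zone(u, nu=0.25):
    """One explicit diffusion step on the interior of a single 1-D zone."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + nu * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

def exchange_boundaries(zones):
    """Copy edge values between adjacent zones (the coupling step)."""
    for left, right in zip(zones[:-1], zones[1:]):
        left[-1], right[0] = right[1], left[-2]

zones = [np.linspace(i, i + 1, 16) for i in range(4)]   # four loosely coupled zones
for step in range(100):
    zones = [advance_zone(u) for u in zones]            # independent (parallelizable)
    exchange_boundaries(zones)                          # boundary-value exchange
```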

  18. Analysis of ANS LWR physics benchmark problems.

    SciTech Connect

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the K{sub eff} and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the K{sub eff} and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  19. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950

  20. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  1. Nuclear data uncertainties by the PWR MOX/UO{sub 2} core rod ejection benchmark

    SciTech Connect

    Pasichnyk, I.; Klein, M.; Velkov, K.; Zwermann, W.; Pautz, A.

    2012-07-01

    The rod ejection transient of the OECD/NEA and U.S. NRC PWR MOX/UO{sub 2} core benchmark is considered under the influence of nuclear data uncertainties. Using the GRS uncertainty and sensitivity software package XSUSA, the propagation of the uncertainties in nuclear data up to the transient calculations is considered. A statistically representative set of transient calculations is analyzed, and both integral and local output quantities are compared with the benchmark results of different participants. It is shown that the uncertainties in nuclear data play a crucial role in the interpretation of the results of the simulation. (authors)
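
    The sampling-based propagation idea can be sketched generically as follows; the toy response function, perturbation factors, and standard deviations are assumptions for illustration and do not reproduce XSUSA's treatment of covariance data.

```python
# Generic random-sampling sketch of nuclear-data uncertainty propagation
# (illustrative only; XSUSA's actual sampling of covariance data is not
# reproduced here). A toy "transient model" maps perturbed data to a peak
# power output, and statistics over the sample characterize the output spread.
import numpy as np

rng = np.random.default_rng(0)
nominal = {"capture": 1.00, "fission": 1.00, "scatter": 1.00}
rel_std = {"capture": 0.03, "fission": 0.02, "scatter": 0.05}   # assumed 1-sigma

def transient_model(xs):
    """Placeholder response: peak power as a simple function of the perturbed data."""
    return 100.0 * xs["fission"] / (xs["capture"] * np.sqrt(xs["scatter"]))

samples = []
for _ in range(500):                      # statistically representative set of runs
    perturbed = {k: nominal[k] * (1.0 + rng.normal(0.0, rel_std[k])) for k in nominal}
    samples.append(transient_model(perturbed))

samples = np.array(samples)
print(f"peak power: mean = {samples.mean():.1f}, 1-sigma = {samples.std(ddof=1):.1f}")
```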

  2. A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking

    NASA Astrophysics Data System (ADS)

    Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes

    We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. The architecture and protocol are designed to provide anonymity to users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
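
    One standard building block for this kind of exchangeable secure multi-party computation is an additive secret-sharing "secure sum", sketched below; it is illustrative only and is not the platform's actual protocol (KPI values and the modulus are made up).

```python
# A minimal additive secret-sharing "secure sum" sketch, one standard building
# block for the kind of exchangeable multi-party computation mentioned above.
# It is not the platform's actual protocol; KPI values and the modulus are
# illustrative.
import secrets

MOD = 2**61 - 1          # arithmetic is done modulo a large prime

def make_shares(value, n_parties):
    """Split an integer KPI into n random shares that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

kpis = [120, 95, 143]                      # each party's private KPI
n = len(kpis)
all_shares = [make_shares(v, n) for v in kpis]

# Party j only ever sees the j-th share from every peer...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % MOD for j in range(n)]
# ...and the published partial sums reveal only the aggregate.
total = sum(partial_sums) % MOD
print(total, "==", sum(kpis))              # peer-group total, e.g. for computing the mean
```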

  3. Automating Data Submission to a National Archive

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological and Chemical Oceanography Data Management Office

    2010-12-01

    In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or other appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We will describe our new workflow; a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML formatted MySQL result set to a FGDC compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
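
    The crosswalk step (database result set to FGDC-style XML via XSLT) can be sketched with lxml as below; the element names and stylesheet are placeholders, not BCO-DMO's actual schema or crosswalk.

```python
# Sketch of the crosswalk step: transform an XML-serialized database result set
# into a (greatly simplified) FGDC-style metadata record with an XSLT stylesheet.
# The element names and stylesheet here are placeholders, not BCO-DMO's schema.
from lxml import etree

result_set = etree.XML("""
<dataset><title>Cruise nutrient profiles</title><originator>Example PI</originator></dataset>
""")

crosswalk = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/dataset">
    <metadata>
      <idinfo>
        <citation><citeinfo>
          <origin><xsl:value-of select="originator"/></origin>
          <title><xsl:value-of select="title"/></title>
        </citeinfo></citation>
      </idinfo>
    </metadata>
  </xsl:template>
</xsl:stylesheet>
"""))

fgdc_record = crosswalk(result_set)
print(etree.tostring(fgdc_record, pretty_print=True).decode())
```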

  4. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution to the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, a backward Euler, with Richardson extrapolation, also called an acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are intended to provide guidance to those who wish to develop further numerical improvements. (authors)
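
    The two ingredients named above, a backward Euler step and Richardson extrapolation, can be illustrated for the one-delayed-group point kinetics equations; the parameter values below are generic, and the paper's multi-level extrapolation and feedback cases are not reproduced.

```python
# Illustrative sketch of the ingredients named above: backward Euler for the
# one-delayed-group point kinetics equations, accelerated by one level of
# Richardson extrapolation. Parameter values are generic; the paper's full
# multi-level extrapolation and feedback cases are not reproduced.
import numpy as np

beta, lam, Lambda, rho = 0.0065, 0.08, 1.0e-4, 0.003   # step reactivity insertion
A = np.array([[(rho - beta) / Lambda, lam],
              [beta / Lambda,        -lam]])

def backward_euler(y0, t_end, h):
    """March (I - hA) y_{k+1} = y_k from 0 to t_end with fixed step h."""
    y = np.array(y0, dtype=float)
    M = np.linalg.inv(np.eye(2) - h * A)
    for _ in range(round(t_end / h)):
        y = M @ y
    return y

y0 = [1.0, beta / (lam * Lambda)]        # equilibrium precursor concentration
h, t_end = 1.0e-3, 0.1
coarse = backward_euler(y0, t_end, h)
fine = backward_euler(y0, t_end, h / 2)
extrapolated = 2.0 * fine - coarse       # first-order method -> O(h^2) estimate
print("neutron density estimate:", extrapolated[0])
```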

  5. Benchmark testing of {sup 233}U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available {sup 233}U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised {sup 233}U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k{sub eff} were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  6. Benchmarking: implementing the process in practice.

    PubMed

    Stark, Sheila; MacHale, Anita; Lennon, Eileen; Shaw, Lynne

    Government guidance and policy promotes the use of benchmarks as measures against which practice and care can be measured. This provides the motivation for practitioners to make changes to improve patient care. Adopting a systematic approach, practitioners can implement changes in practice quickly. The process requires motivation and communication between professionals of all disciplines. It provides a forum for sharing good practice and developing a support network. In this article the authors outline the initial steps taken by three PCGs in implementing the benchmarking process as they move towards primary care trust status. PMID:12212335

  7. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  8. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  9. Benchmark 4 - Wrinkling during cup drawing

    NASA Astrophysics Data System (ADS)

    Dick, Robert; Cardoso, Rui; Paulino, Mariana; Yoon, Jeong Whan

    2013-12-01

    Benchmark-4 is designed to predict wrinkling during cup drawing. Two different punch geometries have been selected in order to investigate changes in wrinkling amplitude and wave. To study the effect of material on wrinkling, two distinct materials, AA 5042 and AKDQ steel, are also considered in the benchmark. The problem description, material properties, and simulation reports with experimental data are summarized. At the request of the author and the Proceedings Editor, a corrected and updated version of this paper was published on January 2, 2014. The Corrigendum attached to the updated article PDF contains a list of the changes made to the original published version.

  10. Cluster Active Archive: Overview

    NASA Astrophysics Data System (ADS)

    Laakso, H.; Perry, C.; McCaffrey, S.; Herment, D.; Allen, A. J.; Harvey, C. C.; Escoubet, C. P.; Gruenberger, C.; Taylor, M. G. G. T.; Turner, R.

    The four-satellite Cluster mission investigates the small-scale structures and physical processes related to interaction between the solar wind and the magnetospheric plasma. The Cluster Active Archive (CAA) (URL: http://caa.estec.esa.int) will contain the entire set of Cluster high-resolution data and other allied products in a standard format and with a complete set of metadata in machine readable format. The total amount of the data files in compressed format is expected to exceed 50 TB. The data archive is publicly accessible and suitable for science use and publication by the world-wide scientific community. The CAA aims to provide user-friendly services for searching and accessing these data and ancillary products. The CAA became operational in February 2006 and as of Summer 2008 has data from most of the Cluster instruments for at least the first 5 years of operations (2001-2005). The coverage and range of products are being continually improved with more than 200 datasets available from each spacecraft, including high-resolution magnetic and electric DC fields and wave spectra; full three-dimensional electron and ion distribution functions from a few eV to hundreds of keV; and various ancillary and browse products to help with spacecraft and event location. The CAA is continuing to extend and improve the online capabilities of the system and the quality of the existing data. It will add new data files for years 2006-2009 and is preparing for the long-term archive with complete coverage after the completion of the Cluster mission.

  11. Facing growth in the European Nucleotide Archive

    PubMed Central

    Cochrane, Guy; Alako, Blaise; Amid, Clara; Bower, Lawrence; Cerdeño-Tárraga, Ana; Cleland, Iain; Gibson, Richard; Goodgame, Neil; Jang, Mikyung; Kay, Simon; Leinonen, Rasko; Lin, Xiu; Lopez, Rodrigo; McWilliam, Hamish; Oisel, Arnaud; Pakseresht, Nima; Pallreddy, Swapna; Park, Youngmi; Plaister, Sheila; Radhakrishnan, Rajesh; Rivière, Stephane; Rossello, Marc; Senf, Alexander; Silvester, Nicole; Smirnov, Dmitriy; ten Hoopen, Petra; Toribio, Ana; Vaughan, Daniel; Zalunin, Vadim

    2013-01-01

    The European Nucleotide Archive (ENA; http://www.ebi.ac.uk/ena/) collects, maintains and presents comprehensive nucleic acid sequence and related information as part of the permanent public scientific record. Here, we provide brief updates on ENA content developments and major service enhancements in 2012 and describe in more detail two important areas of development and policy that are driven by ongoing growth in sequencing technologies. First, we describe the ENA data warehouse, a resource for which we provide a programmatic entry point to integrated content across the breadth of ENA. Second, we detail our plans for the deployment of CRAM data compression technology in ENA. PMID:23203883

  12. The National Archives Constitution Community.

    ERIC Educational Resources Information Center

    Potter, Lee Ann

    2000-01-01

    In the summer of 1998, education specialists at the National Archives received a grant from the Department of Education to hire nine outstanding classroom teachers to develop lessons and activities based on historical documents that had been digitized by the agency and were available online. Discussion includes exploring the National Archives web…

  13. Teaching Undergraduates to Think Archivally

    ERIC Educational Resources Information Center

    Nimer, Cory L.; Daines, J. Gordon, III

    2012-01-01

    This case study describes efforts in the L. Tom Perry Special Collections to build and teach an undergraduate course to develop archival literacy skills in undergraduate students. The article reviews current models of archival instruction and describes how these were applied in creating the course content. An evaluation of the course's outcomes…

  14. History, Archives, and Information Science.

    ERIC Educational Resources Information Center

    McCrank, Lawrence J.

    1995-01-01

    Discusses trends and issues in the archival, historical, and information sciences. Examines the relationship between history and information science; surveys archives in the context of contemporary issues pervading history and information science. Also discusses concerns common to all three sciences, including technological obsolescence and…

  15. History of hydrology archive

    NASA Astrophysics Data System (ADS)

    Back, William

    There has long been concern over how to archive important material related to the history of hydrology. Bill Back (U.S. Geological Survey, Reston, Va.), past chairman of the AGU Committee on History and Heritage of Hydrology, has made contact with the American Heritage Center, which has been collecting such material for nearly 20 years. They now have an expanding program and are most enthusiastic about helping us preserve historical material. They would like to receive files, manuscripts, photographs, and similar material from hydrologists throughout the United States and other countries.

  16. Community archiving of imaging studies

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Roys, Steven R.; Munjal, Sunita

    1996-05-01

    The quantity of image data created in a large radiology practice has long been a challenge for available archiving technology. Traditional methods of archiving the large quantity of films generated in radiology have relied on warehousing in remote sites, with courier delivery of film files for historical comparisons. A digital community archive, accessible via a wide area network, represents a feasible solution to the problem of archiving digital images from a busy practice. In addition, it affords a physician caring for a patient access to imaging studies performed at a variety of healthcare institutions without the need to repeat studies. Security problems include both network security issues in the WAN environment and access control for patient, physician and imaging center. The key obstacle to developing a community archive is currently political. Reluctance to participate in a community archive can be reduced by appropriate design of the access mechanisms.

  17. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    SciTech Connect

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-01-15

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and reflected thermal system. A series of integral experiments has been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  18. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    SciTech Connect

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-06-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and reflected thermal system. A series of integral experiments has been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates very good agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  19. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  20. (abstract) Satellite Physical Oceanography Data Available From an EOSDIS Archive

    NASA Technical Reports Server (NTRS)

    Digby, Susan A.; Collins, Donald J.

    1996-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) at the Jet Propulsion Laboratory archives and distributes data as part of the Earth Observing System Data and Information System (EOSDIS). Products available from JPL are largely satellite derived and include sea-surface height, surface-wind speed and vectors, integrated water vapor, atmospheric liquid water, sea-surface temperature, heat flux, and in-situ data as it pertains to satellite data. Much of the data is global and spans fourteen years. There is email access, a WWW site, product catalogs, and FTP capabilities. Data is free of charge.

  1. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-05-01

    Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration for industrial carbon emissions standards relates mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in the current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, and aims to reduce carbon emissions from their original sources. This method is expected to be developed under uncertain internal and external contexts and is further expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
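
    The embodied-intensity idea rests on the standard input-output identity that total (direct plus indirect) intensities equal the direct intensities multiplied by the Leontief inverse; a minimal sketch with made-up three-sector data follows.

```python
# Minimal input-output sketch of the embodied-intensity idea: total (direct plus
# indirect) emission intensities follow from the direct intensities and the
# Leontief inverse. The 3-sector technical coefficients and direct intensities
# below are made up for illustration.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],     # technical coefficient matrix (inputs per unit output)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
direct = np.array([2.0, 0.5, 1.2])    # direct emissions per unit output (illustrative units)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
embodied = direct @ leontief_inverse  # total intensities embodied in final delivery

for name, d, e in zip(["power", "manufacturing", "services"], direct, embodied):
    print(f"{name:13s} direct = {d:4.2f}  embodied = {e:4.2f}")
```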

  2. Distributed Active Archive Center

    NASA Technical Reports Server (NTRS)

    Bodden, Lee; Pease, Phil; Bedet, Jean-Jacques; Rosen, Wayne

    1993-01-01

    The Goddard Space Flight Center Version 0 Distributed Active Archive Center (GSFC V0 DAAC) is being developed to enhance and improve scientific research and productivity by consolidating access to remote sensor earth science data in the pre-EOS time frame. In cooperation with scientists from the science labs at GSFC, other NASA facilities, universities, and other government agencies, the DAAC will support data acquisition, validation, archive and distribution. The DAAC is being developed in response to EOSDIS Project Functional Requirements as well as from requirements originating from individual science projects such as SeaWiFS, Meteor3/TOMS2, AVHRR Pathfinder, TOVS Pathfinder, and UARS. The GSFC V0 DAAC has begun operational support for the AVHRR Pathfinder (as of April, 1993), TOVS Pathfinder (as of July, 1993) and the UARS (September, 1993) Projects, and is preparing to provide operational support for SeaWiFS (August, 1994) data. The GSFC V0 DAAC has also incorporated the existing data, services, and functionality of the DAAC/Climate, DAAC/Land, and the Coastal Zone Color Scanner (CZCS) Systems.

  3. The Cluster Active Archive

    NASA Astrophysics Data System (ADS)

    Laakso, H.; Perry, C. H.; Escoubet, C. P.; McCaffrey, S.; Herment, D.; Esson, S.; Bowen, H.; Buggy, O.; Taylor, M. G.

    2008-05-01

    The four-satellite Cluster mission investigates small-scale structures (in three dimensions) of the Earth's plasma environment, such as those involved in the interaction between the solar wind and the magnetospheric plasma, in global magnetotail dynamics, in cross-tail currents, and in the formation and dynamics of the neutral line and of plasmoids. The Cluster Active Archive (CAA) (http://caa.estec.esa.int/) will contain the entire set of Cluster high-resolution data and other allied products in a standard format and with a complete set of metadata in machine-readable form. The data archived are (1) publicly accessible, (2) of the best quality achievable with the given resources, and (3) suitable for science use and publication by both the Cluster and broader scientific community. The CAA provides user-friendly services for searching and accessing these data; for example, users can save and restore their selections, speeding up similar requests. The CAA is continuing to extend and improve the online capabilities of the system; for example, CAA products can be downloaded either via a web interface or a machine-accessible interface.

  4. Multimedia medical archiving system

    NASA Astrophysics Data System (ADS)

    Sood, Arun K.; Atallah, George C.; Rao, Amar; Perez-Lopez, Kathleen G.; Freedman, Matthew T.

    1995-11-01

    The demand for digital radiological imaging and archiving applications has been increasing rapidly. These digital applications offer significant advantages to the physician over the traditional film-based technique. They result in faster and better-quality services, support remote access and conferencing capabilities, provide on-demand service availability, eliminate film processing costs, and, most significantly, are suitable services for the evolving global information superhighway. Several existing medical multimedia systems incorporate and utilize these advanced technical features. However, radiologists are seeking an order-of-magnitude improvement in the overall current system design and performance indices (such as transaction response times, system utilization and throughput). One of the main technical concerns radiologists raise is miss-filing. Such an event decreases radiologist productivity, introduces unnecessary workload, and results in customer dissatisfaction. This paper presents the Multimedia Medical Archiving System, which can be used in hospitals and medical centers for storing and retrieving radiological images. Furthermore, the paper emphasizes a viable solution to the miss-filing problem. The results obtained demonstrate and quantify the improvement in overall radiological operations. Specifically, the paper demonstrates an improvement on the order of 80% in the response time for retrieving images. This enhancement in system performance translates directly into a substantial improvement in radiologist productivity.

  5. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90 and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks. Finally, we mention future NAS plans for the NPB.

  6. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  7. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  8. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  9. Benchmarking: A New Approach to Space Planning.

    ERIC Educational Resources Information Center

    Fink, Ira

    1999-01-01

    Questions some fundamental assumptions of historical methods of space guidelines in college facility planning, and offers an alternative approach to space projections based on a new benchmarking method. The method, currently in use at several institutions, uses space per faculty member as the basis for prediction of need and space allocation. (MSE)

  10. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  11. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  12. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  13. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
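
    Two of the system-level metrics mentioned above reduce to simple arithmetic; the sketch below uses hypothetical inputs, not values from the LBNL benchmark dataset.

```python
# Back-of-the-envelope sketch of two of the system-level metrics named above;
# the input numbers are hypothetical, not from the LBNL benchmark dataset.
def air_change_rate(supply_cfm, room_volume_ft3):
    """Air changes per hour: airflow (ft^3/min) * 60 / room volume (ft^3)."""
    return supply_cfm * 60.0 / room_volume_ft3

def watts_per_cfm(fan_power_kw, supply_cfm):
    """Air-handling efficiency metric: fan power (W) per cfm of supply air."""
    return fan_power_kw * 1000.0 / supply_cfm

print(air_change_rate(supply_cfm=60000, room_volume_ft3=12000))   # 300 air changes per hour
print(watts_per_cfm(fan_power_kw=45.0, supply_cfm=60000))         # 0.75 W/cfm
```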

  14. Closed benchmarks for network community structure characterization

    NASA Astrophysics Data System (ADS)

    Aldecoa, Rodrigo; Marín, Ignacio

    2012-02-01

    Characterizing the community structure of complex networks is a key challenge in many scientific fields. Very diverse algorithms and methods have been proposed to this end, many working reasonably well in specific situations. However, no consensus has emerged on which of these methods is the best to use in practice. In part, this is due to the fact that testing their performance requires the generation of a comprehensive, standard set of synthetic benchmarks, a goal not yet fully achieved. Here, we present a type of benchmark that we call “closed,” in which an initial network of known community structure is progressively converted into a second network whose communities are also known. This approach differs from all previously published ones, in which networks evolve toward randomness. The use of this type of benchmark allows us to monitor the transformation of the community structure of a network. Moreover, we can predict the optimal behavior of the variation of information, a measure of the quality of the partitions obtained, at any moment of the process. This enables us in many cases to determine the best partition among those suggested by different algorithms. Also, since any network can be used as a starting point, extensive studies and comparisons can be performed using a heterogeneous set of structures, including random ones. These properties make our benchmarks a general standard for comparing community detection algorithms.
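
    The variation of information referred to above can be computed directly from two partitions' label lists; a brief sketch with illustrative labels:

```python
# Sketch of the variation of information (VI) between two partitions of the same
# node set, the partition-quality measure referred to above (illustrative labels).
import math
from collections import Counter

def variation_of_information(part_a, part_b):
    """VI = H(A) + H(B) - 2 I(A;B), computed from cluster label lists."""
    n = len(part_a)
    pa, pb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    h_a = -sum(c / n * math.log(c / n) for c in pa.values())
    h_b = -sum(c / n * math.log(c / n) for c in pb.values())
    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in joint.items())
    return h_a + h_b - 2.0 * mi

print(variation_of_information([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 1, 2]))  # > 0; identical partitions give 0
```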

  15. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
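
    A generator for this class of benchmark graphs is available in recent versions of networkx as LFR_benchmark_graph; the sketch below builds one graph with illustrative parameters and reads back the planted communities.

```python
# The LFR construction described here is available in networkx (recent versions)
# as LFR_benchmark_graph; this sketch generates one benchmark graph with
# power-law degree and community-size distributions and recovers the planted
# communities from the node attributes. Parameter values are illustrative.
import networkx as nx

G = nx.LFR_benchmark_graph(
    n=250, tau1=3.0, tau2=1.5, mu=0.1,      # degree exponent, community exponent, mixing
    average_degree=5, min_community=20, seed=10,
)

# Each node carries the frozenset of its planted community's members.
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(f"{G.number_of_nodes()} nodes, {len(communities)} planted communities")
```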

  16. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...

  17. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  18. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  19. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  20. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.
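
    As an illustration of how such a scoring metric can work, the sketch below turns a model-observation mismatch into a dimensionless score in (0, 1]; the exp(-relative error) form is an assumption for illustration and is not necessarily the exact metric used by the ILAMB software.

```python
# Illustrative scoring sketch in the spirit of the system described above: turn a
# model-minus-observation bias into a dimensionless score in (0, 1]. This generic
# exp(-|relative error|) form is an assumption for illustration, not necessarily
# the exact metric used by the ILAMB software.
import numpy as np

def bias_score(model, obs):
    """Score = mean of exp(-|model - obs| / std(obs)); 1 means no bias."""
    obs = np.asarray(obs, dtype=float)
    model = np.asarray(model, dtype=float)
    rel = np.abs(model - obs) / obs.std()
    return float(np.exp(-rel).mean())

obs = np.array([2.1, 2.4, 2.9, 3.5, 3.1, 2.6])      # e.g., a site flux climatology
model = np.array([2.4, 2.6, 3.3, 3.9, 3.4, 2.8])    # hypothetical model output
print(f"bias score = {bias_score(model, obs):.2f}")
```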

  1. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy

    PubMed Central

    Wheeler, Quentin; Bourgoin, Thierry; Coddington, Jonathan; Gostony, Timothy; Hamilton, Andrew; Larimer, Roy; Polaszek, Andrew; Schauff, Michael; Solis, M. Alma

    2012-01-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an “r” strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a “K” strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the “K” strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the

  2. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2009-11-01

    A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.

  3. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; Hargrove, Paul; Jin, Haoqiang; Fuerlinger, Karl; Koniges, Alice; Wright, Nicholas J.

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  4. CFD validation in OECD/NEA t-junction benchmark.

    SciTech Connect

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E.

    2011-08-23

    benchmark data. The numerical scheme has very small numerical diffusion and is second-order accurate in space and first-order accurate in time. We compare and contrast simulation results for three computational fluid dynamics codes, CABARET, Conv3D, and Nek5000, for the T-junction thermal striping problem that was the focus of a recent OECD/NEA blind benchmark. The corresponding codes utilize finite-difference implicit large eddy simulation (ILES), finite-volume LES on fully staggered grids, and an LES spectral element method (SEM), respectively. The simulation results are in good agreement with experimental data. We present results from a study of sensitivity to computational mesh and time integration interval, and discuss the next steps in the simulation of this problem.

  5. (Per)Forming Archival Research Methodologies

    ERIC Educational Resources Information Center

    Gaillet, Lynee Lewis

    2012-01-01

    This article raises multiple issues associated with archival research methodologies and methods. Based on a survey of recent scholarship and interviews with experienced archival researchers, this overview of the current status of archival research both complicates traditional conceptions of archival investigation and encourages scholars to adopt…

  6. Ethics and Truth in Archival Research

    ERIC Educational Resources Information Center

    Tesar, Marek

    2015-01-01

    The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…

  7. Archives and Automation: Issues and Trends.

    ERIC Educational Resources Information Center

    Weiner, Rob

    This paper focuses on archives and automation, and reviews recent literature on various topics concerning archives and automation. Topics include: resistance to technology and the need to educate about automation; the change in archival theory due to the information age; problems with technology use; the history of organizing archival records…

  8. HEASARC - The High Energy Astrophysics Science Archive Research Center

    NASA Technical Reports Server (NTRS)

    Smale, Alan P.

    2011-01-01

    The High Energy Astrophysics Science Archive Research Center (HEASARC) is NASA's archive for high-energy astrophysics and cosmic microwave background (CMB) data, supporting the broad science goals of NASA's Physics of the Cosmos theme. It provides vital scientific infrastructure to the community by standardizing science data formats and analysis programs, providing open access to NASA resources, and implementing powerful archive interfaces. Over the next five years the HEASARC will ingest observations from up to 12 operating missions, while serving data from these and over 30 archival missions to the community. The HEASARC archive presently contains over 37 TB of data, and will contain over 60 TB by the end of 2014. The HEASARC continues to secure major cost savings for NASA missions, providing a reusable mission-independent framework for reducing, analyzing, and archiving data. This approach was recognized in the NRC Portals to the Universe report (2007) as one of the HEASARC's great strengths. This poster describes the past and current activities of the HEASARC and our anticipated developments in coming years. These include preparations to support upcoming high energy missions (NuSTAR, Astro-H, GEMS) and ground-based and sub-orbital CMB experiments, as well as continued support of missions currently operating (Chandra, Fermi, RXTE, Suzaku, Swift, XMM-Newton and INTEGRAL). In 2012 the HEASARC (which now includes LAMBDA) will support the final nine-year WMAP data release. The HEASARC is also upgrading its archive querying and retrieval software with the new Xamin system in early release - and building on opportunities afforded by the growth of the Virtual Observatory and recent developments in virtual environments and cloud computing.

  9. Seamless Synthetic Aperture Radar Archive for Interferometry Analysis

    NASA Astrophysics Data System (ADS)

    Baker, S.; Meertens, C. M.; Phillips, D. A.; Crosby, C.; Fielding, E. J.; Nicoll, J.; Bryson, G.; Buechler, B.; Baru, C.

    2012-12-01

    The NASA Advancing Collaborative Connections for Earth System Science (ACCESS) Seamless Synthetic Aperture Radar (SAR) Archive (SSARA) project is a 2-year collaboration between UNAVCO/WInSAR, the Alaska Satellite Facility (ASF), the Jet Propulsion Laboratory (JPL), and the San Diego Supercomputer Center (SDSC) to design and implement a seamless distributed access system for SAR data and derived data products (e.g., terrain-corrected interferograms). A seamless SAR archive increases the accessibility and the utility of SAR science data to solid Earth and cryospheric science researchers. Building on the established web services and APIs at UNAVCO and ASF, the SSARA project will provide simple web service tools to seamlessly and effectively exchange and share space- and airborne SAR metadata, archived SAR data, and on-demand derived products between the distributed archives and individual users. Development of standard formats for data products and new QC/QA definitions will be implemented to streamline data usage and enable advanced query capabilities. The new ACCESS-developed tools will help overcome the obstacles of heterogeneous archive access protocols and data formats and data provider access policy constraints, and will also enable interoperability with key information technology development systems such as the NASA/JPL QuakeSim and ARIA projects, which provide higher level resources for geodetic data processing, data assimilation and modeling, and integrative analysis for scientific research and hazards applications. The SSARA project will significantly enhance mature IT capabilities at ASF's NASA-supported DAAC, the GEO Supersites archive, supported operationally by UNAVCO, and UNAVCO's WInSAR and EarthScope SAR archives that are supported by NASA, NSF, and the USGS in close collaboration with ESA/ESRIN.

  10. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  11. [Individual archiving of digital angiocardiograms].

    PubMed

    Stiel, G M; Nienaber, C A; Schaps, K P; Meinertz, T

    1996-08-01

    CD-R will be introduced internationally as a standardized individual archive and exchange medium, allowing individual solutions for long-term archiving in a catheterization laboratory. The concept of digital archiving on two CD-Rs includes a long-term primary basic archive and a secondary one edited by intelligent (medical) data reduction (IDR). The basic archive is automatically composed by a background process, consists of unprocessed images or image series, and is fundamental for further transfers, storage, presentations and additional studies. The digital working archive comprises a set of images and image series edited by IDR, the results of morphometric studies, and identification and documentation data. IDR is based upon the elimination of useless and redundant image series, documentation of coronary interventions on one single representative image, and the reduction of relevant image series and physiological data into an ECG-controlled representative cardiac cycle. IDR yields a redundancy-free set of 130 images for a diagnostic study or only 85 images for an interventional study. Two cardiologists and two cardiac surgeons independently studied 24 IDR-edited angiograms and the corresponding unedited digital angiograms and found no significant differences in the diagnostically relevant coronary morphology and left ventricular function. This study shows that an edited angiogram may not only serve for digital archiving but also form the basis for further evaluation or copies. PMID:8975495

  12. Toxicological benchmarks for screening contaminants of potential concern for effects on sediment-associated biota: 1994 Revision. Environmental Restoration Program

    SciTech Connect

    Hull, R.N.; Suter, G.W. II

    1994-06-01

    Because a hazardous waste site may contain hundreds of chemicals, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a Screening Assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, more analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. This report briefly describes three categories of approaches to the development of sediment quality benchmarks. These approaches are based on analytical chemistry, toxicity test and field survey data. A fourth integrative approach incorporates all three types of data. The equilibrium partitioning approach is recommended for screening nonpolar organic contaminants of concern in sediments. For inorganics, the National Oceanic and Atmospheric Administration has developed benchmarks that may be used for screening. There are supplemental benchmarks from the province of Ontario, the state of Wisconsin, and US Environmental Protection Agency Region V. Pore water analysis is recommended for polar organic compounds; comparisons are then made against water quality benchmarks. This report is an update of a prior report. It contains revised ER-L and ER-M values, the five EPA proposed sediment quality criteria, and benchmarks calculated for several nonionic organic chemicals using equilibrium partitioning.
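
    The screening rule described above amounts to a simple comparison against the lower (and, where available, upper) benchmark values. A minimal sketch of that logic follows, using placeholder numbers rather than the ER-L/ER-M values tabulated in the report:

      # Minimal sketch of the screening rule described above; the benchmark numbers
      # here are placeholders, not the ER-L/ER-M values tabulated in the report.
      def screen_contaminant(concentration, lower_benchmark, upper_benchmark=None):
          """Return a screening decision for one sediment contaminant."""
          if concentration < lower_benchmark:
              return "eliminate from further study"
          if upper_benchmark is not None and concentration >= upper_benchmark:
              return "retain: adverse effects likely, further assessment needed"
          return "retain: exceeds lower benchmark, further analysis needed"

      # Hypothetical measured concentration compared against placeholder benchmarks
      print(screen_contaminant(concentration=1.2, lower_benchmark=0.5, upper_benchmark=3.0))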

  13. National Geophysical Data Center Tsunami Data Archive

    NASA Astrophysics Data System (ADS)

    Stroker, K. J.; Dunbar, P. K.; Brocko, R.

    2008-12-01

    NOAA's National Geophysical Data Center (NGDC) and the co-located World Data Center for Geophysics and Marine Geology long-term tsunami data archive provide data and derived products essential for tsunami hazard assessment, forecast and warning, inundation modeling, preparedness, mitigation, education, and research. As a result of NOAA's efforts to strengthen its tsunami activities, the long-term tsunami data archive has grown from less than 5 gigabytes in 2004 to more than 2 terabytes in 2008. The types of data archived for tsunami research and operational activities have also expanded in fulfillment of P.L. 109-424. The archive now consists of: a global historical tsunami, significant earthquake and significant volcanic eruption database; a global tsunami deposits and proxies database; a reference database; damage photos; coastal water-level data (i.e., digital tide gauge data and marigrams on microfiche); and bottom pressure recorder (BPR) data as collected by Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys. The tsunami data archive comes from a wide variety of data providers and sources. These include the NOAA Tsunami Warning Centers, NOAA National Data Buoy Center, NOAA National Ocean Service, IOC/NOAA International Tsunami Information Center, NOAA Pacific Marine Environmental Laboratory, U.S. Geological Survey, tsunami catalogs, reconnaissance reports, journal articles, newspaper articles, internet web pages, and email. NGDC has been active in the management of some of these data for more than 50 years while other data management efforts are more recent. These data are openly available, either directly on-line or by contacting NGDC. All of the NGDC tsunami and related databases are stored in a relational database management system. These data are accessible over the Web as tables, reports, and interactive maps. The maps provide integrated web-based GIS access to individual GIS layers including tsunami sources, tsunami effects, significant earthquakes

  14. Science Archives at the ESAC Science Data Centre

    NASA Astrophysics Data System (ADS)

    Arviset, Christophe

    2015-12-01

    The ESAC Science Data Centre (ESDC) provides services and tools to access and retrieve science data from all ESA space science missions (astronomy, planetary, and solar-heliospheric). The ESDC consists of a team of scientists and engineers working together and in very close collaboration with the Science Ground Segment teams. The large set of science archives located at ESAC represents a major research asset for the community, as well as a unique opportunity to provide multi-mission and multi-wavelength science exploitation services. The ESAC Science Archives long-term strategy is set along three main axes: (1) enable maximum scientific exploitation of data sets; (2) enable efficient long-term preservation of data, software and knowledge, using modern technology; and (3) enable cost-effective archive production by integration in, and across, projects. The author wishes to thank all the people from the ESAC Science Data Centre and the mission archive scientists who have participated in the development of the archives and services presented in this paper.

  15. Archive & Data Management Activities for ISRO Science Archives

    NASA Astrophysics Data System (ADS)

    Thakkar, Navita; Moorthi, Manthira; Gopala Krishna, Barla; Prashar, Ajay; Srinivasan, T. P.

    2012-07-01

    ISRO has kept a step ahead by extending remote sensing missions to planetary and astronomical exploration. It started with Chandrayaan-1 and successfully completed the Moon imaging during the spacecraft's lifetime in orbit. In the future ISRO is planning to launch Chandrayaan-2 (the next Moon mission), a Mars mission and the astronomical mission ASTROSAT. All these missions are characterized by the need to receive, process, archive and disseminate the acquired science data to the user community for analysis and scientific use. These science missions will last from a few months to a few years, but the data received are required to be archived, to be interoperable, and to remain seamlessly accessible to the user community in the future. ISRO has laid out definite plans to archive these data sets in specified standards and to develop relevant access tools to serve the user community. To achieve this goal, a data center called the Indian Space Science Data Center (ISSDC) has been set up at Bangalore. It is the custodian of all the data sets of the current and future science missions of ISRO. Chandrayaan-1 is the first among the planetary missions launched or to be launched by ISRO, and we took up the challenge of developing a system for data archival and dissemination of the payload data received. For Chandrayaan-1 the data collected from all the instruments are processed and archived in the archive layer in the Planetary Data System (PDS 3.0) standards, through an automated pipeline. But a dataset, once stored, is of no use unless it is made public, which requires a Web-based dissemination system accessible to all the planetary scientists and data users working in this field. Towards this, a Web-based browse and dissemination system has been developed, wherein users can register and search for their area of interest and view the data archived for TMC & HYSI with relevant browse chips and metadata. Users can also order the data and get it on their desktop in the PDS

  16. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. The benchmark for each cost measure is the national mean of the performance rates calculated among all groups...
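
    As a hedged illustration of the calculation the rule describes, the benchmark is simply the national mean of the per-group performance rates; the group names and rates below are invented:

      # Sketch of the stated rule: the benchmark for a cost measure is the national
      # mean of the performance rates across all groups (group names/rates invented).
      def cost_measure_benchmark(performance_rates):
          rates = list(performance_rates)
          return sum(rates) / len(rates)

      rates_by_group = {"group_A": 0.92, "group_B": 1.05, "group_C": 0.99}
      print(f"benchmark = {cost_measure_benchmark(rates_by_group.values()):.3f}")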

  17. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. (a) For the CY 2015 payment adjustment period, the benchmark for each cost measure is the national mean of...

  18. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  19. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  20. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  1. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  2. Will Today's Electronic Journals Be Accessible in the 23rd Century: Issues in Long-Term Archiving (SIG STI, IFP)

    ERIC Educational Resources Information Center

    Lippert, Margaret

    2000-01-01

    This abstract of a planned session on access to scientific and technical journals addresses policy and standard issues related to long-term archives; digital archiving models; economic factors; hardware and software issues; multi-publisher electronic journal content integration; format considerations; and future data migration needs. (LRW)

  3. Benchmarking ICRF Full-wave Solvers for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R. J. Dumont, A. Fukuyama, R. Harvey, E. F. Jaeger, K. Indireshkumar, E. Lerche, D. McCune, C. K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2011-01-06

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by six full-wave solver groups to simulate the ICRF electromagnetic fields and heating, and by three of these groups to simulate the current drive. Approximate agreement is achieved for the predicted heating power for the DT and He4 cases. Factor-of-two disagreements are found for the cases with second-harmonic He3 heating in bulk H plasmas. Approximate agreement is achieved simulating the ICRF current drive.

  4. Benchmark initiative on coupled multiphase flow and geomechanical processes during CO2 injection

    NASA Astrophysics Data System (ADS)

    Benisch, K.; Annewandter, R.; Olden, P.; Mackay, E.; Bauer, S.; Geiger, S.

    2012-12-01

    CO2 injection into deep saline aquifers involves multiple strongly interacting processes, such as multiphase flow and geomechanical deformation, which threaten the seal integrity of CO2 repositories. Coupled simulation codes are required to establish realistic prognoses of the coupled processes during CO2 injection operations. International benchmark initiatives help to evaluate, compare and validate coupled simulation results. However, there is no published code comparison study so far focusing on the impact of coupled multiphase flow and geomechanics on the long-term integrity of repositories, which is required to obtain confidence in the predictive capabilities of reservoir simulators. We address this gap by proposing a benchmark study. A wide participation from academic and industrial institutions is sought, as the aim of building confidence in coupled simulators becomes more attainable with many participants. Most published benchmark studies on coupled multiphase flow and geomechanical processes have been performed within the field of nuclear waste disposal (e.g. the DECOVALEX project), using single-phase formulations only. As regards CO2 injection scenarios, international benchmark studies have been published comparing isothermal and non-isothermal multiphase flow processes, such as the code intercomparison by LBNL, the Stuttgart benchmark study, the CLEAN benchmark approach and other initiatives. Recently, several codes have been developed or extended to simulate the coupling of hydraulic and geomechanical processes (OpenGeoSys, ECLIPSE-Visage, GEM, DuMuX and others), which now enables a comprehensive code comparison. We propose four benchmark tests of increasing complexity, addressing the coupling between multiphase flow and geomechanical processes during CO2 injection. In the first case, a horizontal non-faulted 2D model consisting of one reservoir and one cap rock is considered, focusing on stress and strain regime changes in the storage formation and the

  5. NOAA Enterprise Archive Access Tool

    NASA Astrophysics Data System (ADS)

    Rank, R. H.; McCormick, S.; Cremidis, C.

    2010-12-01

    A challenge for any consumer of National Oceanic and Atmospheric Administration (NOAA) environmental data archives is that the disparate nature of these archives makes it difficult for consumers to access data in a unified manner. If it were possible for consumers to have seamless access to these archives, they would be able to better utilize the data and thus maximize the return on investment for NOAA’s archival program. When unified data access is coupled with sophisticated data querying and discovery techniques, it will be possible to provide consumers with access to richer data sets and services that extend the use of key NOAA data. Theoretically, there are two ways that unified archive access may be achieved. The first approach is to develop a single archive or archiving standard that would replace the current NOAA archives. However, the development of such an archive would pose significant technical and administrative challenges. The second approach is to develop a middleware application that would provide seamless access to all existing archives, in effect allowing each archive to exist “as is” but providing a translation service for the consumer. This approach is deemed more feasible from an administrative and technical standpoint; however, it still presents unique technical challenges due to the disparate architectures that exist across NOAA archives. NOAA has begun developing the NEAAT. The purpose of NEAAT is to provide a middleware and a simple standardized API between NOAA archives and data consumers. It is important to note that NEAAT serves two main purposes: 1) To provide a single application programming interface (API) that enables designated consumers to write their own custom applications capable of searching and acquiring data seamlessly from multiple NOAA archives. 2) To allow archive managers to expose their data to consumers in conjunction with other NOAA resources without modifying their archiving systems or way of presenting data

  6. The Hubble Spectroscopic Legacy Archive

    NASA Astrophysics Data System (ADS)

    Peeples, Molly S.; Tumlinson, Jason; Fox, Andrew; Aloisi, Alessandra; Ayres, Thomas R.; Danforth, Charles; Fleming, Scott W.; Jenkins, Edward B.; Jedrzejewski, Robert I.; Keeney, Brian A.; Oliveira, Cristina M.

    2016-01-01

    With no future space ultraviolet instruments currently planned, the data from the UV spectrographs aboard the Hubble Space Telescope have a legacy value beyond their initial science goals. The Hubble Spectroscopic Legacy Archive will provide to the community new science-grade combined spectra for all publicly available data obtained by the Cosmic Origins Spectrograph (COS) and the Space Telescope Imaging Spectrograph (STIS). These data will be packaged into "smart archives" according to target type and scientific themes to facilitate the construction of archival samples for common science uses. A new "quick look" capability will make the data easy for users to quickly access, assess the quality of, and download for archival science starting in Cycle 24, with the first generation of these products for the FUV modes of COS available online via MAST in early 2016.

  7. Small Data Archives and Libraries

    NASA Astrophysics Data System (ADS)

    Holl, A.

    2010-10-01

    Preservation is important for documenting original observations, and existing data are an important resource which can be re-used. Observatories should set up electronic data archives and formulate archiving policies. VO (Virtual Observatory) compliance is desirable; even if this is not possible, at least some VO ideas should be applied. Data archives should be visible and their data kept on-line. Metadata should be plentiful, and as standard as possible, just like file formats. Literature and data should be cross-linked. Libraries can play an important role in this process. In this paper, we discuss data archiving for small projects and observatories. We review the questions of digitization, cost factors, manpower, organizational structure and more.

  8. A path to filled archives

    NASA Astrophysics Data System (ADS)

    Fleischer, Dirk; Jannaschk, Kai

    2011-09-01

    Reluctance to deposit data is rife among researchers, despite broad agreement on the principle of data sharing. More and better information will reach hitherto empty archives, if professional support is given during data creation, not in a project's final phase.

  9. OCTALIS benchmarking: comparison of four watermarking techniques

    NASA Astrophysics Data System (ADS)

    Piron, Laurent; Arnold, Michael; Kutter, Martin; Funk, Wolfgang; Boucqueau, Jean M.; Craven, Fiona

    1999-04-01

    In this paper, benchmarking results for watermarking techniques are presented. The benchmark includes evaluation of watermark robustness and of subjective visual image quality. Four different algorithms are compared and exhaustively tested. One goal of these tests is to evaluate the feasibility of a Common Functional Model (CFM) developed in the European project OCTALIS and to determine parameters of this model, such as the length of one watermark. This model solves the problem of image trading over an insecure network, such as the Internet, and employs hybrid watermarking. Another goal is to evaluate the resistance of the watermarking techniques when subjected to a set of attacks. Results show that the tested techniques do not behave in the same way and that none of the tested methods has optimal characteristics. A final conclusion is that, as for the evaluation of compression techniques, clear guidelines are necessary to evaluate and compare watermarking techniques.
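
    One robustness metric commonly reported in watermark benchmarking of this kind is the bit-error rate between the embedded and the recovered watermark after an attack. The sketch below illustrates only that metric, not the OCTALIS Common Functional Model itself, using an invented 64-bit watermark:

      # Illustrative robustness metric only (not the OCTALIS Common Functional Model):
      # bit-error rate between the embedded watermark and the bits recovered after an
      # attack such as lossy compression.  The 64-bit watermark below is invented.
      import numpy as np

      def bit_error_rate(embedded_bits, extracted_bits):
          embedded = np.asarray(embedded_bits, dtype=int)
          extracted = np.asarray(extracted_bits, dtype=int)
          return float(np.mean(embedded != extracted))

      rng = np.random.default_rng(0)
      watermark = rng.integers(0, 2, size=64)
      recovered = watermark.copy()
      recovered[:4] ^= 1                    # pretend the attack flipped 4 of 64 bits
      print(bit_error_rate(watermark, recovered))   # -> 0.0625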

  10. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and coupled neutronics/thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.
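
    A code-to-code comparison of depletion results typically reduces to relative differences in predicted nuclide number densities at matched burnup points. The sketch below shows that comparison with invented values; it is not the VERA benchmark specification itself:

      # Illustrative code-to-code comparison (all values invented): relative difference
      # in nuclide number densities predicted by two depletion codes at one burnup step.
      code_a = {"U235": 3.10e20, "Pu239": 1.05e19, "Xe135": 2.4e15}   # atoms/cm^3
      code_b = {"U235": 3.07e20, "Pu239": 1.08e19, "Xe135": 2.5e15}

      for nuclide in code_a:
          rel_diff = (code_b[nuclide] - code_a[nuclide]) / code_a[nuclide]
          print(f"{nuclide}: {rel_diff:+.2%}")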

  11. Benchmark West Texas Intermediate crude assayed

    SciTech Connect

    Rhodes, A.K.

    1994-08-15

    The paper gives an assay of West Texas Intermediate, one of the world's marker crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world. The WTI price is used as a benchmark for pricing all other US crude oils. The 41° API, <0.34 wt % sulfur crude is gathered in West Texas and moved to Cushing, Okla., for distribution. The WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other US crudes are priced. The spot price is the negotiated price for short-term trades of the crude. The New York Mercantile Exchange, or Nymex, price is a futures price for barrels delivered at Cushing.

  12. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  13. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
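
    The sensitivity coefficient under test is conventionally S = (sigma/k)(dk/dsigma), which a direct-perturbation (central-difference) estimate approximates as (dk/k)/(dsigma/sigma). A minimal sketch follows, with invented keff values standing in for perturbed transport runs:

      # Direct-perturbation (central-difference) estimate of a k_eff sensitivity,
      # S = (sigma/k) * dk/dsigma ~ (dk/k) / (dsigma/sigma).  The k_eff values are
      # invented, standing in for runs with one cross section perturbed by +/-1%.
      def sensitivity(k_nominal, k_plus, k_minus, rel_perturbation=0.01):
          dk_over_k = (k_plus - k_minus) / (2.0 * k_nominal)
          return dk_over_k / rel_perturbation

      print(sensitivity(k_nominal=1.00120, k_plus=1.00195, k_minus=1.00046))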

  14. TsunaFLASH Benchmark and Its Verifications

    NASA Astrophysics Data System (ADS)

    Pranowo, Widodo; Behrens, Joern

    2010-05-01

    At the end of 2008, TsunAWI (the tsunami unstructured-mesh finite element model developed at the Alfred Wegener Institute) by Behrens et al. (2006-2008) [Behrens, 2008] was launched as an operational model in the German-Indonesian Tsunami Early Warning System (GITEWS) framework. This model has been benchmarked and verified against the 2004 Sumatra-Andaman mega-tsunami event [Harig et al., 2008]. A new development uses adaptive mesh refinement to improve computational efficiency and accuracy; this approach is called TsunaFLASH [Pranowo et al., 2008]. After the initial development and verification phase, with stabilization efforts and a study of refinement criteria, the code is now mature enough to be validated with data. This presentation will demonstrate results of TsunaFLASH for experiments with diverse mesh refinement criteria, and benchmarks; in particular the problem set-1 of IWLRM, and field data of the Sumatra-Andaman 2004 event.

  15. Architecture and evolution of Goddard Space Flight Center Distributed Active Archive Center

    NASA Technical Reports Server (NTRS)

    Bedet, Jean-Jacques; Bodden, Lee; Rosen, Wayne; Sherman, Mark; Pease, Phil

    1994-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been developed to enhance Earth science research by improving access to remotely sensed Earth science data. Building and operating an archive, even one of moderate size (a few terabytes), is a challenging task. One of the critical components of this system is Unitree, the hierarchical file storage management system. Unitree, selected two years ago as the best available solution, requires constant system administration support, is not always suitable for an archive and distribution data center, and has moderate performance. The Data Archive and Distribution System (DADS) software developed to monitor, manage, and automate the ingestion, archive, and distribution functions turned out to be more challenging than anticipated. Having the software and tools is not sufficient to succeed: human interaction within the system must be fully understood to improve efficiency and to ensure that the right tools are developed. One of the lessons learned is that operability, reliability, and performance should be thoroughly addressed in the initial design. The GSFC DAAC has nevertheless demonstrated that it is capable of distributing over 40 GB per day. A backup system to archive a second copy of all ingested data is under development. This backup system will be used not only for disaster recovery but will also replace the main archive when it is unavailable during maintenance or hardware replacement. The GSFC DAAC has put a strong emphasis on quality at all levels of its organization, and a quality team has been formed to identify quality issues and to propose improvements. The DAAC has conducted numerous tests to benchmark the performance of the system. These tests proved to be extremely useful in identifying bottlenecks and deficiencies in operational procedures.

  16. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with the friction force formulas are presented.

  17. Reactor calculation benchmark PCA blind test results

    SciTech Connect

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  18. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with the friction force formulas are presented.

  19. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and their effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that such aeroelastic data sets often focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include the omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  20. ACRF Archive User Meeting Summary

    SciTech Connect

    SA Edgerton; RA McCord; DP Kaiser

    2007-10-30

    On October 30, 2007, the U.S. Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) sponsored an all-day workshop to assess the status of the ACRF Archive. Focus areas included usability of current functions, plans for revised functions, proposals for new functions, and an overarching discussion of new ideas. Although 13 scientists familiar with ACRF and the ARM Program were invited, only 10 were able to attend. ACRF consists of the infrastructure that was developed to support the ARM Program and includes the ACRF Archive (previously called the ARM Archive). The scientists who participated in the meeting ranged from those who used the Archive frequently to those who had seldom or never accessed it. The group was spread across disciplines, i.e., modelers, conservationists, and others from universities and government laboratories. A few of the participants were funded by the ARM Program, but most were not currently funded by ARM. During the past year, several improvements were made to the ACRF Archive to link it with the ARM/ACRF web pages, add a shopping cart feature, and expand the search parameters. Additional modifications have been proposed, and prototypes of these proposals were made available to the participants. The participants were given several exercises to do before the meeting, and their feedback was requested to help identify potential problems and shortcomings with the existing structure and to recommend improvements.

  1. Benchmarking and accounting for the (private) cloud

    NASA Astrophysics Data System (ADS)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Because of the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible: the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm, with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
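
    A minimal sketch of the kind of classification described, binning worker nodes by a per-core benchmark score so that scheduling and accounting can use the class rather than re-benchmarking every node; the bin edges, class names and node scores below are invented and are not the CERN scheme:

      # Hypothetical classification of batch worker nodes by a per-core benchmark
      # score (an HS06-like number); bin edges, class names and scores are invented.
      import bisect

      BIN_EDGES = [8.0, 10.0, 12.0, 14.0]            # per-core score thresholds
      CLASS_NAMES = ["slow", "low", "medium", "high", "top"]

      def classify(per_core_score):
          return CLASS_NAMES[bisect.bisect_right(BIN_EDGES, per_core_score)]

      nodes = {"vm-0123": 9.4, "vm-0456": 13.1, "phys-789": 11.2}
      for name, score in nodes.items():
          print(name, "->", classify(score))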

  2. Benchmarking numerical freeze/thaw models

    NASA Astrophysics Data System (ADS)

    Rühaak, Wolfram; Anbergen, Hauke; Molson, John; Grenier, Christophe; Sass, Ingo

    2015-04-01

    The modeling of freezing and thawing of water in porous media is of increasing interest, and very different application areas exist for it. For instance, the modeling of permafrost regression with respect to climate change is one area, while others include geotechnical applications in tunneling and borehole heat exchangers which operate at temperatures below the freezing point. The modeling of these processes requires the solution of a coupled non-linear system of partial differential equations for flow and heat transport in space and time. Different code implementations have been developed in the past, and analytical solutions exist only for simple cases. Consequently, an interest has arisen in benchmarking different codes against analytical solutions, experiments and purely numerical results, similar to the long-standing DECOVALEX and the more recent "Geothermal Code Comparison" activities. The name of this freezing/thawing benchmark consortium is INTERFROST. In addition to the well-known so-called Lunardini solution for a 1D case (case T1), two different 2D problems will be presented, one which represents melting of a frozen inclusion (case TH2) and another which represents the growth or thaw of permafrost around a talik (case TH3). These talik regions are important for controlling groundwater movement within mainly frozen ground. First results from the different benchmark cases will be shown and discussed.

  3. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics (e.g., spatial and temporal locality) and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels (STREAM, HPL, matrix multiply (DGEMM), parallel matrix transpose (PTRANS), FFT, RandomAccess, and the bandwidth/latency tests b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of the data sets being a function of the largest HPL matrix for the tested system.
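
    The flavour of these kernels can be illustrated with a STREAM-triad-style bandwidth measurement; the sketch below is not the official HPC Challenge code, just the a(i) = b(i) + scalar*c(i) pattern timed with NumPy:

      # Minimal STREAM-triad-style kernel (illustrative, not the official HPC Challenge
      # code): a(i) = b(i) + scalar * c(i), timed to estimate sustainable bandwidth.
      import time
      import numpy as np

      n = 20_000_000
      b = np.random.rand(n)
      c = np.random.rand(n)
      scalar = 3.0

      t0 = time.perf_counter()
      a = b + scalar * c
      elapsed = time.perf_counter() - t0

      bytes_moved = 3 * n * 8          # read b, read c, write a (8-byte doubles)
      print(f"triad bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")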

  4. Digital video data archive for crash test systems

    NASA Astrophysics Data System (ADS)

    Hock, Christian

    1997-04-01

    Kayser-Threde has invested many years in developing technology used in crash testing, data acquisition and test data archiving. Since 1976 the Measurement Systems department has been supplying European car manufacturers and test houses with ruggedized on-board data acquisition units for use in safety tests according to SAE J 211. The integration of on-board high-speed digital cameras has completed the data acquisition unit. Stationary high-speed cameras for external observation are also included in the controlling and acquisition system of the crash test site. The focus of Kayser-Threde's High Speed Data Systems department is the design and integration of computerized data flow systems under real-time conditions. The special circumstances of crash test applications are taken into account for data acquisition, mass storage and data distribution. The two fundamental components of the video data archiving system are, firstly, the recording of digital high-speed images as well as digital test data and, secondly, organized filing in mass archiving systems with the capability of near on-line access. In combination with sophisticated and reliable hardware components, Kayser-Threde is able to deliver high-performance digital data archives with storage capacities of up to 2600 terabytes.

  5. Quaternary fluvial archives: achievements of the Fluvial Archives Group

    NASA Astrophysics Data System (ADS)

    Bridgland, David; Cordier, Stephane; Herget, Juergen; Mather, Ann; Vandenberghe, Jef; Maddy, Darrel

    2013-04-01

    In their geomorphological and sedimentary records, rivers provide valuable archives of environments and environmental change, at local to global scales. In particular, fluvial sediments represent databanks of palaeoenvironmental and palaeoclimatic information, in the form (for example) of fossils (micro- and macro-), sedimentary and post-depositional features, and buried soils. Well-dated sequences are of the most value, with dating provided by a wide range of methods, from radiometric (numerical) techniques to included fossils (biostratigraphy) and/or archaeological material. Thus Quaternary fluvial archives can also provide important data for studies of Quaternary biotic evolution and early human occupation. In addition, the physical disposition of fluvial sequences, be it as fragmented terrace remnants or as stacked basin-fills, provides valuable information about geomorphological and crustal evolution. Since rivers are long-term persistent features in the landscape, their sedimentary archives can represent important frameworks for regional Quaternary stratigraphy. Fluvial archives are distributed globally, being represented on all continents and across all climatic zones, with the exception of the frozen polar regions and the driest deserts. In 1999 the Fluvial Archives Group (FLAG) was established as a working group of the Quaternary Research Association (UK), aimed at bringing together those interested in such archives. This has evolved into an informal organization that has held regular biennial combined conference and field-trip meetings, has co-sponsored other meetings and conference sessions, and has presided over two International Geoscience Programme (IGCP) projects: IGCP 449 (2000-2004) 'Global Correlation of Late Cenozoic Fluvial Deposits' and IGCP 518 (2005-2007) 'Fluvial sequences as evidence for landscape and climatic evolution in the Late Cenozoic'. Through these various activities a sequence of FLAG publications has appeared, including special issues in a variety of

  6. Earth observation archive activities at DRA Farnborough

    NASA Technical Reports Server (NTRS)

    Palmer, M. D.; Williams, J. M.

    1993-01-01

    Space Sector, Defence Research Agency (DRA), Farnborough have been actively involved in the acquisition and processing of Earth Observation data for over 15 years. During that time an archive of over 20,000 items has been built up. This paper describes the major archive activities, including: operation and maintenance of the main DRA Archive, the development of a prototype Optical Disc Archive System (ODAS), the catalog systems in use at DRA, the UK Processing and Archive Facility for ERS-1 data, and future plans for archiving activities.

  7. The Operation and Architecture of the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Gelino, C. R.; Laity, A.; Kong, M.; Swain, M.; Holt, J.; Goodrich, R.; Mader, J.; Tran, H. D.

    2014-05-01

    The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck Observatory (WMKO) are collaborating to build an archive for the twin 10-m Keck Telescopes, located near the summit of Mauna Kea. The Keck Observatory Archive (KOA) takes advantage of IPAC's long experience with managing and archiving large and complex data sets from active missions and serving them to the community, and of the Observatory's knowledge of the operation of its sophisticated instrumentation and the organization of the data products. By the end of 2013, KOA will contain data from all eight active observatory instruments, with an anticipated volume of 28 TB. The data include raw science observations, quick-look products, weather information, and, for some instruments, reduced and calibrated products. The goal of including data from all instruments has driven a rapid expansion of the archive's holdings, and data from four new instruments have already been added since October 2012. One more active instrument, the integral field spectrograph OSIRIS, is scheduled for ingestion in December 2013. After preparation for ingestion into the archive, the data are transmitted electronically from WMKO to IPAC for curation in the physical archive. This process includes validation of the science content of the data and verification that the data were not corrupted in transmission. The archived data include both newly acquired observations and all previously acquired observations. The older data extend back to the date of instrument commissioning; for some instruments, such as HIRES, they can extend as far back as 1994. KOA will continue to ingest all newly obtained observations, at an anticipated volume of 4 TB per year, and plans to ingest data from two decommissioned instruments. Access to these data is governed by a data use policy that guarantees Principal Investigators (PIs) exclusive access to their data for at least 18 months, and allows for extensions as granted by

  8. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Jules Blais; Rosen, Michael R.; John Smol

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  9. The archives of solar integral exposures and of spectroheliograms of the former Fraunhofer Institute (now: Kiepenheuer Institute for Solar Physics) in Freiburg and its partial dissolution (German Title: Die Archive solarer Integralaufnahmen und von Spektroheliogrammen des früheren Fraunhofer-Instituts (jetzt: Kiepenheuer-Institut für Sonnenphysik) in Freiburg und ihre teilweise Auflösung)

    NASA Astrophysics Data System (ADS)

    Wöhl, Hubertus

    The former Fraunhofer-Institut, which was founded about 60 years ago and since 1978 has been named the Kiepenheuer-Institut für Sonnenphysik (KIS), was for several decades a center for collecting information about solar activity. One of the initial reasons for the interest in solar activity was the attempt to forecast disturbances of (military) radio communication caused by solar eruptions; today this would be called 'space weather research'. Later, daily maps of the Sun showing its activity were issued for many years. The data needed to describe solar activity were gained mainly from photographic images: since 1939 white-light images of the full solar disk were collected and stored in the archive, and since 1943 spectroheliograms of the full solar disk in H-alpha and in Ca II K3 were collected in addition. These images were taken on glass plates (some on film), 9 x 12 cm for the white-light images and 6 x 12 cm for the spectroheliograms, and were stored in envelopes with additional information written on them. Several hundred of these plates in their envelopes were gathered into each of a number of open wooden boxes, which were stored on open shelves in a meeting room of the old solar observatory on the Schauinsland mountain near Freiburg. Within the last few years it became obvious that the quality of many of the stored plates was poor and that their possible scientific use was becoming very limited. In summer 2002 I started to investigate the quality of the white-light images and prepared a database in MS Access XP of the plates I found, which had been taken at more than 10 different observing sites. The total number of plates checked was 11782. Depending on their quality and possible later use for an investigation of proper motions in sunspot groups, I kept several series of them. Most of the plates kept stem from the years 1945 to 1949 and 1955 to 1959, when solar activity was extremely high. In total about 2000 plates were kept. It is

  10. The Planck Legacy Archive

    NASA Astrophysics Data System (ADS)

    Dupac, X.; Arviset, C.; Fernandez Barreiro, M.; Lopez-Caniego, M.; Tauber, J.

    2015-12-01

    In 2015 the Planck Collaboration released its second major dataset through the Planck Legacy Archive (PLA). It includes cosmological, extragalactic and Galactic science data in temperature (intensity) and polarization. Full-sky maps are provided with unprecedented angular resolution and sensitivity, together with a large number of ancillary maps, catalogues (generic, SZ clusters and Galactic cold clumps), time-ordered data and other information. The extensive cosmological likelihood package allows cosmologists to fully explore the plausible parameters of the Universe. A new web-based PLA user interface has been public since December 2014, allowing easier and faster access to all Planck data and replacing the previous Java-based software. Numerous additional improvements to the PLA are also being developed through the so-called PLA Added-Value Interface, making use of an external contract with the Planetek Hellas and Expert Analytics software companies. This will allow users to process time-ordered data into sky maps, separate astrophysical components in existing maps, simulate the microwave and infrared sky through the Planck Sky Model, and use a number of other functionalities.

  11. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

    One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transforms, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14, developed or translated at Ames, are provided within this package; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA; under VAX/VMS it requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
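
    As an illustration of what a routine such as the Cholesky decomposition-and-substitution benchmark exercises, the sketch below times an analogous kernel in Python/NumPy; the ELAPSE originals are written in Ada and Lisp, so this is only a stand-in:

      # Timing an analogous Cholesky decompose-and-substitute kernel with NumPy
      # (illustrative only; the ELAPSE routines themselves are written in Ada and Lisp).
      import time
      import numpy as np

      n = 1000
      m = np.random.rand(n, n)
      spd = m @ m.T + n * np.eye(n)     # well-conditioned symmetric positive-definite matrix
      rhs = np.random.rand(n)

      t0 = time.perf_counter()
      L = np.linalg.cholesky(spd)       # decomposition
      y = np.linalg.solve(L, rhs)       # forward substitution (general solver for brevity)
      x = np.linalg.solve(L.T, y)       # back substitution
      elapsed = time.perf_counter() - t0

      print(f"decompose+solve: {elapsed:.3f} s, residual {np.linalg.norm(spd @ x - rhs):.2e}")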

  12. Benchmarking D and D procurement best practices at four commercial nuclear power plants.

    SciTech Connect

    Arflin, J.; Baker, G.; Bidwell, B.; Bugielski, D.; Cavanagh, J.; Sandlin, N.

    1999-05-11

    Two of the Department of Energy's (DOE) strategic objectives are to safely accomplish the world's largest environmental clean-up of contaminated sites and to adopt the best management practices of the private sector in order to achieve business-like results efficiently and effectively. An integral part of the strategic response to the challenges facing the Department has been the use of benchmarking and best practice management to facilitate identifying and implementing leading-edge thinking, practices, approaches, and solutions.

  13. MESA: Mercator scheduler and archive system

    NASA Astrophysics Data System (ADS)

    Merges, Florian; Prins, Saskia; Pessemier, Wim; Raskin, Gert; Perez Padilla, Jesus; Van Winckel, Hans; Aerts, Conny

    2012-09-01

    We have developed an observing scheduling and archive system for the 1.2 meter Mercator Telescope. The goal was to optimize the specific niche of this modern small telescope in observational astrophysics: the building-up of long-term time series of photometric or high-resolution spectroscopic data with appropriate sampling for any given scientific program. This system allows PIs to easily submit their technical requirements and keep track of the progress of the observing programmes. The scheduling system provides the observer with an optimal schedule for the night which takes into account the current observing conditions as well as the priorities and requirements of the programmes in the queue. The observer can conveniently plan an observing night but also quickly adapt it to changing conditions. The archiving system automatically processes new files as they are created, including reduced data. It extracts the metadata and performs the normalization. A user can query, inspect and retrieve observing data. The progress of individual programmes, including timeline and reduced data plots can be seen at any time. Our MESA project is based on free and open source software (FOSS) using the Python programming language. The system is fully integrated with the Mercator Observing Control System1 (MOCS).

  14. Astronomical Surveys, Catalogs, Databases, and Archives

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.

    2016-06-01

    All-sky and large-area astronomical surveys and their cataloged data over the whole range of the electromagnetic spectrum are reviewed, from γ-ray to radio, such as Fermi-GLAST and INTEGRAL in γ-ray, ROSAT, XMM and Chandra in X-ray, GALEX in UV, SDSS and several POSS I and II based catalogues (APM, MAPS, USNO, GSC) in the optical range, 2MASS in NIR, WISE and AKARI IRC in MIR, IRAS and AKARI FIS in FIR, NVSS and FIRST in radio and many others, as well as the most important surveys providing optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS) and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). The most important astronomical databases and archives are reviewed as well, including the Wide-Field Plate DataBase (WFPDB), the ESO, HEASARC, IRSA and MAST archives, CDS SIMBAD, VizieR and Aladin, the NED and HyperLEDA extragalactic databases, and the ADS and astro-ph services. They are powerful sources for many-sided, efficient research using Virtual Observatory tools. The use and analysis of the Big Data accumulated in astronomy lead to many new discoveries.

  15. Robust retrieval from compressed medical image archives

    NASA Astrophysics Data System (ADS)

    Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin

    2005-04-01

    This paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding biomedical image retrieval based on content. The proposed method for the treatment of compressed medical archives follows the JPEG compression standard and exploits an algorithm based on spatial analysis of the amplitude and location of the image's cosine spectrum coefficients. Experiments on a modality-specific archive of osteoarticular images show the robustness of the method based on measured spectral spatial statistics. The features, which were based on the values of the cosine spectrum coefficients, could satisfy queries across different modalities (MRI, US, etc.) that emphasize texture and edge properties. In particular, it has been shown that there is a wealth of information in the AC coefficients of the DCT transform, which can be utilized to support fast content-based image retrieval. The computational cost of the proposed signature generation algorithm is low. The influence of conventional and state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has also been studied. We found no significant differences in retrieval efficiencies for non-compressed and JPEG2000-compressed images even at the lowest bit rate tested.
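
    As a hedged illustration of the general approach described (a compact signature built from the DCT coefficients of a compressed image), the Python sketch below computes an 8 x 8 block DCT of a grayscale image and averages the magnitudes of a few AC coefficients into a feature vector compared by Euclidean distance. It is not the authors' algorithm; the block size, coefficient selection and distance metric are assumptions made only for illustration.

```python
# Hedged sketch: a compact DCT-based signature for content-based retrieval.
# Not the algorithm from the paper; block size and coefficient choice are assumptions.
import numpy as np
from scipy.fft import dctn

def dct_signature(image, block=8, n_coeffs=9):
    """Average |AC coefficient| over all block x block DCT blocks of the image."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    sig = np.zeros(n_coeffs)
    count = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(image[i:i + block, j:j + block], norm="ortho")
            # skip the DC term; take the next few coefficients in raster order
            sig += np.abs(coeffs).ravel()[1:n_coeffs + 1]
            count += 1
    return sig / max(count, 1)

def distance(sig_a, sig_b):
    return float(np.linalg.norm(sig_a - sig_b))  # simple Euclidean comparison

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img_a = rng.random((64, 64))
    img_b = rng.random((64, 64))
    print(distance(dct_signature(img_a), dct_signature(img_b)))
```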

  16. Planetary science data archiving in Europe

    NASA Astrophysics Data System (ADS)

    Heather, David

    2012-07-01

    Europe is currently enjoying a time of plenty in terms of planetary science missions and the resulting planetary data. The European Space Agency are flying or developing missions to many planetary bodies and are co-operating with other Agencies to ensure maximization of resources. Prior to the arrival of Mars Express at the Red Planet on 25th December 2003, Europe had very little experience in the development and management of planetary data. Since then, with the continuing MEX operations, the launch and successful operation of Venus Express, the ongoing Rosetta mission and its recent asteroid encounters, the SMART-1 technology tester mission to the Moon, the Huygens probe to Titan, and with contributing payload on ISRO's Chandrayaan-1 mission to the Moon, Europe has had a flood of data to deal with. We have had to learn fast! In addition to the basic challenges of managing and distributing such an influx of new data, there has been considerable effort in Europe to develop and manage the resources required to query and use them from within the community. The Integrated and Distributed Information Service (IDIS), part of the EU funded Europlanet activities, is a good example of this, aiming to centralize data sources and useful resources for scientists wishing to use the planetary data. Europe has been working very closely with international partners to globalize planetary data archiving standards, and all major planetary data providers and distributors in Europe are participating fully in the International Planetary Data Alliance (IPDA). A major focus of this work has been in the development of a protocol that will allow for the interoperability of archives and sharing of data across the globe. Close interactions are also ongoing with NASA's Planetary Data System as the standards used for planetary data archiving evolve. This talk will outline the planetary science data archiving situation in Europe, and summarize the various ongoing efforts to coordinate at an

  17. Data Management and Archiving - a Long Process

    NASA Astrophysics Data System (ADS)

    Gebauer, Petra; Bertelmann, Roland; Hasler, Tim; Kirchner, Ingo; Klump, Jens; Mettig, Nora; Peters-Kottig, Wolfgang; Rusch, Beate; Ulbricht, Damian

    2014-05-01

    Implementing policies for research data management, up to the point of data archiving, at university institutions takes a long time. Even though, especially in the geosciences, most scientists are familiar with analyzing different sorts of data, presenting statistical results and writing publications sometimes based on big data records, only some of them manage their data in a standardized manner. Much more often they have learned how to measure and generate large volumes of data than to document these measurements and preserve them for the future. Changing staff and limited funding make this work more difficult, but it is essential in a progressively developing digital and networked world. Results from the project EWIG (which translates to: Developing workflow components for long-term archiving of research data in geosciences), funded by the Deutsche Forschungsgemeinschaft, will help address this issue. Together with the project partners Deutsches GeoForschungsZentrum Potsdam and Konrad-Zuse-Zentrum für Informationstechnik Berlin, a workflow was developed to transfer continuously recorded data from a meteorological city monitoring network into a long-term archive. This workflow includes quality assurance of the data as well as description of metadata and the use of tools to prepare data packages for long-term archiving. It will serve as an exemplary model for other institutions working with similar data. The development of this workflow is closely intertwined with the educational curriculum at the Institut für Meteorologie. Designing modules to run quality checks for meteorological time series measured every minute and preparing metadata are tasks in current bachelor theses. Students will also test the usability of the generated working environment. Based on these experiences, a practical guideline for integrating research data management in curricula will be one of the results of this project, for postgraduates as well as for younger students. Especially at the beginning of the

  18. Data Analysis in the LOFAR Long Term Archive

    NASA Astrophysics Data System (ADS)

    Holties, H. A.; van Diepen, G.; van Dok, D.; Dijkstra, F.; Loose, M.; Renting, G. A.; Schrijvers, C.; Vriend, W.-J.

    2012-09-01

    The LOFAR Long Term Archive (LTA) is a distributed information system that provides integrated services for data analysis as well as long term preservation of astronomical datasets and their provenance. The data analysis capabilities are provided by a federated system that integrates a central catalog and client user interfaces provided by Astro-Wise with processing pipelines running on Grid based and University HPC clusters. The framework used for data analysis ensures that proper authorization and access rules are applied and that generated data products are ingested into the storage part of the Long Term Archive. The ingest process includes information about data provenance. This paper presents the architecture of the processing framework of the LTA.

  19. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted to assess TRIPOLI-4® for fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In that previous ITER benchmark, however, only the neutron wall loading was analyzed; its main purpose was to present the extension of MCAM (the FDS Team CAD import tool) for TRIPOLI-4®. Starting from this work, a more extensive benchmark covering the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB) has been performed and is presented in this paper. The methodology used to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies are mainly within the statistical errors of the Monte Carlo codes.
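
    The statement that discrepancies are mainly within the Monte Carlo statistical errors refers to the usual check that the difference between two tallies is small compared with their combined one-sigma uncertainty. The sketch below is a generic, hedged illustration of that check; it is not tied to the output format of TRIPOLI-4® or MCNP5, and the sample tally values and relative errors are invented.

```python
# Hedged sketch: checking whether two Monte Carlo tallies agree within their
# combined statistical uncertainty. Generic illustration; the numbers are invented.
import math

def agree_within_sigma(val_a, rel_err_a, val_b, rel_err_b, k=2.0):
    """Return (ratio, combined_sigma, ok) for two tallies with relative 1-sigma errors."""
    sigma_a = val_a * rel_err_a
    sigma_b = val_b * rel_err_b
    combined = math.sqrt(sigma_a**2 + sigma_b**2)
    diff = abs(val_a - val_b)
    return val_a / val_b, combined, diff <= k * combined

if __name__ == "__main__":
    # e.g. a nuclear-heating tally from two codes, with 1% and 1.5% relative errors
    ratio, sigma, ok = agree_within_sigma(1.02e-3, 0.010, 1.00e-3, 0.015)
    print(f"ratio={ratio:.3f}, combined 1-sigma={sigma:.2e}, within 2 sigma: {ok}")
```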

  20. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many, one of the inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high performance, high cost storage devices to high capacity, low cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost effective archive solution.

  1. COMBINE Archive Specification Version 1.

    PubMed

    Bergmann, Frank T; Rodriguez, Nicolas; Le Novère, Nicolas

    2015-01-01

    Several standard formats have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. The Open Modeling EXchange format (OMEX) supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, an optional metadata file, and the files describing the model. The manifest is an XML file listing all files included in the archive and their type. The metadata file provides additional information about the archive and its content. Although any format can be used, we recommend an XML serialization of the Resource Description Framework. Together with the other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. PMID:26528559
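
    The container structure described above (a ZIP file holding a manifest.xml that lists every file and its format) translates almost directly into code. The sketch below assembles a minimal OMEX-style archive with the Python standard library; the manifest element names, namespace and format URIs are written from a reading of the OMEX specification and should be checked against the official spec before being relied on.

```python
# Hedged sketch of an OMEX-style archive: a ZIP container with a manifest.xml
# listing each file and its format. Element/namespace spellings follow my reading
# of the COMBINE Archive spec and should be verified against the specification.
import zipfile
import xml.etree.ElementTree as ET

NS = "http://identifiers.org/combine.specifications/omex-manifest"

def build_archive(path, contents):
    """contents: list of (location_in_archive, data_bytes, format_uri)."""
    root = ET.Element("omexManifest", xmlns=NS)
    ET.SubElement(root, "content", location=".",
                  format="http://identifiers.org/combine.specifications/omex")
    for location, _, fmt in contents:
        ET.SubElement(root, "content", location=location, format=fmt)
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.xml", ET.tostring(root, encoding="unicode"))
        for location, data, _ in contents:
            zf.writestr(location, data)

if __name__ == "__main__":
    build_archive("example.omex", [
        ("model.xml", b"<sbml/>",   # placeholder model file
         "http://identifiers.org/combine.specifications/sbml"),
    ])
```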

  2. Benchmarking and improving microbial-explicit soil biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bonan, G. B.; Hartman, M. D.; Sulman, B. N.; Wang, Y.

    2015-12-01

    Earth system models that are designed to project future carbon (C) cycle - climate feedbacks exhibit notably poor representation of soil biogeochemical processes and generate highly uncertain projections about the fate of the largest terrestrial C pool on Earth. Given these shortcomings there has been intense interest in soil biogeochemical model development, but parallel efforts to create the analytical tools to characterize, improve and benchmark these models have thus far lagged behind. A long-term goal of this work is to develop a framework to compare, evaluate and improve the process-level representation of soil biogeochemical models that could be applied in global land surface models. Here, we present a newly developed global model test bed that is built on the Carnegie Ames Stanford Approach model (CASA-CNP) that can rapidly integrate different soil biogeochemical models that are forced with consistent driver datasets. We focus on evaluation of two microbial explicit soil biogeochemical models that function at global scales: the MIcrobial-MIneral Carbon Stabilization model (MIMICS) and Carbon, Organisms, Rhizosphere, and Protection in the Soil Environment (CORPSE) model. Using the global model test bed coupled to MIMICS and CORPSE we quantify the uncertainty in potential C cycle - climate feedbacks that may be expected with these microbial explicit models, compared with a conventional first-order, linear model. By removing confounding variation of climate and vegetation drivers, our model test bed allows us to isolate key differences among different soil model structure and parameterizations that can be evaluated with further study. Specifically, the global test bed also identifies key parameters that can be estimated using cross-site observations. In global simulations model results are evaluated with steady state litter, microbial biomass, and soil C pools and benchmarked against independent globally gridded data products.
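
    The structural contrast drawn above, between a conventional first-order (linear) decomposition model and a microbial-explicit one, can be sketched in a few lines. The snippet below compares a first-order flux with a generic Michaelis-Menten flux that scales with microbial biomass; it is a caricature for illustration only, not the MIMICS or CORPSE equations, and every parameter value is invented.

```python
# Hedged sketch: first-order vs. microbial-explicit (Michaelis-Menten) decomposition
# fluxes. A caricature for illustration only -- not the MIMICS or CORPSE equations;
# all parameter values are invented.
def first_order_flux(soil_c, k=0.001):
    """Conventional linear model: flux proportional to the soil C pool (per day)."""
    return k * soil_c

def microbial_flux(soil_c, microbial_c, vmax=0.05, km=500.0):
    """Microbial-explicit model: flux depends on both substrate and microbial biomass."""
    return microbial_c * vmax * soil_c / (km + soil_c)

if __name__ == "__main__":
    for soil_c in (100.0, 1000.0, 5000.0):       # g C m-2, invented values
        print(soil_c,
              round(first_order_flux(soil_c), 3),
              round(microbial_flux(soil_c, microbial_c=50.0), 3))
```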

  3. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    SciTech Connect

    Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.

    1995-11-01

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models-RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE)-represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization).

  4. Benchmarking kinetic calculations of resistive wall mode stability

    SciTech Connect

    Berkery, J. W.; Sabbagh, S. A.; Liu, Y. Q.; Betti, R.

    2014-05-15

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particle's precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes is identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].

  5. Benchmarking NSP Reactors with CORETRAN-01

    SciTech Connect

    Hines, Donald D.; Grow, Rodney L.; Agee, Lance J

    2004-10-15

    As part of an overall verification and validation effort, the Electric Power Research Institute's (EPRI's) CORETRAN-01 has been benchmarked against Northern States Power's Prairie Island and Monticello reactors through 12 cycles of operation. The two Prairie Island reactors are Westinghouse 2-loop units with 121 asymmetric 14 x 14 lattice assemblies utilizing up to 8 wt% gadolinium, while Monticello is a General Electric 484-bundle boiling water reactor. All reactor cases were executed in full core utilizing 24 axial nodes per assembly in the fuel with 1 additional reflector node above, below, and around the perimeter of the core. Cross-section sets used in this benchmark effort were generated by EPRI's CPM-3 as well as Studsvik's CASMO-3 and CASMO-4 to allow for separation of the lattice calculation effect from the nodal simulation method. These cases exercised the depletion-shuffle-depletion sequence through four cycles for each unit using plant data to follow actual operations. Flux map calculations were performed for comparison to corresponding measurement statepoints. Additionally, start-up physics testing cases were used to predict cycle physics parameters for comparison to existing plant methods and measurements. These benchmark results agreed well with both current analysis methods and plant measurements, indicating that CORETRAN-01 may be appropriate for steady-state physics calculations of both the Prairie Island and Monticello reactors. However, only the Prairie Island results are discussed in this paper since the Monticello results were of similar quality and agreement. No attempt was made in this work to investigate CORETRAN-01's kinetics capability by analyzing plant transients, but these steady-state results form a good foundation for moving in that direction.

  6. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (the CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  7. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
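
    The Monod-type rate laws mentioned for the terminal electron accepting processes generally take the form of a maximum rate scaled by saturation terms in the electron donor (acetate) and the electron acceptor. The sketch below shows a generic dual-Monod expression of that form; it is not the benchmark's exact rate formulation, and the parameter values are invented placeholders.

```python
# Hedged sketch: a generic dual-Monod rate law for a terminal electron accepting
# process (e.g. U(VI) reduction driven by acetate). Not the benchmark's exact
# formulation; all parameters are invented placeholders.
def dual_monod_rate(donor, acceptor, biomass, k_max=1e-5, k_donor=1e-4, k_acceptor=1e-6):
    """Rate = k_max * B * [donor]/(K_d + [donor]) * [acceptor]/(K_a + [acceptor])."""
    return (k_max * biomass
            * donor / (k_donor + donor)
            * acceptor / (k_acceptor + acceptor))

if __name__ == "__main__":
    # mol/L concentrations and a nominal biomass, purely illustrative
    print(dual_monod_rate(donor=3e-3, acceptor=1e-6, biomass=1e-4))
```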

  8. Multilevel Digital Archives: Strategy and Experience

    NASA Astrophysics Data System (ADS)

    Rubanov, L. I.; Merzlyakov, N. S.; Karnaukhov, V. N.

    The paper describes a methodology we use to translate an existing conventional archive into a digital one. The method works well for large archives comprising documents with an essential graphic constituent (handwritten texts, photographs, drawings, etc.). The main structural components of our digital archive are a relational database and an image bank, which are physically separated but logically linked together. The components make up a three-level distributed structure consisting of the primary archive, its regional replicas, and various secondary archives (among them subsets presented on the Web and collections of compact discs). Only authorized users are allowed to access the two upper levels, while the bottom level is open for free public access. A secondary archive is created and updated automatically without special development. Such a construction allows us to combine reliable storage, easy access and protection of intellectual property. The paper also presents several digital archives already implemented in the Archive of the Russian Academy of Sciences.

  9. 50 CFR 635.33 - Archival tags.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Implantation report. Any person affixing or implanting an archival tag into a regulated species must obtain... catch, possess, retain, and land an Atlantic HMS in which an archival tag has been implanted or...

  10. 50 CFR 635.33 - Archival tags.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Implantation report. Any person affixing or implanting an archival tag into a regulated species must obtain... catch, possess, retain, and land an Atlantic HMS in which an archival tag has been implanted or...

  11. Benchmark simulations of ICRF antenna coupling

    NASA Astrophysics Data System (ADS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Van Compernolle, B.; Milanesio, D.; Maggiora, R.

    2007-09-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved.

  12. Benchmark simulations of ICRF antenna coupling

    SciTech Connect

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-09-28

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved.

  13. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  14. Benchmarking East Tennessee's economic capacity

    SciTech Connect

    1995-04-20

    This presentation is comprised of viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  15. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit; high altitude, highly elliptical Earth orbits; and large-amplitude Lissajous trajectories about collinear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  16. The HIPPO Project Archive: Carbon Cycle and Greenhouse Gas Data

    NASA Astrophysics Data System (ADS)

    Christensen, S. W.; Aquino, J.; Hook, L.; Williams, S. F.

    2012-12-01

    The HIAPER (NSF/NCAR Gulfstream V Aircraft) Pole-to-Pole Observations (HIPPO) project measured a comprehensive suite of atmospheric trace gases and aerosols pertinent to understanding the global carbon cycle from the surface to the tropopause and approximately pole-to-pole over the Pacific Ocean. Flights took place over five missions during different seasons from 2009 to 2011. Data and documentation are available to the public from two archives: (1) NCAR's Earth Observing Laboratory (EOL) provides complete aircraft and flight operational data, and (2) the U.S. DOE's Carbon Dioxide Information Analysis Center (CDIAC) provides integrated measurement data products. The integrated products are more generally useful for secondary analyses. Data processing is nearing completion, although improvements to the data will continue to evolve and analyses will continue many years into the future. Periodic new releases of integrated measurement (merged) products will be generated by EOL when individual measurement data have been updated as directed by the Lead Principal Investigator. The EOL and CDIAC archives will share documentation and supplemental links and will ensure that the latest versions of data products are available to users of both archives. The EOL archive (http://www.eol.ucar.edu/projects/hippo/) provides the underlying investigator-provided data, including supporting data sets (e.g. operational satellite, model output, global observations, etc.), and ancillary flight operational information including field catalogs, data quality reports, software, documentation, publications, photos/imagery, and other detailed information about the HIPPO missions. The CDIAC archive provides integrated measurement data products, user documentation, and metadata through the HIPPO website (http://hippo.ornl.gov). These merged products were derived by consistently combining the aircraft state parameters for position, time, temperature, pressure, and wind speed with meteorological

  17. Physical Review Online Archives (PROLA)

    SciTech Connect

    Thomas, T.; Davies, J.; Kilman, D.; Laroche, F.

    1997-05-01

    In cooperation with the American Physical Society, the Computer Research and Applications Group (CIC-3 -- see Section 13 for an acronym glossary) at Los Alamos National Laboratory has developed and deployed a journal archive system called the Physical Review OnLine Archive (PROLA). It is intended to be a complete, full-service on-line archive of the existing issues of the journal Physical Review from its inception to the advent of a full-service electronic version. The fundamental goals of PROLA are to provide screen-viewable and printable images of every article, full-text and fielded search capability, good browsing features, direct article retrieval tools, and hyperlinking to all references, errata, and comments. The research focus is on transitioning large volumes of paper journals to a modern electronic environment.

  18. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
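
    At its core, a benchmarking tool of this kind places a building's energy use intensity (EUI) within the distribution of a survey population. The sketch below illustrates only that basic operation with invented survey values; it is not Cal-Arch's actual method or data.

```python
# Hedged sketch: percentile-rank a building's energy use intensity (EUI) against
# a survey population, the basic operation behind benchmarking tools of this kind.
# Not Cal-Arch's actual method or data; the survey values below are invented.
def percentile_rank(value, population):
    """Fraction of the population at or below `value`, as a percentage."""
    below = sum(1 for x in population if x <= value)
    return 100.0 * below / len(population)

if __name__ == "__main__":
    survey_euis = [38.0, 45.0, 52.0, 60.0, 71.0, 85.0, 90.0, 110.0]  # kBtu/ft2-yr, invented
    building_eui = 420_000.0 / 6_500.0   # annual kBtu / floor area (ft2) ~= 64.6
    print(f"EUI = {building_eui:.1f} kBtu/ft2-yr, "
          f"percentile = {percentile_rank(building_eui, survey_euis):.0f}")
```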

  19. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  20. Using the Canadian Language Benchmarks (CLB) to Benchmark College Programs/Courses and Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Epp, Lucy; Stawychny, Mary

    2001-01-01

    Describes a process developed by the Language Training Centre at Red River College (RRC) to use the Canadian language benchmarks in analyzing the language levels used in programs and courses at RRC to identify appropriate entry-level language proficiency and the levels that second language students need in order to meet college or university…

  1. Cataloging Sound Recordings Using Archival Methods.

    ERIC Educational Resources Information Center

    Thomas, David H.

    1990-01-01

    Discusses the processing and cataloging of archival sound recording collections. The use of "Anglo American Cataloguing Rules, 2nd edition" (AACR 2) and "Archives, Personal Papers and Manuscripts" (APPM) is explained, the MARC format for Archival and Manuscripts Control (AMC) is described, finding aids and subject indexing are discussed, and…

  2. A Background to Motion Picture Archives.

    ERIC Educational Resources Information Center

    Fletcher, James E.; Bolen, Donald L., Jr.

    The emphasis of archives is on the maintenance and preservation of materials for scholarly research and professional reference. Archives may be established as separate entities or as part of a library or museum. Film archives may include camera originals (positive and negative), sound recordings, outtakes, scripts, contracts, advertising…

  3. Digital Archives: Democratizing the Doing of History

    ERIC Educational Resources Information Center

    Bolick, Cheryl Mason

    2006-01-01

    The creation of digital archives has shifted the dynamics of doing historical research by changing who is able to conduct the research and how historical research is done. Digital archives are collections of numerical data, texts, images, maps, videos, and audio files that are available through the Internet. The majority of digital archives are…

  4. 36 CFR 1275.24 - Archival processing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Archival processing. 1275.24... THE NIXON ADMINISTRATION Preservation and Protection § 1275.24 Archival processing. When authorized by the Archivist and until the commencement of archival processing in accordance with subpart D of...

  5. 50 CFR 635.33 - Archival tags.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Archival tags. 635.33 Section 635.33..., DEPARTMENT OF COMMERCE ATLANTIC HIGHLY MIGRATORY SPECIES Management Measures § 635.33 Archival tags. (a) Implantation report. Any person affixing or implanting an archival tag into a regulated species must...

  6. Encoded Archival Description as a Halfway Technology

    ERIC Educational Resources Information Center

    Dow, Elizabeth H.

    2009-01-01

    In the mid 1990s, Encoded Archival Description (EAD) appeared as a revolutionary technology for publishing archival finding aids on the Web. The author explores whether or not, given the advent of Web 2.0, the archival community should abandon EAD and look for something to replace it. (Contains 18 notes.)

  7. Standing on the Record: The National Archives.

    ERIC Educational Resources Information Center

    Crowley, Carolyn

    1984-01-01

    Profiles National Archives established by Congress in 1934 to collect and organize federal documents of permanent historic value. Types of materials included in archives, the Washington building and regional centers, the "Declaration of Independence," arrangement and preservation of materials, declassification of documents, and archives and…

  8. Reengineering Archival Access through the OAI Protocols.

    ERIC Educational Resources Information Center

    Prom, Christopher J.

    2003-01-01

    The Open Archives Initiative (OAI) Protocol for Metadata Harvesting program presents a method by which metadata regarding archives and manuscripts can be shared and made more interoperable with metadata from other sources. Outlines a method for exposing hierarchical metadata from encoded archival description (EAD) files and assesses some…

  9. 36 CFR 1253.7 - Regional Archives.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Regional Archives. 1253.7 Section 1253.7 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION PUBLIC AVAILABILITY AND USE LOCATION OF NARA FACILITIES AND HOURS OF USE § 1253.7 Regional Archives. Hours...

  10. A Generic Archive Protocol and an Implementation

    NASA Astrophysics Data System (ADS)

    Jordan, J. M.; Jennings, D. G.; McGlynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1993-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. GRASP, the Generic Retrieval/Archive Services Protocol, addresses the question of how to archive the data collected in an environment where the underlying hardware archives and computer hosts may be rapidly changing.

  11. 22 CFR 171.6 - Archival records.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Archival records. 171.6 Section 171.6 Foreign Relations DEPARTMENT OF STATE ACCESS TO INFORMATION AVAILABILITY OF INFORMATION AND RECORDS TO THE PUBLIC General Policy and Procedures § 171.6 Archival records. The Department ordinarily transfers records to the National Archives when they are...

  12. The Preservation of Paper Collections in Archives.

    ERIC Educational Resources Information Center

    Adams, Cynthia Ann

    The preservation methods used for paper collections in archives were studied through a survey of archives in the metropolitan Atlanta (Georgia) area. The preservation policy or program was studied, and the implications for conservators and preservation officers were noted. Twelve of 15 archives responded (response rate of 80 percent). Basic…

  13. Sequence of events from the onset to the demise of the Last Interglacial: Evaluating strengths and limitations of chronologies used in climatic archives

    NASA Astrophysics Data System (ADS)

    Govin, A.; Capron, E.; Tzedakis, P. C.; Verheyden, S.; Ghaleb, B.; Hillaire-Marcel, C.; St-Onge, G.; Stoner, J. S.; Bassinot, F.; Bazin, L.; Blunier, T.; Combourieu-Nebout, N.; El Ouahabi, A.; Genty, D.; Gersonde, R.; Jimenez-Amat, P.; Landais, A.; Martrat, B.; Masson-Delmotte, V.; Parrenin, F.; Seidenkrantz, M.-S.; Veres, D.; Waelbroeck, C.; Zahn, R.

    2015-12-01

    The Last Interglacial (LIG) represents an invaluable case study to investigate the response of components of the Earth system to global warming. However, the scarcity of absolute age constraints in most archives leads to extensive use of various stratigraphic alignments to different reference chronologies. This feature sets limitations to the accuracy of the stratigraphic assignment of the climatic sequence of events across the globe during the LIG. Here, we review the strengths and limitations of the methods that are commonly used to date or develop chronologies in various climatic archives for the time span (˜140-100 ka) encompassing the penultimate deglaciation, the LIG and the glacial inception. Climatic hypotheses underlying record alignment strategies and the interpretation of tracers are explicitly described. Quantitative estimates of the associated absolute and relative age uncertainties are provided. Recommendations are subsequently formulated on how best to define absolute and relative chronologies. Future climato-stratigraphic alignments should provide (1) a clear statement of climate hypotheses involved, (2) a detailed understanding of environmental parameters controlling selected tracers and (3) a careful evaluation of the synchronicity of aligned paleoclimatic records. We underscore the need to (1) systematically report quantitative estimates of relative and absolute age uncertainties, (2) assess the coherence of chronologies when comparing different records, and (3) integrate these uncertainties in paleoclimatic interpretations and comparisons with climate simulations. Finally, we provide a sequence of major climatic events with associated age uncertainties for the period 140-105 ka, which should serve as a new benchmark to disentangle mechanisms of the Earth system's response to orbital forcing and evaluate transient climate simulations.

  14. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  15. Archiving Space Geodesy Data for 20+ Years at the CDDIS

    NASA Technical Reports Server (NTRS)

    Noll, Carey E.; Dube, M. P.

    2004-01-01

    Since 1982, the Crustal Dynamics Data Information System (CDDIS) has supported the archive and distribution of geodetic data products acquired by NASA programs. These data include GPS (Global Positioning System), GLONASS (GLObal NAvigation Satellite System), SLR (Satellite Laser Ranging), VLBI (Very Long Baseline Interferometry), and DORIS (Doppler Orbitography and Radiolocation Integrated by Satellite). The data archive supports NASA's space geodesy activities through the Solid Earth and Natural Hazards (SENH) program. The CDDIS data system and its archive have become increasingly important to many national and international programs, particularly several of the operational services within the International Association of Geodesy (IAG), including the International GPS Service (IGS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), the International DORIS Service (IDS), and the International Earth Rotation Service (IERS). The CDDIS provides easy and ready access to a variety of data sets, products, and information about these data. The specialized nature of the CDDIS lends itself well to enhancement and thus can accommodate diverse data sets and user requirements. All data sets and metadata extracted from these data sets are accessible to scientists through ftp and the web; general information about each data set is accessible via the web. The CDDIS, including background information about the system and its user communities, the computer architecture, archive contents, available metadata, and future plans will be discussed.

  16. Experiences and challenges running CERN's high capacity tape archive

    NASA Astrophysics Data System (ADS)

    Cancio, Germán; Bahyl, Vladimír; Kruse, Daniele Francesco; Leduc, Julien; Cano, Eric; Murray, Steven

    2015-12-01

    CERN's tape-based archive system has collected over 70 Petabytes of data during the first run of the LHC. The Long Shutdown is being used for migrating the complete 100 Petabyte data archive to higher-density tape media. During LHC Run 2, the archive will have to cope with yearly growth rates of up to 40-50 Petabytes. In this contribution, we describe the scalable setup for coping with the storage and long-term archival of such massive data amounts. We also review the resulting challenges and the mechanisms devised for measuring and enhancing availability and reliability, as well as for ensuring the long-term integrity and bit-level preservation of the complete data repository. The procedures and tools for the proactive and efficient operation of the tape infrastructure are described, including the features developed for automated problem detection, identification and notification. Finally, we present an outlook in terms of future capacity requirements growth and how it matches the expected tape technology evolution.

  17. Project Gemini online digital archive

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2012-01-01

    An archive containing the first high-resolution digital scans of the original flight films from Project Gemini, the second U.S. human spaceflight program, was unveiled by the NASA Johnson Space Center and Arizona State University's (ASU) School of Earth and Space Exploration on 6 January. The archive includes images from 10 flights. Project Gemini, which ran from 1964 to 1966, followed Project Mercury and preceded the Apollo spacecraft. Mercury and Apollo imagery are also available through ASU. For more information, see http://tothemoon.ser.asu.edu/gallery/gemini and http://apollo.sese.asu.edu/index.html.

  18. Long-term data archiving

    SciTech Connect

    Moore, David Steven

    2009-01-01

    Long term data archiving has much value for chemists, not only to retain access to research and product development records, but also to enable new developments and new discoveries. There are some recent regulatory requirements (e.g., FDA 21 CFR Part 11), but good science and good business both benefit regardless. A particular example of the benefits of and need for long term data archiving is the management of data from spectroscopic laboratory instruments. The sheer amount of spectroscopic data is increasing at a scary rate, and the pressures to archive come from the expense to create the data (or recreate it if it is lost) as well as its high information content. The goal of long-term data archiving is to save and organize instrument data files as well as any needed meta data (such as sample ID, LIMS information, operator, date, time, instrument conditions, sample type, excitation details, environmental parameters, etc.). This editorial explores the issues involved in long-term data archiving using the example of Raman spectral databases. There are at present several such databases, including common data format libraries and proprietary libraries. However, such databases and libraries should ultimately satisfy stringent criteria for long term data archiving, including readability for long times into the future, robustness to changes in computer hardware and operating systems, and use of public domain data formats. The latter criterion implies the data format should be platform independent and the tools to create the data format should be easily and publicly obtainable or developable. Several examples of attempts at spectral libraries exist, such as the ASTM ANDI format, and the JCAMP-DX format. On the other hand, proprietary library spectra can be exchanged and manipulated using proprietary tools. As the above examples have deficiencies according to the three long term data archiving criteria, Extensible Markup Language (XML; a product of the World Wide Web
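
    The criteria listed above (platform independence, a public-domain format, tools that are easy to obtain or build) point toward plain, self-describing text formats. As a toy, hedged illustration of that criterion only - not JCAMP-DX, ANDI or any real spectral standard - the sketch below writes a Raman spectrum plus minimal metadata into a small XML file using nothing but the Python standard library; the field names are invented.

```python
# Hedged sketch: writing a spectrum plus minimal metadata to a self-describing XML
# file with only the standard library. A toy format for illustration -- not
# JCAMP-DX, ANDI, or any real archival standard; field names are invented.
import xml.etree.ElementTree as ET

def write_spectrum(path, shifts_cm1, intensities, metadata):
    root = ET.Element("spectrum", technique="Raman")
    meta = ET.SubElement(root, "metadata")
    for key, value in metadata.items():
        ET.SubElement(meta, "field", name=key).text = str(value)
    data = ET.SubElement(root, "data", x_units="cm-1", y_units="counts")
    for x, y in zip(shifts_cm1, intensities):
        ET.SubElement(data, "point", x=str(x), y=str(y))
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    write_spectrum("spectrum.xml",
                   [520.0, 521.0, 522.0], [100, 2500, 180],
                   {"sample_id": "Si-wafer-01", "operator": "jdoe",
                    "laser_nm": 785, "acquired": "2009-01-15T10:30:00"})
```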

  19. Experimentally Relevant Benchmarks for Gyrokinetic Codes

    NASA Astrophysics Data System (ADS)

    Bravenec, Ronald

    2010-11-01

    Although benchmarking of gyrokinetic codes has been performed in the past, e.g., The Numerical Tokamak, The Cyclone Project, The Plasma Microturbulence Project, and various informal activities, these efforts have typically employed simple plasma models. For example, the Cyclone ``base case'' assumed shifted-circle flux surfaces, no magnetic transport, adiabatic electrons, no collisions nor impurities, ρi << a (ρi the ion gyroradius and a the minor radius), and no ExB flow shear. This work presents comparisons of linear frequencies and nonlinear fluxes from GYRO and GS2 with none of the above approximations except ρi << a and no ExB flow shear. The comparisons are performed at two radii of a DIII-D plasma, one in the confinement region (r/a = 0.5) and the other closer to the edge (r/a = 0.7). Many of the plasma parameters differ by a factor of two between these two locations. Good agreement between GYRO and GS2 is found when neglecting collisions. However, differences are found when including e-i collisions (Lorentz model). The sources of the discrepancy are unknown as of yet. Nevertheless, two collisionless benchmarks have been formulated with considerably different plasma parameters. Acknowledgements to J. Candy, E. Belli, and M. Barnes.

  20. REVISED STREAM CODE AND WASP5 BENCHMARK

    SciTech Connect

    Chen, K

    2005-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
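
    The physics being approximated in both codes is one-dimensional advective (and dispersive) transport of a pollutant with decay. The sketch below is a minimal explicit upwind finite-difference solution of the 1-D advection equation with first-order decay, included purely to illustrate the kind of equation involved; it is neither the STREAM algebraic approximation nor the WASP5 solver, and the parameters are invented.

```python
# Hedged sketch: explicit upwind finite differences for 1-D advective transport with
# first-order decay, dC/dt + u dC/dx = -lambda C. Purely illustrative -- this is not
# the STREAM approximation or the WASP5 solver; parameters are invented.
import numpy as np

def advect(c0, u=0.5, lam=1e-5, dx=100.0, dt=60.0, steps=500):
    """March an initial concentration profile c0 downstream (u in m/s, dx in m, dt in s)."""
    c = np.array(c0, dtype=float)
    courant = u * dt / dx                 # must be <= 1 for stability of this scheme
    assert courant <= 1.0, "reduce dt or increase dx"
    for _ in range(steps):
        upstream = np.roll(c, 1)
        upstream[0] = 0.0                 # clean water entering at the upstream boundary
        c = c - courant * (c - upstream) - lam * dt * c
    return c

if __name__ == "__main__":
    profile = np.zeros(200)
    profile[:5] = 1.0                     # a short slug release near the upstream end
    print(advect(profile).max())
```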

  1. Direct data access protocols benchmarking on DPM

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.
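
    As a toy illustration of what a client-side, per-protocol read test might look like, the sketch below times an HTTP byte-range read and reports the achieved throughput. It is not the DPM Collaboration's test or monitoring infrastructure; the endpoint URL and the range size are placeholders.

```python
# Hedged sketch: time an HTTP byte-range read and report throughput. A toy client-side
# probe for illustration only -- not the DPM test framework; URL and range are placeholders.
import time
import requests

def time_range_read(url, nbytes=16 * 1024 * 1024):
    headers = {"Range": f"bytes=0-{nbytes - 1}"}
    t0 = time.perf_counter()
    resp = requests.get(url, headers=headers, timeout=60)
    resp.raise_for_status()
    elapsed = time.perf_counter() - t0
    mbps = len(resp.content) / elapsed / 1e6
    return len(resp.content), elapsed, mbps

if __name__ == "__main__":
    # placeholder endpoint; substitute a real HTTP(S) storage URL
    size, secs, rate = time_range_read("https://example.org/store/testfile.root")
    print(f"read {size} bytes in {secs:.2f} s ({rate:.1f} MB/s)")
```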

  2. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.
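
    The record describes programs that measure Ada tasking efficiency, i.e., the cost of spawning and synchronizing tasks on a parallel architecture. As a language-neutral analogue only (Python rather than Ada, with arbitrary task counts, not the SVMS benchmark programs), the sketch below times per-task spawn/join overhead as the worker count varies.

```python
# Hedged analogue (in Python, not Ada) of a task-spawn/join overhead
# measurement of the kind the abstract describes.  Task counts are arbitrary.
import time
from concurrent.futures import ThreadPoolExecutor

def trivial_task(x):
    return x + 1            # negligible work: we want to see the tasking overhead

def measure_overhead(n_tasks, n_workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(trivial_task, range(n_tasks)))
    return (time.perf_counter() - start) / n_tasks

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        per_task = measure_overhead(n_tasks=10_000, n_workers=workers)
        print(f"{workers:2d} workers: {per_task * 1e6:8.2f} us per task")
```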

  3. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528
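
    The abstract does not spell out the law itself; published accounts of this work describe a progress-curve power law relating the interval between successive attacks to the attack number, τ_n ≈ τ_1 n^(-b). As a hedged sketch only (the interval data below are synthetic and the fitting choice is illustrative, not the authors' estimation procedure), such a relation can be fit by least squares in log-log space.

```python
# Hedged sketch: fitting a progress-curve power law  tau_n = tau_1 * n**(-b)
# to inter-event intervals, as described in published accounts of this work.
# The interval data below are synthetic, for illustration only.
import numpy as np

def fit_power_law(intervals):
    """Return (tau_1, b) from a log-log least-squares fit."""
    n = np.arange(1, len(intervals) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
    return np.exp(intercept), -slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_tau1, true_b = 100.0, 0.7
    n = np.arange(1, 51)
    synthetic = true_tau1 * n ** (-true_b) * rng.lognormal(0.0, 0.1, n.size)
    tau1, b = fit_power_law(synthetic)
    print(f"fitted tau_1 = {tau1:.1f} days, escalation exponent b = {b:.2f}")
```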

  4. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  5. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG), may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  6. Revised STREAM code and WASP5 benchmark

    SciTech Connect

    Chen, K.F.

    1995-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long-duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked against the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.

  7. Benchmarking longwave multiple scattering in cirrus environments

    NASA Astrophysics Data System (ADS)

    Kuo, C.; Feldman, D.; Yang, P.; Flanner, M.; Huang, X.

    2015-12-01

    Many global climate models currently assume that longwave photons are non-scattering in clouds, and also have overly simplistic treatments of surface emissivity. Multiple scattering of longwave radiation and non-unit emissivity could lead to substantial discrepancies between the actual Earth's radiation budget and its parameterized representation in the infrared, especially at wavelengths longer than 15 µm. Evaluating the parameterization of longwave spectral multiple scattering in radiative transfer codes for global climate models is critical and will require benchmarking across a wide range of atmospheric conditions with more accurate, though computationally more expensive, multiple scattering models. We therefore present a line-by-line radiative transfer solver that includes scattering, run on a supercomputer at the National Energy Research Scientific Computing Center, which exploits the embarrassingly parallel nature of 1-D radiative transfer solutions to achieve high effective throughput. When paired with an advanced ice-particle optical property database with spectral values ranging from 0.2 to 100 μm, a particle size and habit distribution derived from MODIS Collection 6, and a database for surface emissivity that extends to 100 μm, this benchmarking capability can densely sample the thermodynamic and condensate parameter space and therefore accelerate the development of an advanced infrared radiative parameterization for climate models, which could help disentangle forcings and feedbacks in CMIP6.
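
    The solver itself is not reproduced in the record, but the abstract highlights the embarrassingly parallel structure of independent 1-D column calculations. As a hedged sketch of that structure only (the per-column "solver" below is a toy stand-in, not a line-by-line scattering code, and the column values are invented), independent columns can simply be mapped across worker processes.

```python
# Hedged sketch of distributing independent 1-D column calculations,
# exploiting the embarrassingly parallel structure the abstract mentions.
# 'solve_column' is a stand-in stub, NOT a line-by-line scattering solver.
import math
from multiprocessing import Pool

def solve_column(column):
    """Placeholder for an independent 1-D radiative transfer solution."""
    temperature, optical_depth = column
    return temperature * math.exp(-optical_depth)     # toy 'flux', illustration only

if __name__ == "__main__":
    # Thousands of independent atmospheric columns (toy values).
    columns = [(250.0 + 0.01 * i, 0.001 * i) for i in range(10_000)]
    with Pool(processes=4) as pool:
        fluxes = pool.map(solve_column, columns, chunksize=500)
    print(f"computed {len(fluxes)} independent columns")
```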

  8. The Medical Library Association Benchmarking Network: results*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be used to compare to current surveys or look for trends by comparing the data to past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  9. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing genomic operations such as identifying overlapping/non-overlapping regions or nearest gene annotations is a common research need. The data can be saved in a database system for easy management; however, at present there is no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). PMID:25560631
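
    The RegMap implementation is not given in the record; as a hedged, generic sketch of the underlying idea (two regions on the same chromosome overlap when each starts before the other ends), the query below shows the standard SQL interval-overlap condition. Table and column names are illustrative, and SQLite is used only to keep the example self-contained; it is not one of the databases the study benchmarked.

```python
# Hedged sketch of a SQL interval-overlap query of the general kind the
# abstract describes (NOT the published RegMap implementation).  Table and
# column names are illustrative; SQLite is used only for self-containment.
import sqlite3

OVERLAP_QUERY = """
SELECT a.name, b.name
FROM regions_a AS a
JOIN regions_b AS b
  ON a.chrom = b.chrom
 AND a.start <= b.stop        -- a begins before b ends
 AND a.stop  >= b.start       -- and a ends after b begins  => intervals overlap
"""

if __name__ == "__main__":
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE regions_a (name TEXT, chrom TEXT, start INT, stop INT);
        CREATE TABLE regions_b (name TEXT, chrom TEXT, start INT, stop INT);
        INSERT INTO regions_a VALUES ('tfbs_1', 'chr1', 100, 200), ('tfbs_2', 'chr1', 500, 600);
        INSERT INTO regions_b VALUES ('mark_1', 'chr1', 150, 250), ('mark_2', 'chr2', 150, 250);
    """)
    print(con.execute(OVERLAP_QUERY).fetchall())   # -> [('tfbs_1', 'mark_1')]
```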

  10. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. PMID:26249177

  11. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  12. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.
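
    The abstract mentions extrapolation to the complete basis set limit from cc-pVQZ-class results but does not state which scheme the project used; as a hedged sketch of one commonly used choice (a two-point 1/n³ extrapolation over basis-set cardinal numbers, with placeholder energies rather than project results), the calculation looks like this.

```python
# Hedged sketch of a common two-point CBS extrapolation,
#   E(n) = E_CBS + A / n**3,
# for correlation energies at cardinal numbers n (e.g. n=3 for cc-pVTZ,
# n=4 for cc-pVQZ).  The abstract does not state which scheme was used;
# the energies below are placeholders, not project results.
def cbs_two_point(e_n3, e_n4, n3=3, n4=4):
    """Return the extrapolated complete-basis-set correlation energy."""
    return (n4**3 * e_n4 - n3**3 * e_n3) / (n4**3 - n3**3)

if __name__ == "__main__":
    e_tz, e_qz = -0.27512, -0.28437          # hartree, illustrative values only
    print(f"E_CBS ~ {cbs_two_point(e_tz, e_qz):.5f} hartree")
```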

  13. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference-surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model and derive new degree-day factors in an effort to more closely match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to include internal accumulation in balance estimates.
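
    The degree-day relation stated in the abstract, ablation proportional to the sum of positive daily mean temperatures times an empirical factor, can be written as a = f_dd · Σ max(T_i, 0). A minimal sketch follows; the factor and temperature values are illustrative, not the USGS model or its calibrated factors.

```python
# Hedged sketch of the classical degree-day relation the abstract describes:
#   ablation = degree_day_factor * sum of positive daily mean temperatures.
# The factor and temperatures below are illustrative, not USGS calibrations.
def degree_day_ablation(daily_mean_temps_c, factor_mm_per_degc_day):
    """Return ablation in mm w.e. from a positive-degree-day sum."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0.0)
    return factor_mm_per_degc_day * pdd

if __name__ == "__main__":
    july_temps = [4.2, 6.1, 7.8, 5.5, 3.0, -0.5, 2.2]      # degC, synthetic week
    melt = degree_day_ablation(july_temps, factor_mm_per_degc_day=5.5)
    print(f"positive-degree-day melt over the week: {melt:.1f} mm w.e.")
```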

  14. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  15. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  16. A Suite of Criticality Benchmarks for Validating Nuclear Data

    SciTech Connect

    Stephanie C. Frankle

    1999-04-01

    The continuous-energy neutron data library ENDF60 for use with MCNP™ was released in the fall of 1994, and was based on ENDF/B-VI evaluations through Release 2. As part of the data validation process for this library, a number of criticality benchmark calculations were performed. The original suite of nine criticality benchmarks used to test ENDF60 has now been expanded to 86 benchmarks. This report documents the specifications for the suite of 86 criticality benchmarks that have been developed for validating nuclear data.

  17. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high-radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  18. Encoded Archival Context (EAC) and Archival Description: Rationale and Background

    ERIC Educational Resources Information Center

    Szary, Richard V.

    2005-01-01

    The use of contextual information about the creators and users of archival and manuscript resources has always been a critical method for discovering and providing access to them. Traditionally, this information has been unstructured and ephemeral, being part of the knowledge that experienced staff bring to reference queries. The development of…

  19. AAGRUUK: the Arctic Archive for Geophysical Research

    NASA Astrophysics Data System (ADS)

    Johnson, P. D.; Edwards, M. H.; Wright, D.; Dailey, M.

    2005-12-01

    The key to developing and maintaining a unified community database lies in building and supporting a general organizational structure linking distributed databases through the worldwide web via a portal that contains key information, links, and search engines, is maintained and updated by people familiar with the data sets, and contains sufficient information to be useful across the many disciplines encompassed by the research community. There must also be enough flexibility in the approach to support two disparate types of principal investigators who wish to contribute data: those who desire or require relinquishing their data to a repository managed by others and those who wish to maintain control of their data and online archives. To provide this flexibility and accommodate the diversity, volume, and complexity of multidisciplinary geological and geophysical data for the Arctic Ocean, we are developing AAGRUUK, an online data repository combined with a web-based archive-linking infrastructure to produce a distributed Data Management System. The overarching goal of AAGRUUK is to promote collaborative research and multidisciplinary studies and foster new scientific insights for the Arctic Basin. To date the archive includes bathymetry, sidescan and subbottom data collected by nuclear-powered submarines during the Science Ice Exercises (SCICEX), multibeam bathymetry collected by the USCGC HEALY and the Nathaniel B. Palmer, plus near-shore data around Barrow, Alaska, as well as ice camp T3 and nuclear submarine soundings. Integration of the various bathymetric datasets has illustrated a number of problems, some of which are not readily apparent until multiple overlapping datasets have been combined. Foremost among these are sounding errors caused by mapping while breaking ice and navigational misalignments in the SCICEX data. The former error is apparent in swath data that follow an irregular navigational track, indicating that a ship was unable to proceed directly from

  20. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    SciTech Connect

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.

    2015-07-01

    This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described, an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
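
    For context on how such integral measurements are used: a calculated integral response is obtained by folding the "a priori" spectrum with a dosimeter cross section, a = Σ_g σ_g φ_g, and compared with the measured value. The sketch below shows that fold with placeholder group values; it does not use the ACRR-FF-CC-32-CL benchmark data.

```python
# Hedged sketch of folding an "a priori" neutron spectrum with a dosimeter
# cross section to obtain a calculated integral response,
#   a = sum_g  sigma_g * phi_g.
# Group fluxes and cross sections below are placeholders, NOT the
# ACRR-FF-CC-32-CL benchmark values.
import numpy as np

def integral_response(phi_per_group, sigma_per_group):
    """Groupwise fold: reaction rate per target atom per unit time."""
    phi = np.asarray(phi_per_group, dtype=float)
    sigma = np.asarray(sigma_per_group, dtype=float)
    return float(np.sum(phi * sigma))

if __name__ == "__main__":
    phi_g = [1.2e12, 3.4e12, 2.1e12, 8.0e11]        # n/cm^2/s per group (placeholder)
    sigma_g = [1.5e-27, 8.0e-27, 1.1e-25, 5.0e-25]  # cm^2 per group (placeholder)
    print(f"calculated response: {integral_response(phi_g, sigma_g):.3e} reactions/atom/s")
```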