Sample records for windows benchmark study

  1. Dynamic vehicle routing with time windows in theory and practice.

    PubMed

    Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael

    2017-01-01

    The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications, demands often change during operation time. New orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamic vehicle routing address this problem but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony system (MACS) algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm obtains much better solutions on the academic benchmark problems and can also be integrated into a real-world environment.
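
    The on-the-fly rescheduling step described above comes down to inserting a newly revealed order into the current route without breaking any time window. A minimal Python sketch of that building block (cheapest feasible insertion; illustrative only, not the paper's MACS algorithm, and all instance data is invented):

      # Cheapest feasible insertion of a dynamically revealed order into a route.
      # Node 0 is the depot; routes list customer nodes in visiting order.

      def route_is_feasible(route, travel, ready, due, service):
          """Check time-window feasibility of a route starting at the depot."""
          t, prev = 0.0, 0
          for node in route:
              t += travel[prev][node]
              t = max(t, ready[node])      # arrived early: wait for the window
              if t > due[node]:            # arrived late: window violated
                  return False
              t += service[node]
              prev = node
          return True

      def insert_order(route, new, travel, ready, due, service):
          """Try every insertion position; return the cheapest feasible route."""
          best, best_cost = None, float("inf")
          for i in range(len(route) + 1):
              cand = route[:i] + [new] + route[i:]
              if route_is_feasible(cand, travel, ready, due, service):
                  cost = sum(travel[a][b] for a, b in zip([0] + cand, cand))
                  if cost < best_cost:
                      best, best_cost = cand, cost
          return best

      travel = [[0, 4, 6, 5], [4, 0, 3, 4], [6, 3, 0, 2], [5, 4, 2, 0]]
      ready, due, service = [0, 0, 8, 5], [100, 10, 20, 30], [0, 1, 1, 1]
      print(insert_order([1, 2], 3, travel, ready, due, service))  # -> [1, 2, 3]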

  2. Performance Evaluation and Improvement of Ferroelectric Field-Effect Transistor Memory

    NASA Astrophysics Data System (ADS)

    Yu, Hyung Suk

    Flash memory is rapidly reaching scaling limitations due to reduction of charge in floating gates, charge leakage and capacitive coupling between cells, which cause threshold voltage fluctuations, short retention times, and interference. Many new memory technologies are being considered as alternatives to flash memory in an effort to overcome these limitations. The Ferroelectric Field-Effect Transistor (FeFET) is one of the main emerging candidates because of its structural similarity to conventional FETs and fast switching speed. Nevertheless, the performance of FeFETs has not been systematically compared and analyzed against other competing technologies. In this work, we first benchmark the intrinsic performance of FeFETs and other memories by simulations in order to identify the strengths and weaknesses of FeFETs. To simulate realistic memory applications, we compare memories on an array structure. For the comparisons, we construct an accurate delay model and verify it by benchmarking against exact HSPICE simulations. Second, we propose an accurate model for the FeFET memory window, since the existing model has limitations: it assumes symmetric operation voltages, which is not valid for the practical asymmetric operation voltages. In this modeling, we consider practical operation voltages and device dimensions. We also investigate realistic changes of the memory window over time and the retention time of FeFETs. Lastly, to improve the memory window and subthreshold swing, we suggest nonplanar junctionless structures for FeFETs. Using the suggested structures, we study the dimensional dependences of crucial parameters such as memory window and subthreshold swing and also analyze key interference mechanisms.

  3. Benchmarks for Enhanced Network Performance: Hands-On Testing of Operating System Solutions to Identify the Optimal Application Server Platform for the Graduate School of Business and Public Policy

    DTIC Science & Technology

    2010-09-01

    for Applied Mathematics. Kennedy, R. C. (2009a). Clocking Windows netbook performance. Retrieved on 08/14/2010, from http...podcasts.infoworld.com/d/hardware/clocking-windows-netbook-performance-883?_kip_ipx=1177119066-1281460794 Kennedy, R. C. (2009b). OfficeBench 7: A cool new way to

  4. Aluminum-Mediated Formation of Cyclic Carbonates: Benchmarking Catalytic Performance Metrics.

    PubMed

    Rintjema, Jeroen; Kleij, Arjan W

    2017-03-22

    We report a comparative study on the activity of a series of fifteen binary catalysts derived from various reported aluminum-based complexes. A benchmarking of their initial rates in the coupling of various terminal and internal epoxides in the presence of three different nucleophilic additives was carried out, providing for the first time a useful comparison of activity metrics in the area of cyclic organic carbonate formation. These investigations provide a useful framework for how to realistically valorize relative reactivities and which features are important when considering the ideal operational window of each binary catalyst system.

  5. A new symmetrical quasi-classical model for electronically non-adiabatic processes: Application to the case of weak non-adiabatic coupling

    DOE PAGES

    Cotton, Stephen J.; Miller, William H.

    2016-10-14

    Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.

  6. A new symmetrical quasi-classical model for electronically non-adiabatic processes: Application to the case of weak non-adiabatic coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotton, Stephen J.; Miller, William H.

    Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.

  7. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
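
    The penalty approach mentioned above can be illustrated in isolation: instead of discarding tours that violate a time window, the evaluation function charges them in proportion to the total lateness. A minimal sketch (invented toy data; the EMA's attraction-repulsion mechanics are not shown):

      # Penalized tour evaluation for the TSPTW: travel cost plus a weighted
      # sum of time-window violations, so infeasible tours remain comparable.

      def penalized_cost(tour, travel, ready, due, service, lam=100.0):
          t, prev, cost, violation = 0.0, 0, 0.0, 0.0
          for node in tour:
              cost += travel[prev][node]
              t += travel[prev][node]
              t = max(t, ready[node])               # early arrival: wait
              violation += max(0.0, t - due[node])  # late arrival: penalize
              t += service[node]
              prev = node
          return cost + lam * violation             # lam weights feasibility

      travel = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
      ready, due, service = [0, 0, 0], [50, 3, 4], [0, 1, 1]
      print(penalized_cost([1, 2], travel, ready, due, service))  # 8 + 100*5
      print(penalized_cost([2, 1], travel, ready, due, service))  # 15 + 100*5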

  8. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834

  9. AN ASSESSMENT OF MCNP WEIGHT WINDOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J. S.; Culbertson, C. N.

    2000-01-01

    The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
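
    The weight-window game itself is simple to state: particles heavier than the window's upper bound are split, and particles lighter than the lower bound play Russian roulette. A toy sketch of that rule (illustrative of the variance-reduction idea only, not of MCNP's implementation):

      # Apply the weight-window check to a single particle weight.
      import random

      def apply_weight_window(weight, w_low, w_high, w_survival):
          """Return the list of particle weights that continue the history."""
          if weight > w_high:                 # too heavy: split into n copies
              n = min(int(weight / w_high) + 1, 10)
              return [weight / n] * n
          if weight < w_low:                  # too light: Russian roulette
              if random.random() < weight / w_survival:
                  return [w_survival]         # survives with boosted weight
              return []                       # history terminated
          return [weight]                     # inside the window: unchanged

      random.seed(1)
      print(apply_weight_window(5.0, 0.5, 2.0, 1.0))  # splits into 3 copies
      print(apply_weight_window(0.1, 0.5, 2.0, 1.0))  # roulette outcome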

  10. Simulated annealing with restart strategy for the blood pickup routing problem

    NASA Astrophysics Data System (ADS)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
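
    The control flow of simulated annealing with a restart strategy is compact enough to sketch: when the search stalls for too long, it restarts from the incumbent best solution at the initial temperature. A generic skeleton (not the authors' C++ code; the toy problem and parameter values are invented):

      import math, random

      def sa_restart(init, neighbor, cost, t0=10.0, alpha=0.995,
                     patience=200, iters=5000):
          cur, best = init, init
          cur_c = best_c = cost(init)
          t, stall = t0, 0
          for _ in range(iters):
              cand = neighbor(cur)
              cand_c = cost(cand)
              # Accept improvements always, worse moves with Boltzmann probability.
              if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
                  cur, cur_c = cand, cand_c
              if cur_c < best_c:
                  best, best_c, stall = cur, cur_c, 0
              else:
                  stall += 1
              if stall >= patience:            # restart from the best-so-far
                  cur, cur_c, t, stall = best, best_c, t0, 0
              t *= alpha
          return best, best_c

      random.seed(0)
      # Toy 1-D problem: minimize (x - 3)^2 with unit moves on the integers.
      print(sa_restart(0, lambda x: x + random.choice([-1, 1]),
                       lambda x: (x - 3) ** 2))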

  11. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  12. Performance Evaluation of Synthetic Benchmarks and Image Processing (IP) Kernels on Intel and PowerPC Processors

    DTIC Science & Technology

    2013-08-01

    [Excerpt of a flattened benchmark results table comparing Pentium D (830), PowerPC 970MP, and Cell Broadband Engine processors under Windows Vista, Windows XP, and Linux; the row and column structure is not recoverable from the record.]

  13. Career performance trajectories of Olympic swimmers: benchmarks for talent development.

    PubMed

    Allen, Sian V; Vandenbogaerde, Tom J; Hopkins, William G

    2014-01-01

    The age-related progression of elite athletes to their career-best performances can provide benchmarks for talent development. The purpose of this study was to model career performance trajectories of Olympic swimmers to develop these benchmarks. We searched the Web for annual best times of swimmers who were top 16 in pool events at the 2008 or 2012 Olympics, from each swimmer's earliest available competitive performance through to 2012. There were 6959 times in the 13 events for each sex, for 683 swimmers, with 10 ± 3 performances per swimmer (mean ± s). Progression to peak performance was tracked with individual quadratic trajectories derived using a mixed linear model that included adjustments for better performance in Olympic years and for the use of full-body polyurethane swimsuits in 2009. Analysis of residuals revealed appropriate fit of quadratic trends to the data. The trajectories provided estimates of age of peak performance and the duration of the age window of trivial improvement and decline around the peak. Men achieved peak performance later than women (24.2 ± 2.1 vs. 22.5 ± 2.4 years), while peak performance occurred at later ages for the shorter distances for both sexes (∼1.5-2.0 years between sprint and distance-event groups). Men and women had a similar duration in the peak-performance window (2.6 ± 1.5 years) and similar progressions to peak performance over four years (2.4 ± 1.2%) and eight years (9.5 ± 4.8%). These data provide performance targets for swimmers aiming to achieve elite-level performance.
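
    The core of the trajectory modeling is a quadratic in age whose vertex gives the age of peak performance. A per-athlete sketch with invented data (the study itself fits a mixed linear model across all swimmers, with extra terms for Olympic years and the 2009 swimsuits):

      import numpy as np

      # Annual best times (s) for one hypothetical swimmer at ages 16-25.
      age = np.arange(16, 26, dtype=float)
      time = np.array([55.1, 54.2, 53.6, 53.1, 52.8,
                       52.6, 52.5, 52.5, 52.6, 52.8])

      c2, c1, c0 = np.polyfit(age, time, 2)   # time = c2*age^2 + c1*age + c0
      peak_age = -c1 / (2 * c2)               # vertex of the parabola
      print(f"estimated age of peak performance: {peak_age:.1f} years")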

  14. CatReg Software for Categorical Regression Analysis (May 2016)

    EPA Science Inventory

    CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...

  15. Windowed multipole for cross section Doppler broadening

    NASA Astrophysics Data System (ADS)

    Josey, C.; Ducru, P.; Forget, B.; Smith, K.

    2016-02-01

    This paper presents an in-depth analysis on the accuracy and performance of the windowed multipole Doppler broadening method. The basic theory behind cross section data is described, along with the basic multipole formalism followed by the approximations leading to the windowed multipole method and the algorithm used to efficiently evaluate Doppler broadened cross sections. The method is tested by simulating the BEAVRS benchmark with a windowed multipole library composed of 70 nuclides. Accuracy of the method is demonstrated on a single assembly case where total neutron production rates and ²³⁸U capture rates compare within 0.1% to ACE format files at the same temperature. With regards to performance, clock cycle counts and cache misses were measured for single temperature ACE table lookup and for windowed multipole. The windowed multipole method was found to require 39.6% more clock cycles to evaluate, translating to a 7.9% performance loss overall. However, the algorithm has significantly better last-level cache performance, with 3 fewer misses per evaluation, or a 65% reduction in last-level misses. This is due to the small memory footprint of the windowed multipole method and better memory access pattern of the algorithm.

  16. Buyers Guide: Communications Software--Overview; Ratings Digest; Reviews; Benchmarks.

    ERIC Educational Resources Information Center

    Lockwood, Russ; And Others

    1988-01-01

    Contains articles which review communications software. Includes "Crosstalk Mark 4,""ProComm,""Freeway Advanced,""Windows InTalk,""Relay Silver," and "Smartcom III." Compares in terms of text proprietary, MCI upload, Test ASCII, Spreadsheet Proprietary, Text XMODEM, Spreadsheet XMODEM, MCI Download, Documentation, Support and Service, ease of use,…

  17. Improved artificial bee colony algorithm for vehicle routing problem with time windows

    PubMed Central

    Yan, Qianqian; Zhang, Mengjie; Yang, Yunong

    2017-01-01

    This paper investigates a well-known complex combinatorial problem known as the vehicle routing problem with time windows (VRPTW). Unlike the standard vehicle routing problem, each customer in the VRPTW is served within a given time constraint. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of IABC algorithm in solving the VRPTW. PMID:28961252

  18. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides own genetic code context and fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity by an unbalanced nucleotide distribution producing a relatively high bias for nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125% as compared to other conventional window filters, tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions contrary to fixed-length conventional window filters.

  19. Effect of the time window on the heat-conduction information filtering model

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Song, Wen-Jun; Hou, Lei; Zhang, Yi-Lu; Liu, Jian-Guo

    2014-05-01

    Recommendation systems have been proposed to filter out the potential tastes and preferences of normal users online; however, an account of the effect of the time window on performance has been missing, which is critical for saving memory and decreasing computational complexity. In this paper, by gradually expanding the time window, we investigate the impact of the time window on the heat-conduction information filtering model with ten similarity measures. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 11.11% of the recent rating records, the accuracy could be improved by an average of 33.16% and the diversity could be improved by 30.62%. In addition, the recommendation performance on the dataset MovieLens could be preserved by only considering approximately 10.91% of the recent records. Under the circumstance of improving the recommendation performance, our discoveries possess significant practical value by largely reducing the computational time and shortening the data storage space.
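
    Operationally, the "time window" here is a recency cutoff applied to the rating records before the filtering model is built. A minimal sketch under assumed (user, item, timestamp) triples:

      def recent_window(records, fraction):
          """Keep only the most recent `fraction` of rating records."""
          records = sorted(records, key=lambda r: r[2])   # oldest first
          k = max(1, int(round(len(records) * fraction)))
          return records[-k:]

      ratings = [("u1", "i1", 100), ("u2", "i3", 400), ("u1", "i2", 250),
                 ("u3", "i1", 320), ("u2", "i2", 150), ("u3", "i3", 500)]
      print(recent_window(ratings, 0.33))   # roughly the most recent third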

  20. Automated ancillary cancer history classification for mesothelioma patients from free-text clinical reports

    PubMed Central

    Wilson, Richard A.; Chapman, Wendy W.; DeFries, Shawn J.; Becich, Michael J.; Chapman, Brian E.

    2010-01-01

    Background: Clinical records are often unstructured, free-text documents that create information extraction challenges and costs. Healthcare delivery and research organizations, such as the National Mesothelioma Virtual Bank, require the aggregation of both structured and unstructured data types. Natural language processing offers techniques for automatically extracting information from unstructured, free-text documents. Methods: Five hundred and eight history and physical reports from mesothelioma patients were split into development (208) and test sets (300). A reference standard was developed and each report was annotated by experts with regard to the patient’s personal history of ancillary cancer and family history of any cancer. The Hx application was developed to process reports, extract relevant features, perform reference resolution and classify them with regard to cancer history. Two methods, Dynamic-Window and ConText, for extracting information were evaluated. Hx’s classification responses using each of the two methods were measured against the reference standard. The average Cohen’s weighted kappa served as the human benchmark in evaluating the system. Results: Hx had a high overall accuracy, with each method scoring 96.2%. F-measures using the Dynamic-Window and ConText methods were 91.8% and 91.6%, which were comparable to the human benchmark of 92.8%. For the personal history classification, Dynamic-Window scored highest with 89.2% and for the family history classification, ConText scored highest with 97.6%; both methods were comparable to the human benchmark of 88.3% and 97.2%, respectively. Conclusion: We evaluated an automated application’s performance in classifying a mesothelioma patient’s personal and family history of cancer from clinical reports. To do so, the Hx application must process reports, identify cancer concepts, distinguish the known mesothelioma from ancillary cancers, recognize negation, perform reference resolution and determine the experiencer. Results indicated that both information extraction methods tested were dependent on the domain-specific lexicon and negation extraction. We showed that the more general method, ConText, performed as well as our task-specific method. Although Dynamic-Window could be modified to retrieve other concepts, ConText is more robust and performs better on inconclusive concepts. Hx could greatly improve and expedite the process of extracting data from free-text, clinical records for a variety of research or healthcare delivery organizations. PMID:21031012
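
    The windowing idea behind methods like ConText can be shown in miniature: a concept is treated as negated when a trigger term appears within a fixed number of tokens before it. This sketch is illustrative only; the real ConText algorithm uses scoped trigger and termination rules, and the report text below is invented:

      NEGATION_TRIGGERS = {"no", "denies", "without", "negative"}

      def is_negated(tokens, concept_index, window=5):
          """True if a negation trigger occurs within `window` tokens before."""
          start = max(0, concept_index - window)
          return any(t in NEGATION_TRIGGERS for t in tokens[start:concept_index])

      report = "patient denies any history of melanoma ; family history of cancer"
      tokens = report.split()
      print(is_negated(tokens, tokens.index("melanoma")))  # True
      print(is_negated(tokens, tokens.index("cancer")))    # False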

  1. Resource-constrained scheduling with hard due windows and rejection penalties

    NASA Astrophysics Data System (ADS)

    Garcia, Christopher

    2016-09-01

    This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.
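
    The priority rule heuristic family mentioned above is easy to sketch: order jobs by profit per unit of processing time and accept each one only if it fits the resource profile before its deadline. A simplified sketch (due windows reduced to deadlines, one renewable resource, invented data; not the paper's exact heuristic):

      def greedy_schedule(jobs, capacity, horizon):
          """jobs: dicts with p (duration), due, demand, profit, penalty."""
          load = [0] * horizon                    # resource usage per period
          total, schedule = 0.0, {}
          for j in sorted(jobs, key=lambda j: j["profit"] / j["p"], reverse=True):
              placed = False
              for s in range(0, j["due"] - j["p"] + 1):
                  if all(load[t] + j["demand"] <= capacity
                         for t in range(s, s + j["p"])):
                      for t in range(s, s + j["p"]):
                          load[t] += j["demand"]
                      schedule[j["id"]] = s
                      total += j["profit"]
                      placed = True
                      break
              if not placed:                      # rejected: pay the penalty
                  total -= j["penalty"]
          return schedule, total

      jobs = [{"id": "A", "p": 3, "due": 5, "demand": 2, "profit": 9.0, "penalty": 2.0},
              {"id": "B", "p": 2, "due": 4, "demand": 2, "profit": 8.0, "penalty": 1.0},
              {"id": "C", "p": 2, "due": 3, "demand": 3, "profit": 4.0, "penalty": 1.0}]
      print(greedy_schedule(jobs, capacity=4, horizon=6))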

  2. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm has comparable results to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
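
    Stage one of such a pipeline can be sketched with a simpler dissimilarity than 2-D segmentations: compare histograms of consecutive frames and declare a cut wherever the distance spikes. (Invented data; the paper compares segmentations, not histograms.)

      import numpy as np

      def cuts(frames, threshold=0.5):
          """Indices i where the transition frame i -> i+1 looks like a cut."""
          hists = [np.histogram(f, bins=16, range=(0, 1))[0] / f.size
                   for f in frames]
          dists = [0.5 * np.abs(h1 - h2).sum() for h1, h2 in zip(hists, hists[1:])]
          return [i for i, d in enumerate(dists) if d > threshold]

      rng = np.random.default_rng(0)
      shot_a = [np.clip(rng.normal(0.3, 0.05, (32, 32)), 0, 1) for _ in range(4)]
      shot_b = [np.clip(rng.normal(0.8, 0.05, (32, 32)), 0, 1) for _ in range(4)]
      print(cuts(shot_a + shot_b))   # expect a cut at index 3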

  3. On accuracy of the wave finite element predictions of wavenumbers and power flow: A benchmark problem

    NASA Astrophysics Data System (ADS)

    Søe-Knudsen, Alf; Sorokin, Sergey

    2011-06-01

    This rapid communication is concerned with justification of the 'rule of thumb', which is well known to the community of users of the finite element (FE) method in dynamics, for the accuracy assessment of the wave finite element (WFE) method. An explicit formula linking the size of a window in the dispersion diagram, where the WFE method is trustworthy, with the coarseness of a FE mesh employed is derived. It is obtained by the comparison of the exact Pochhammer-Chree solution for an elastic rod having the circular cross-section with its WFE approximations. It is shown that the WFE power flow predictions are also valid within this window.

  4. Daily personal exposure to black carbon: A pilot study

    NASA Astrophysics Data System (ADS)

    Williams, Ryan D.; Knibbs, Luke D.

    2016-05-01

    Continuous personal monitoring is the benchmark for air pollution exposure assessment. Black carbon (BC) is a strong marker of primary combustion like vehicle and biomass emissions. There have been few studies that quantified daily personal BC exposure and the contribution that different microenvironments make to it. In this pilot study, we used a portable aethalometer to measure BC concentrations in an individual's breathing zone at 30-s intervals while he performed his usual daily activities. We used a GPS and time-activity diary to track where he spent his time. We performed twenty 24-h measurements, and observed an arithmetic mean daily exposure concentration of 603 ng/m3. We estimated that changing commute modes from bus to train reduced the 24-h mean BC exposure concentration by 29%. Switching from open windows to closed windows and recirculated air in a car led to a reduction of 32%. Living in a home without a wood-fired heater caused a reduction of 50% compared with a wood-heated home. Our preliminary findings highlight the potential utility of simple approaches to reduce a person's daily BC exposure.

  5. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  6. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    PubMed

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D)] based on a large ensemble of CFB trackers. The ensemble [of size O(2^D)] is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters, and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
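
    The spatial windowing idea can be isolated: multiply the search patch by a window before FFT-based correlation so that border content is suppressed. In this sketch the window is a fixed Hann window and the "filter" is a shifted copy of the patch; in the paper both the filters and the windows are learned:

      import numpy as np

      def windowed_response(patch, filt):
          """Circular correlation response over a spatially windowed patch."""
          h = (np.hanning(patch.shape[0])[:, None]
               * np.hanning(patch.shape[1])[None, :])
          windowed = patch * h                    # suppress the patch borders
          return np.real(np.fft.ifft2(np.fft.fft2(windowed)
                                      * np.conj(np.fft.fft2(filt))))

      rng = np.random.default_rng(0)
      patch = rng.standard_normal((64, 64))
      filt = np.roll(patch, (5, 3), axis=(0, 1))  # mock filter: shifted copy
      resp = windowed_response(patch, filt)
      # The peak location encodes the relative shift (modulo the patch size).
      print(np.unravel_index(resp.argmax(), resp.shape))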

  7. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation windows setups. For consistent evaluation we developed LFQbench, an R-package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  8. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.

  9. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    PubMed

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial and feasible solution very quickly, and the GA is used to improve the initial solution. Thereafter, experimental software was developed and a large number of experimental computations from Solomon's benchmark have been studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.
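
    The SFC construction step can be sketched with a Z-order (Morton) curve standing in for the paper's fractal curve: customers are visited in the order of their positions along the curve, which keeps spatially close customers close in the tour:

      def morton_key(x, y, bits=16):
          """Interleave the bits of integer coordinates x and y."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
          return key

      def sfc_tour(customers, scale=1024):
          """Order (x, y) customers in [0, 1)^2 along the Morton curve."""
          return sorted(customers,
                        key=lambda p: morton_key(int(p[0] * scale),
                                                 int(p[1] * scale)))

      pts = [(0.9, 0.1), (0.1, 0.1), (0.15, 0.2), (0.8, 0.9), (0.85, 0.85)]
      print(sfc_tour(pts))   # nearby customers end up adjacent in the tour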

  10. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem

    PubMed Central

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial and feasible solution very quickly, and the GA is used to improve the initial solution. Thereafter, experimental software was developed and a large number of experimental computations from Solomon's benchmark have been studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171

  11. Monitoring long-range electron transfer pathways in proteins by stimulated attosecond broadband X-ray Raman spectroscopy

    DOE PAGES

    Zhang, Yu; Biggs, Jason D.; Govind, Niranjan; ...

    2014-10-09

    Long-range electron transfer (ET) plays a key role in many biological energy conversion and synthesis processes. We show that nonlinear spectroscopy with attosecond X-ray pulses provides a real-time movie of the evolving oxidation states and electron densities around atoms, and can probe these processes with high spatial and temporal resolution. This is demonstrated in a simulation study of the stimulated X-ray Raman (SXRS) signals in Re-modified azurin, which has long served as a benchmark for long-range ET in proteins. Nonlinear SXRS signals are sensitive to the local electronic structure and should offer a novel window for long-range ET.

  12. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W

    This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.

  13. Pse-Analysis: a python package for DNA/RNA and protein/peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace in conducting genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All the work a user needs to do is to input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.

  14. A comparison of common programming languages used in bioinformatics.

    PubMed

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
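
    A harness of the kind used for such comparisons is straightforward; the sketch below times one Python workload and records its peak memory (the study itself compiled and ran equivalent programs in six languages, which this does not reproduce):

      import time, tracemalloc

      def benchmark(fn, *args):
          """Run fn(*args) once; return (result, seconds, peak bytes)."""
          tracemalloc.start()
          t0 = time.perf_counter()
          result = fn(*args)
          elapsed = time.perf_counter() - t0
          _, peak = tracemalloc.get_traced_memory()
          tracemalloc.stop()
          return result, elapsed, peak

      def workload(n):                  # toy stand-in for, e.g., an alignment
          return sum(i * i for i in range(n))

      _, secs, peak_bytes = benchmark(workload, 1_000_000)
      print(f"{secs:.3f} s, peak {peak_bytes / 1024:.1f} KiB")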

  15. An efficient pseudomedian filter for tiling microarrays.

    PubMed

    Royce, Thomas E; Carriero, Nicholas J; Gerstein, Mark B

    2007-06-07

    Tiling microarrays are becoming an essential technology in the functional genomics toolbox. They have been applied to the tasks of novel transcript identification, elucidation of transcription factor binding sites, detection of methylated DNA and several other applications in several model organisms. These experiments are being conducted at increasingly finer resolutions as the microarray technology enjoys increasingly greater feature densities. The increased densities naturally lead to increased data analysis requirements. Specifically, the most widely employed algorithm for tiling array analysis involves smoothing observed signals by computing pseudomedians within sliding windows, an O(n² log n) calculation in each window. This poor time complexity is an issue for tiling array analysis and could prove to be a real bottleneck as tiling microarray experiments become grander in scope and finer in resolution. We therefore implemented Monahan's HLQEST algorithm that reduces the runtime complexity for computing the pseudomedian of n numbers to O(n log n) from O(n² log n). For a representative tiling microarray dataset, this modification reduced the smoothing procedure's runtime by nearly 90%. We then leveraged the fact that elements within sliding windows remain largely unchanged in overlapping windows (as one slides across genomic space) to further reduce computation by an additional 43%. This was achieved by the application of skip lists for maintaining a sorted list of values from window to window. This sorted list could be maintained with simple O(log n) inserts and deletes. We illustrate the favorable scaling properties of our algorithms with both time complexity analysis and benchmarking on synthetic datasets. Tiling microarray analyses that rely upon a sliding window pseudomedian calculation can require many hours of computation. We have eased this requirement significantly by implementing efficient algorithms that scale well with genomic feature density. This result not only speeds the current standard analyses, but also makes possible ones where many iterations of the filter may be required, such as might be required in a bootstrap or parameter estimation setting. Source code and executables are available at http://tiling.gersteinlab.org/pseudomedian/.
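
    The quantity being smoothed is the pseudomedian (Hodges-Lehmann estimator): the median of all pairwise Walsh averages (x_i + x_j)/2 with i <= j. The naive form below is the O(n² log n) bottleneck that Monahan's algorithm avoids; it is shown only to make the definition concrete:

      import statistics
      from itertools import combinations_with_replacement

      def pseudomedian_naive(xs):
          """Median of all Walsh averages -- O(n^2 log n) in the window size."""
          walsh = [(a + b) / 2 for a, b in combinations_with_replacement(xs, 2)]
          return statistics.median(walsh)

      print(pseudomedian_naive([1.0, 2.0, 4.0, 8.0, 100.0]))  # 5.0, outlier-robust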

  16. An efficient pseudomedian filter for tiling microarrays

    PubMed Central

    Royce, Thomas E; Carriero, Nicholas J; Gerstein, Mark B

    2007-01-01

    Background Tiling microarrays are becoming an essential technology in the functional genomics toolbox. They have been applied to the tasks of novel transcript identification, elucidation of transcription factor binding sites, detection of methylated DNA and several other applications in several model organisms. These experiments are being conducted at increasingly finer resolutions as the microarray technology enjoys increasingly greater feature densities. The increased densities naturally lead to increased data analysis requirements. Specifically, the most widely employed algorithm for tiling array analysis involves smoothing observed signals by computing pseudomedians within sliding windows, an O(n² log n) calculation in each window. This poor time complexity is an issue for tiling array analysis and could prove to be a real bottleneck as tiling microarray experiments become grander in scope and finer in resolution. Results We therefore implemented Monahan's HLQEST algorithm that reduces the runtime complexity for computing the pseudomedian of n numbers to O(n log n) from O(n² log n). For a representative tiling microarray dataset, this modification reduced the smoothing procedure's runtime by nearly 90%. We then leveraged the fact that elements within sliding windows remain largely unchanged in overlapping windows (as one slides across genomic space) to further reduce computation by an additional 43%. This was achieved by the application of skip lists for maintaining a sorted list of values from window to window. This sorted list could be maintained with simple O(log n) inserts and deletes. We illustrate the favorable scaling properties of our algorithms with both time complexity analysis and benchmarking on synthetic datasets. Conclusion Tiling microarray analyses that rely upon a sliding window pseudomedian calculation can require many hours of computation. We have eased this requirement significantly by implementing efficient algorithms that scale well with genomic feature density. This result not only speeds the current standard analyses, but also makes possible ones where many iterations of the filter may be required, such as might be required in a bootstrap or parameter estimation setting. Source code and executables are available at http://tiling.gersteinlab.org/pseudomedian/. PMID:17555595

  17. Investigation of prototypal MOFs consisting of polyhedral cages with accessible Lewis-acid sites for quinoline synthesis.

    PubMed

    Gao, Wen-Yang; Leng, Kunyue; Cash, Lindsay; Chrzanowski, Matthew; Stackhouse, Chavis A; Sun, Yinyong; Ma, Shengqian

    2015-03-21

    A series of prototypal metal-organic frameworks (MOFs) consisting of polyhedral cages with accessible Lewis-acid sites has been systematically investigated for the Friedländer annulation reaction, a straightforward approach to synthesizing quinoline and its derivatives. Amongst them, MMCF-2 demonstrates significantly enhanced catalytic activity compared with the benchmark MOFs HKUST-1 and MOF-505, as a result of a high density of accessible Cu(II) Lewis-acid sites and the large window size in the cuboctahedral cage-based nanoreactor of MMCF-2.

  18. Characterizing system dynamics with a weighted and directed network constructed from time series data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiaoran; Small, Michael

    In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, the dynamics underlying the time series has been encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structure. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
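
    The construction can be sketched with ordinal patterns alone: slide a window over the series, symbolize each segment by the rank order of its values, and add a directed weighted edge between the symbols of consecutive windows. (The paper's doubly symbolic scheme also keeps amplitude information, which this sketch omits; the series is invented.)

      from collections import Counter

      def ordinal_pattern(segment):
          """Rank-order symbol of a window, e.g. (1, 0, 2) for [0.5, 0.1, 0.9]."""
          return tuple(sorted(range(len(segment)), key=lambda i: segment[i]))

      def transition_network(series, w=3):
          symbols = [ordinal_pattern(series[i:i + w])
                     for i in range(len(series) - w + 1)]
          return Counter(zip(symbols, symbols[1:]))   # directed edge weights

      x = [0.1, 0.5, 0.3, 0.9, 0.2, 0.8, 0.4, 0.7, 0.6, 1.0]
      for (u, v), weight in transition_network(x).items():
          print(u, "->", v, "weight", weight)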

  19. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
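
    The core of a state-vector simulator fits in a few lines: the 2^n amplitudes live in one array and a single-qubit gate transforms amplitude pairs along one tensor axis. A toy sketch of the principle (the paper's simulator adds parallelism and distributed memory, none of which is shown):

      import numpy as np

      def apply_1q(state, gate, target, n):
          """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
          psi = np.moveaxis(state.reshape([2] * n), target, 0)
          psi[...] = np.tensordot(gate, psi, axes=([1], [0]))
          return state

      n = 2
      state = np.zeros(2 ** n, dtype=complex)
      state[0] = 1.0                                # start in |00>
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
      apply_1q(state, H, 0, n)
      print(np.round(state, 3))                     # (|00> + |10>)/sqrt(2)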

  20. Broadband laser ranging precision and accuracy experiments with PDV benchmarking

    NASA Astrophysics Data System (ADS)

    Catenacci, Jared; Daykin, Ed; Howard, Marylesa; Lalone, Brandon; Miller, Kirk

    2017-06-01

    Broadband laser ranging (BLR) is a developmental diagnostic designed to measure the precise position of surfaces and particle clouds moving at velocities of several kilometers per second. Recent single stage gas gun experiments were conducted to quantify the precision and accuracy possible with a typical BLR system. For these experiments, the position of a mirrored projectile is measured relative to the location of a stationary optical flat (uncoated window) mounted within the gun catch tank. Projectile velocity is constrained to one-dimensional motion within the gun barrel. A collimating probe is aligned to be orthogonal to both the target window and the mirrored impactor surface. The probe is used to simultaneously measure the position and velocity with a BLR and conventional Photonic Doppler Velocimetry (PDV) system. Since there is a negligible lateral component to the target velocity, coupled with strong signal returns from a mirrored surface, integrating the PDV measurement provides a high fidelity distance measurement reference to which the BLR measurement may be compared.

  1. Improved hybrid information filtering based on limited time window

    NASA Astrophysics Data System (ADS)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting the entire collected information of users, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was successfully proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective for capturing users' potential interests, we present an improved hybrid information filtering algorithm that adopts only part of the recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links verified by the testing set. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 31% of the recent rating records, the accuracy could be improved by an average of 4.22% and the diversity could be improved by 13.74%. In addition, the performance on the dataset MovieLens could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective at solving the cold-start problem. This work could improve information filtering performance and shorten the computational time.

  2. Transfer Ionization Studies for Proton on He - New Insight into the World of Correlation

    NASA Astrophysics Data System (ADS)

    Schmidt-Böcking, Horst

    2005-04-01

    Correlated many-particle dynamics in Coulombic systems, which is one of the unsolved fundamental problems in AMO physics, can now be experimentally approached with so far unprecedented completeness and precision. The recent development of the COLTRIMS technique (COLd Target Recoil Ion Momentum Spectroscopy) provides a coincident multi-fragment imaging technique for eV and sub-eV fragment detection. In its completeness it is as powerful as the bubble chamber in high-energy physics. In recent benchmark experiments, quasi-snapshots (of durations as short as an attosecond) of the correlated dynamics between electrons and nuclei have been made for atomic and molecular objects. This new imaging technique has opened a powerful observation window into the hidden world of many-particle dynamics. Recent transfer ionization studies will be presented and the direct observation of correlated electron pairs will be discussed.

  3. Charge Density Dependent Hole Mobility and Density of States Throughout the Entire Finite Potential Window of Conductivity in Ionic Liquid Gated Poly(3-hexylthiophene)

    NASA Astrophysics Data System (ADS)

    Paulsen, Bryan D.; Frisbie, C. Daniel

    2012-02-01

    Ionic liquids, used in place of traditional gate dielectric materials, allow for the accumulation of very high 2D and 3D charge densities (>10^14 #/cm^2 and >10^21 #/cm^3, respectively) at low voltage (<5 V). Here we study the electrochemical gating of the benchmark semiconducting polymer poly(3-hexylthiophene) (P3HT) with the ionic liquid 1-ethyl-3-methylimidazolium tris(pentafluoroethyl)trifluorophosphate ([EMI][FAP]). The electrochemical stability of [EMI][FAP] allowed the reproducible accumulation of 2 x 10^21 holes/cm^3, or one hole (and stabilizing anion dopant) per every two thiophene rings. A finite potential/charge density window of high electrical conductivity was observed, with hole mobility reaching a maximum of 0.86 cm^2/V s at 0.12 holes per thiophene ring. Displacement current measurements, collected versus a calibrated reference electrode, allowed the mapping of the highly structured and extremely broad density of states of the doped P3HT/[EMI][FAP] composite. Variable temperature and charge density hole transport measurements revealed hole transport to be thermally activated and non-monotonic, displaying an activation energy minimum of ~20 meV in the region of maximum conductivity and hole mobility. To show the generality of this result, the study was extended to an additional four ionic liquids and three semiconducting polymers.

  4. A benchmark for comparison of cell tracking algorithms

    PubMed Central

    Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M.; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J.; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. Availability and implementation: The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge. Contact: codesolorzano@unav.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24526711

  5. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  6. Measurement and validation of benchmark-quality thick-target tungsten X-ray spectra below 150 kVp.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-11-01

    Pulse-height distributions of two constant potential X-ray tubes with fixed anode tungsten targets were measured and unfolded. The measurements employed quantitative alignment of the beam, the use of two different semiconductor detectors (high-purity germanium and cadmium-zinc-telluride), two different ion chamber systems with beam-specific calibration factors, and various filter and tube potential combinations. Monte Carlo response matrices were generated for each detector for unfolding the pulse-height distributions into spectra incident on the detectors. These response matrices were validated for the low error bars assigned to the data. A significant aspect of the validation of spectra, and a detailed characterization of the X-ray tubes, involved measuring filtered and unfiltered beams at multiple tube potentials (30-150 kVp). Full corrections to ion chamber readings were employed to convert normalized fluence spectra into absolute fluence spectra. Fixed anode pitting was characterized, and its dominance over exit window plating and/or detector dead-layer effects was determined. An Appendix of tabulated benchmark spectra with assigned error ranges was developed for future reference.
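
    The unfolding step, recovering an incident spectrum s from a measured pulse-height distribution m = R s through a Monte Carlo response matrix R, can be sketched schematically with non-negative least squares; the toy response matrix below is invented, and the paper's actual matrices and procedure are far more detailed:

        import numpy as np
        from scipy.optimize import nnls

        n = 5
        # toy response: 80% full-energy peak on the diagonal plus a flat
        # 10% down-scatter tail into lower pulse-height bins
        R = np.triu(np.full((n, n), 0.1)) + np.eye(n) * 0.8
        s_true = np.array([0.0, 2.0, 5.0, 3.0, 1.0])   # "true" incident spectrum
        m = R @ s_true                                 # noiseless measurement

        # non-negative least squares keeps the unfolded fluence physical
        s_unfolded, residual = nnls(R, m)
        print(np.round(s_unfolded, 3), residual)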

  7. Scalable and cost-effective NGS genotyping in the cloud.

    PubMed

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at scales of economy in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Service (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for optimizing the clinical turn-around of whole genome analysis and workflow management, including strategic batching of individual genomes and efficient cluster resource configuration.
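
    The abstract's strategic batching of individual genomes is not spelled out here, so the following is only a plausible sketch of one common approach: greedy longest-processing-time load balancing of genomes across cluster nodes. The sizes and node count are hypothetical:

        import heapq

        def batch_genomes(genome_sizes, n_nodes):
            """Greedy longest-processing-time batching: assign each genome
            (largest first) to the currently least-loaded node.
            Returns one (total_load, node_id, genome_indices) tuple per node."""
            nodes = [(0.0, i, []) for i in range(n_nodes)]
            heapq.heapify(nodes)
            order = sorted(range(len(genome_sizes)), key=lambda i: -genome_sizes[i])
            for g in order:
                load, i, members = heapq.heappop(nodes)
                members.append(g)
                heapq.heappush(nodes, (load + genome_sizes[g], i, members))
            return sorted(nodes, key=lambda t: t[1])

        # hypothetical per-genome input sizes (GB)
        sizes = [120, 95, 90, 60, 55, 40, 35, 30]
        for load, node, members in batch_genomes(sizes, n_nodes=3):
            print(f"node {node}: genomes {members}, total {load:g} GB")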

  8. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    PubMed

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
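
    A minimal PyTorch sketch of the framewise setup, with 26 input features and 61 TIMIT phoneme classes as in the paper but otherwise illustrative dimensions:

        import torch
        import torch.nn as nn

        class FramewiseBLSTM(nn.Module):
            """Bidirectional LSTM emitting one phoneme posterior per frame."""
            def __init__(self, n_features=26, n_hidden=93, n_phonemes=61):
                super().__init__()
                self.blstm = nn.LSTM(n_features, n_hidden,
                                     batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * n_hidden, n_phonemes)  # fwd + bwd states

            def forward(self, x):          # x: (batch, frames, features)
                h, _ = self.blstm(x)       # h: (batch, frames, 2*hidden)
                return self.out(h)         # logits per frame

        model = FramewiseBLSTM()
        frames = torch.randn(8, 100, 26)   # batch of 8 utterances, 100 frames each
        logits = model(frames)             # (8, 100, 61)
        loss = nn.CrossEntropyLoss()(logits.reshape(-1, 61),
                                     torch.randint(61, (8 * 100,)))
        print(logits.shape, float(loss))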

  9. Product screening of fast reactions in IR-laser-heated liquid water filaments in a vacuum by mass spectrometry.

    PubMed

    Charvat, A; Stasicki, B; Abel, B

    2006-03-09

    In the present article a novel approach for rapid product screening of fast reactions in IR-laser-heated liquid microbeams in a vacuum is highlighted. From absorbed energies, a shock wave analysis, high-speed laser stroboscopy, and thermodynamic data of high-temperature water, the enthalpy, temperature, density, pressure, and the reaction time window for the hot water filament could be characterized. The experimental conditions (30 kbar, 1750 K, density approximately 1 g/cm3) present during the lifetime of the filament (20-30 ns) were extreme and provided a unique environment for high-temperature water chemistry. To probe the reaction products, liquid beam desorption mass spectrometry was employed. A decisive feature of the technique is that ionic species, as well as neutral products and intermediates, may be detected (neutrals as protonated aggregates) via time-of-flight mass spectrometry without any additional ionization laser. After the explosive disintegration of the superheated beam, high-temperature water reactions are efficiently quenched via expansion and evaporative cooling. For the first exploratory experiments on chemistry in ultrahigh-temperature, -pressure and -density water, we have chosen resorcinol as a benchmark system, simple enough and well studied in high-temperature water environments much below 1000 K. Contrary to oxidation reactions usually present under less extreme and dense supercritical conditions, we have observed hydration and little H-atom abstraction during the narrow time window of the experiment. Small amounts of radicals but no ionic intermediates other than simple proton adducts were detected. The experimental findings are discussed in terms of the energetic and dense environment and the small time window for reaction, and they provide firm evidence for additional thermal reaction channels in extreme molecular environments.

  10. Autonomous public organization policy: a case study for the health sector in Thailand.

    PubMed

    Rajataramya, B; Fried, B; van der Pütten, M; Pongpanich, S

    2009-09-01

    This paper describes factors affecting autonomous public organization (APO) policy agenda setting and policy formation through comparison of policy processes applied to one educational institute under the Ministry of Education and another educational institute under the Ministry of Public Health in Thailand. This study employs a mixed-methods design, including a qualitative approach through documentary research, in-depth interviews, and participant observation. Factors that facilitated the formulation of the APO policy were: (1) awareness of need; (2) clarity of strategies; (3) leadership, advocacy, and strategic partnerships; (4) clear organizational identity; (5) a participatory approach to policy formulation; and (6) identification of a policy window. Factors that impeded the formulation of the APO policy were: (1) diverting political priorities; (2) ill-defined organizational identity; (3) fluctuating leadership direction; (4) inadequate participation of stakeholders; and (5) political instability. Although findings cannot be generalized, this case study does offer benchmarking for those in search of ways to enhance processes of policy formulation.

  11. Characterizing Detrended Fluctuation Analysis of multifractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Setty, V. A.; Sharma, A. S.

    2015-02-01

    The Hurst exponent (H) is widely used to quantify long range dependence in time series data and is estimated using several well known techniques. Recognizing its ability to remove trends, the Detrended Fluctuation Analysis (DFA) is used extensively to estimate a Hurst exponent in non-stationary data. Multifractional Brownian motion (mBm) broadly encompasses a set of models of non-stationary data exhibiting time varying Hurst exponents, H(t), as against a constant H. Recently, there has been a growing interest in the time dependence of H(t), and sliding window techniques have been used to estimate a local time average of the exponent. This brought to the fore the ability of DFA to estimate scaling exponents in systems with time varying H(t), such as mBm. This paper characterizes the performance of DFA on mBm data with linearly varying H(t) and further tests the robustness of the estimated time average with respect to data- and technique-related parameters. Our results serve as a benchmark for using DFA as a sliding window estimator to obtain H(t) from time series data.
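
    A compact sketch of first-order DFA and its sliding-window use; scales and window sizes below are illustrative:

        import numpy as np

        def dfa(x, scales=(4, 8, 16, 32, 64), order=1):
            """Detrended Fluctuation Analysis: returns the scaling exponent
            alpha (approximately H for fractional-Gaussian-noise-like signals)."""
            y = np.cumsum(x - np.mean(x))             # integrated profile
            F = []
            for n in scales:
                n_seg = len(y) // n
                segs = y[:n_seg * n].reshape(n_seg, n)
                t = np.arange(n)
                f2 = []
                for seg in segs:
                    coef = np.polyfit(t, seg, order)  # local polynomial trend
                    f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(f2)))
            alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return alpha

        def sliding_H(x, win=1024, step=256):
            """Local time-averaged exponent H(t) from overlapping windows."""
            return [dfa(x[i:i + win]) for i in range(0, len(x) - win + 1, step)]

        rng = np.random.default_rng(0)
        print(sliding_H(rng.standard_normal(4096))[:3])   # ~0.5 for white noise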

  12. MAT - MULTI-ATTRIBUTE TASK BATTERY FOR HUMAN OPERATOR WORKLOAD AND STRATEGIC BEHAVIOR RESEARCH

    NASA Technical Reports Server (NTRS)

    Comstock, J. R.

    1994-01-01

    MAT, a Multi-Attribute Task battery, gives the researcher the capability of performing multi-task workload and performance experiments. The battery provides a benchmark set of tasks for use in a wide range of laboratory studies of operator performance and workload. MAT incorporates tasks analogous to activities that aircraft crew members perform in flight, while providing a high degree of experiment control, performance data on each subtask, and freedom to use non-pilot test subjects. The MAT battery primary display is composed of four separate task windows which are as follows: a monitoring task window which includes gauges and warning lights, a tracking task window for the demands of manual control, a communication task window to simulate air traffic control communications, and a resource management task window which permits maintaining target levels on a fuel management task. In addition, a scheduling task window gives the researcher information about future task demands. The battery also provides the option of manual or automated control of tasks. The task generates performance data for each subtask. The task battery may be paused and onscreen workload rating scales presented to the subject. The MAT battery was designed to use a serially linked second computer to generate the voice messages for the Communications task. The MATREMX program and support files, which are included in the MAT package, were designed to work with the Heath Voice Card (Model HV-2000, available through the Heath Company, Benton Harbor, Michigan 49022); however, the MATREMX program and support files may easily be modified to work with other voice synthesizer or digitizer cards. The MAT battery task computer may also be used independent of the voice computer if no computer synthesized voice messages are desired or if some other method of presenting auditory messages is devised. MAT is written in QuickBasic and assembly language for IBM PC series and compatible computers running MS-DOS. The code in MAT is written for Microsoft QuickBasic 4.5 and Microsoft Macro Assembler 5.1. This package requires a joystick and EGA or VGA color graphics. An 80286, 386, or 486 processor machine is highly recommended. The standard distribution medium for MAT is a 5.25 inch 360K MS-DOS format diskette. The files are compressed using the PKZIP file compression utility. PKUNZIP is included on the distribution diskette. MAT was developed in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS, Microsoft QuickBasic, and Microsoft Macro Assembler are registered trademarks of Microsoft Corporation. PKZIP and PKUNZIP are registered trademarks of PKWare, Inc.

  13. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models, introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures (realized volatility and the realized kernel) as our benchmarks, we also use generalized realized risk measures: realized absolute deviation and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better in-sample volatility estimation and substantial improvement in out-of-sample volatility forecasting. In particular, realized expected shortfall performs best among all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
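
    For reference, the structure shared by these models is the log-linear Realized GARCH(1,1) of Hansen, Huang and Shek (2012), in which a realized measure x_t enters both the variance recursion and a measurement equation. A simulation sketch with purely illustrative coefficients (the study substitutes alternative realized risk measures for x_t):

        import numpy as np

        def simulate_realized_garch(T=1000, omega=0.06, beta=0.55, gamma=0.41,
                                    xi=-0.18, phi=1.0, tau1=-0.07, tau2=0.07,
                                    sigma_u=0.38, seed=0):
            """Simulate log-linear Realized GARCH(1,1):
               returns      r_t = sqrt(h_t) z_t
               GARCH eq.    log h_t = omega + beta log h_{t-1} + gamma log x_{t-1}
               measurement  log x_t = xi + phi log h_t + tau1 z_t
                                      + tau2 (z_t^2 - 1) + u_t
            where x_t is a realized measure (e.g. realized volatility/kernel)."""
            rng = np.random.default_rng(seed)
            log_h = np.empty(T)
            log_x = np.empty(T)
            r = np.empty(T)
            # start at the unconditional mean (ignoring the tau/u terms)
            log_h[0] = (omega + gamma * xi) / (1 - beta - gamma * phi)
            for t in range(T):
                z = rng.standard_normal()
                r[t] = np.exp(0.5 * log_h[t]) * z
                log_x[t] = (xi + phi * log_h[t] + tau1 * z
                            + tau2 * (z ** 2 - 1) + sigma_u * rng.standard_normal())
                if t + 1 < T:
                    log_h[t + 1] = omega + beta * log_h[t] + gamma * log_x[t]
            return r, np.exp(log_x), np.exp(log_h)

        r, x, h = simulate_realized_garch()
        print(r[:3], x[:3])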

  14. Davisson-Germer Prize in Atomic or Surface Physics: The COLTRIMS multi-particle imaging technique-new Insight into the World of Correlation

    NASA Astrophysics Data System (ADS)

    Schmidt-Bocking, Horst

    2008-05-01

    The correlated many-particle dynamics in Coulombic systems, which is one of the unsolved fundamental problems in AMO-physics, can now be experimentally approached with so far unprecedented completeness and precision. The recent development of the COLTRIMS technique (COLd Target Recoil Ion Momentum Spectroscopy) provides a coincident multi-fragment imaging technique for eV and sub-eV fragment detection. In its completeness it is as powerful as the bubble chamber in high energy physics. In recent benchmark experiments quasi-snapshots (with durations as short as an attosecond) of the correlated dynamics between electrons and nuclei have been made for atomic and molecular objects. This new imaging technique has opened a powerful observation window into the hidden world of many-particle dynamics. Recent multiple-ionization studies will be presented and the observation of correlated electron pairs will be discussed.

  15. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
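
    A generic sketch of the weight-window step itself, splitting above the window and playing Russian roulette below it; this is the textbook variance-reduction technique, not the McENL implementation:

        import random

        def apply_weight_window(particles, w_low, w_high, w_survive=None):
            """Apply a weight window to a list of (weight, state) particles:
            split particles above the window, roulette those below it."""
            if w_survive is None:
                w_survive = 0.5 * (w_low + w_high)
            out = []
            for w, state in particles:
                if w > w_high:                    # split into comparable copies
                    n = int(w / w_high) + 1
                    out.extend([(w / n, state)] * n)
                elif w < w_low:                   # Russian roulette
                    if random.random() < w / w_survive:
                        out.append((w_survive, state))  # survivor absorbs weight
                    # else: particle killed; weight conserved on average
                else:
                    out.append((w, state))
            return out

        random.seed(1)
        ps = [(5.0, "a"), (0.05, "b"), (0.5, "c")]
        print(apply_weight_window(ps, w_low=0.25, w_high=1.0))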

  16. Telescience Resource Kit (TReK)

    NASA Technical Reports Server (NTRS)

    Lippincott, Jeff

    2015-01-01

    Telescience Resource Kit (TReK) is one of the Huntsville Operations Support Center (HOSC) remote operations solutions. It can be used to monitor and control International Space Station (ISS) payloads from anywhere in the world. It comprises a suite of software applications and libraries that provide generic data system capabilities and access to HOSC services. The TReK software has been operational since 2000. A new cross-platform version of TReK is under development. The new software is being released in phases during the 2014-2016 timeframe. The TReK Release 3.x series of software is the original TReK software that has been operational since 2000. This software runs on Windows. It contains capabilities to support traditional telemetry and commanding using CCSDS (Consultative Committee for Space Data Systems) packets. The TReK Release 4.x series of software is the new cross-platform software. It runs on Windows and Linux. The new TReK software will support communication using standard IP protocols and traditional telemetry and commanding. All the software listed above is compatible and can be installed and run together on Windows. The new TReK software contains a suite of software that can be used by payload developers on the ground and onboard (TReK Toolkit). TReK Toolkit is a suite of lightweight libraries and utility applications for use onboard and on the ground. TReK Desktop is the full suite of TReK software, most useful on the ground. When TReK Desktop is released, the TReK installation program will provide the option to choose just the TReK Toolkit portion of the software or the full TReK Desktop suite. The ISS program is providing the TReK Toolkit software as a generic flight software capability offered as a standard service to payloads. TReK software verification was conducted during the April/May 2015 timeframe. Payload teams using the TReK software onboard can reference the TReK software verification. TReK will be demonstrated on-orbit running on an ISS-provided T61p laptop. Target timeframe: September 2015 - 2016. The on-orbit demonstration will collect benchmark metrics, and will be used in the future to provide live demonstrations during ISS Payload Conferences. Benchmark metrics and demonstrations will address the protocols described in SSP 52050-0047 Ku Forward section 3.3.7. (Associated term: CCSDS File Delivery Protocol (CFDP)).

  17. Reaction-time-resolved measurements of laser-induced fluorescence in a shock tube with a single laser pulse

    NASA Astrophysics Data System (ADS)

    Zabeti, S.; Fikri, M.; Schulz, C.

    2017-11-01

    Shock tubes allow for the study of ultra-fast gas-phase reactions on the microsecond time scale. Because the repetition rate of the experiments is low, it is crucial to gain as much information as possible from each individual measurement. While reaction-time-resolved species concentration and temperature measurements with fast absorption methods are established, conventional laser-induced fluorescence (LIF) measurements with pulsed lasers provide data only at a single reaction time. Therefore, fluorescence methods have rarely been used in shock-tube diagnostics. In this paper, a novel experimental concept is presented that allows reaction-time-resolved LIF measurements with one single laser pulse using a test section that is equipped with several optical ports. After the passage of the shock wave, the reactive mixture is excited along the center of the tube with a 266-nm laser beam directed through a window in the end wall of the shock tube. The emitted LIF signal is collected through elongated sidewall windows and focused onto the entrance slit of an imaging spectrometer coupled to an intensified CCD camera. The one-dimensional spatial resolution of the measurement translates into a reaction-time-resolved measurement while the species information can be gained from the spectral axis of the detected two-dimensional image. Anisole pyrolysis was selected as the benchmark reaction to demonstrate the new apparatus.

  18. Utilization of intravenous tissue plasminogen activator for ischemic stroke: are there sex differences?

    PubMed

    Allen, Norrina B; Myers, Daniela; Watanabe, Emi; Dostal, Jackie; Sama, Danny; Goldstein, Larry B; Lichtman, Judith H

    2009-01-01

    We evaluated whether there were sex-related differences in the administration of intravenous tissue plasminogen activator (IV-tPA) to patients with acute ischemic stroke admitted to US academic medical centers. Medical records were abstracted for consecutive ischemic stroke patients admitted to 32 academic medical centers from January through June, 2004, as part of the University HealthSystem Consortium Ischemic Stroke Benchmarking Project. Multivariate logistic models were used to test for sex-related differences in the receipt of IV-tPA with adjustment for demographic and clinical factors. The study included 1,234 patients (49% women; mean age 66.6 years; 56% white). IV-tPA was given to 7% (6.5% of women versus 7.5% of men, p = 0.49). Women and men were equally likely to receive IV-tPA in risk-adjusted analyses (OR 1.02, 95% CI 0.64-1.64). Approximately 77% of women and men who did not receive IV-tPA did not meet the 3-hour treatment window or their time of onset was unknown. Women admitted to academic hospitals receive IV-tPA as often as men; however, a substantial percentage of both women and men are not arriving within the 3-hour time window required for diagnostic assessment and administration of intravenous thrombolytic therapy. Additional efforts are needed to improve the rapid identification, evaluation and treatment of stroke patients.

  19. Performance Evaluation of State of the Art Systems for Physical Activity Classification of Older Subjects Using Inertial Sensors in a Real Life Scenario: A Benchmark Study

    PubMed Central

    Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo

    2016-01-01

    The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, features set, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back. The results, similarly to the multi-sensor setup, indicate substantial degradation of the performance when laboratory-trained systems are tested in the real-life setting. This degradation is higher than in the multi-sensor setup. Still, the performance provided by the single-sensor approach, when trained and tested with real data, can be acceptable (with an accuracy above 80%). PMID:27973434
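
    The window-segmentation step that drives the reported size sensitivity can be sketched generically. The features below (per-axis mean and standard deviation) and the synthetic data are only illustrative; the benchmarked systems use richer feature sets and classifiers:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def windows(signal, labels, fs=50, win_s=2.0, step_s=1.0):
            """Segment a tri-axial accelerometer stream into fixed windows and
            compute simple time-domain features (mean, std per axis)."""
            win, step = int(fs * win_s), int(fs * step_s)
            X, y = [], []
            for start in range(0, len(signal) - win + 1, step):
                seg = signal[start:start + win]               # (win, 3)
                X.append(np.r_[seg.mean(axis=0), seg.std(axis=0)])
                # label each window by majority vote of its samples
                y.append(np.bincount(labels[start:start + win]).argmax())
            return np.array(X), np.array(y)

        rng = np.random.default_rng(0)
        acc = rng.standard_normal((5000, 3))    # fake lower-back sensor stream
        lab = rng.integers(0, 4, 5000)          # sit / stand / walk / lie
        X, y = windows(acc, lab)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        print(X.shape, clf.score(X, y))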

  20. Using physiologically based pharmacokinetic modeling and benchmark dose methods to derive an occupational exposure limit for N-methylpyrrolidone.

    PubMed

    Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R

    2016-04-01

    The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg*hr/L), was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
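
    The benchmark-dose step can be illustrated in isolation: fit a dose-response model to a continuous endpoint and invert it at the benchmark response. The model form, data, and the 5% BMR below are hypothetical and merely stand in for the authors' pooled, PBPK-based analysis:

        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def expo(d, a, b):
            """Simple exponential dose-response model for a continuous endpoint
            (e.g. fetal body weight declining with internal dose)."""
            return a * np.exp(-b * d)

        # hypothetical internal-dose vs. fetal-weight data
        dose = np.array([0, 50, 150, 300, 600], dtype=float)   # mg*hr/L
        resp = np.array([6.0, 5.9, 5.6, 5.1, 4.2])             # grams

        (a, b), _ = curve_fit(expo, dose, resp, p0=(6.0, 1e-3))

        bmr = 0.05                               # 5% decrease from control
        target = expo(0, a, b) * (1 - bmr)
        bmd = brentq(lambda d: expo(d, a, b) - target, 1e-9, dose.max())
        print(f"BMD at {bmr:.0%} response: {bmd:.1f} mg*hr/L")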

  1. Benchmark Study of Global Clean Energy Manufacturing

    Science.gov Websites

    Through a first-of-its-kind benchmark study of global clean energy manufacturing, NREL examined four clean energy technologies, including wind turbine components.

  2. A comparison of common programming languages used in bioinformatics

    PubMed Central

    Fourment, Mathieu; Gillings, Michael R

    2008-01-01

    Background: The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results: Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux, and no clear evidence of a faster operating system was found. Source code and additional information are available online. Conclusion: This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993
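
    A sketch of the kind of measurement harness such a comparison requires, here in Python (best-of-N wall time plus peak interpreter memory); the stub workload merely stands in for the Sellers, Neighbor-Joining, and BLAST-parsing programs:

        import time
        import tracemalloc

        def benchmark(fn, *args, repeats=5):
            """Measure best-of-N wall time and peak Python memory for one call."""
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn(*args)
                best = min(best, time.perf_counter() - t0)
            tracemalloc.start()
            fn(*args)
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            return best, peak

        def distance_matrix_stub(n):
            """Stand-in workload: build and scan an n x n distance matrix."""
            d = [[abs(i - j) for j in range(n)] for i in range(n)]
            return sum(min(row) for row in d)

        t, mem = benchmark(distance_matrix_stub, 300)
        print(f"best time {t*1e3:.1f} ms, peak memory {mem/1e6:.1f} MB")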

  3. Optical properties of mineral dust aerosol in the thermal infrared

    NASA Astrophysics Data System (ADS)

    Köhler, Claas H.

    2017-02-01

    The optical properties of mineral dust and biomass burning aerosol in the thermal infrared (TIR) are examined by means of Fourier Transform Infrared Spectrometer (FTIR) measurements and radiative transfer (RT) simulations. The measurements were conducted within the scope of the Saharan Mineral Dust Experiment 2 (SAMUM-2) at Praia (Cape Verde) in January and February 2008. The aerosol radiative effect in the TIR atmospheric window region 800-1200 cm-1 (8-12 µm) is discussed in two case studies. The first case study employs a combination of IASI measurements and RT simulations to investigate a lofted optically thin biomass burning layer with emphasis on its potential influence on sea surface temperature (SST) retrieval. The second case study uses ground based measurements to establish the importance of particle shape and refractive index for benchmark RT simulations of dust optical properties in the TIR domain. Our research confirms earlier studies suggesting that spheroidal model particles lead to a significantly improved agreement between RT simulations and measurements compared to spheres. However, room for improvement remains, as the uncertainty originating from the refractive index data for many aerosol constituents prohibits more conclusive results.

  4. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  5. Fuzzy CMAC With incremental Bayesian Ying-Yang learning and dynamic rule construction.

    PubMed

    Nguyen, M N

    2010-04-01

    Inspired by the philosophy of ancient Chinese Taoism, Xu's Bayesian ying-yang (BYY) learning technique performs clustering by harmonizing the training data (yang) with the solution (ying). In our previous work, the BYY learning technique was applied to a fuzzy cerebellar model articulation controller (FCMAC) to find the optimal fuzzy sets; however, this is not suitable for time series data analysis. To address this problem, we propose an incremental BYY learning technique in this paper, based on a sliding window and a dynamic rule-structure algorithm. Three contributions are made as a result of this research. First, an online expectation-maximization algorithm incorporating the sliding window is proposed for the fuzzification phase. Second, the memory requirement is greatly reduced, since the entire data set no longer needs to be obtained during the prediction process. Third, the dynamic rule-structure algorithm, which dynamically initializes, recruits, and prunes rules, relieves the "curse of dimensionality" problem that is inherent in the FCMAC. Because of these features, the experimental results on the benchmark data sets of currency exchange rates and Mackey-Glass show that the proposed model is more suitable for real-time streaming data analysis.

  6. Modified artificial bee colony for the vehicle routing problems with time windows.

    PubMed

    Alzaqebah, Malek; Abdullah, Salwani; Jawarneh, Sana

    2016-01-01

    The natural behaviour of the honeybee has attracted the attention of researchers in recent years, and several algorithms have been developed that mimic swarm behaviour to solve optimisation problems. This paper introduces an artificial bee colony (ABC) algorithm for the vehicle routing problem with time windows (VRPTW). A Modified ABC algorithm is proposed to improve the solution quality of the original ABC. The high exploration ability of the ABC slows down its convergence speed, which may be due to the mechanism used by scout bees in replacing abandoned (unimproved) solutions with new ones. In the Modified ABC, a list of abandoned solutions is used by the scout bees to memorise the abandoned solutions; the scout bees then select a solution from the list based on roulette wheel selection and replace it with a new solution containing random routes selected from the best solution. The performance of the Modified ABC is evaluated on Solomon benchmark datasets and compared with the original ABC. The computational results demonstrate that the Modified ABC outperforms the original ABC and also produces good solutions when compared with the best-known results in the literature. Computational investigations show that the proposed algorithm is a good and promising approach for the VRPTW.
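
    A minimal sketch of the roulette-wheel step used by the scout bees to pick from the abandoned-solutions list; the routes and fitness values are made up, and in the paper fitness would derive from VRPTW route cost:

        import random

        def roulette_select(solutions, fitness):
            """Pick one solution with probability proportional to its fitness."""
            total = sum(fitness)
            r = random.uniform(0, total)
            acc = 0.0
            for sol, f in zip(solutions, fitness):
                acc += f
                if acc >= r:
                    return sol
            return solutions[-1]

        # abandoned-solution list kept by the scout bees (routes are hypothetical)
        abandoned = [["r1", "r2"], ["r3"], ["r4", "r5", "r6"]]
        fit = [0.8, 0.3, 0.5]      # e.g. inverse of total route distance
        random.seed(42)
        print(roulette_select(abandoned, fit))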

  7. Investigation and Prediction of RF Window Performance in APT Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphries, S. Jr.

    1997-05-01

    The work described in this report was performed between November 1996 and May 1997 in support of the APT (Accelerator Production of Tritium) Program at Los Alamos National Laboratory. The goal was to write and to test computer programs for charged particle orbits in RF fields. The well-documented programs were written in portable form and compiled for standard personal computers for easy distribution to LANL researchers. They will be used in several APT applications including the following. Minimization of multipactor effects in the moderate-β superconducting linac cavities under design for the APT accelerator. Investigation of suppression techniques for electron multipactoring in high-power RF feedthroughs. Modeling of the response of electron detectors for the protection of high-power RF vacuum windows. In the contract period two new codes, Trak_RF and WaveSim, were completed and several critical benchmark tests were carried out. Trak_RF numerically tracks charged particle orbits in combined electrostatic, magnetostatic and electromagnetic fields. WaveSim determines frequency-domain RF field solutions and provides a key input to Trak_RF. The two-dimensional programs handle planar or cylindrical geometries. They have several unique characteristics.

  8. Evaluation of the efficacy and safety of rivaroxaban using a computer model for blood coagulation.

    PubMed

    Burghaus, Rolf; Coboeken, Katrin; Gaub, Thomas; Kuepfer, Lars; Sensse, Anke; Siegmund, Hans-Ulrich; Weiss, Wolfgang; Mueck, Wolfgang; Lippert, Joerg

    2011-04-22

    Rivaroxaban is an oral, direct Factor Xa inhibitor approved in the European Union and several other countries for the prevention of venous thromboembolism in adult patients undergoing elective hip or knee replacement surgery and is in advanced clinical development for the treatment of thromboembolic disorders. Its mechanism of action is antithrombin independent and differs from that of other anticoagulants, such as warfarin (a vitamin K antagonist), enoxaparin (an indirect thrombin/Factor Xa inhibitor) and dabigatran (a direct thrombin inhibitor). A blood coagulation computer model has been developed, based on several published models and preclinical and clinical data. Unlike previous models, the current model takes into account both the intrinsic and extrinsic pathways of the coagulation cascade, and possesses some unique features, including a blood flow component and a portfolio of drug action mechanisms. This study aimed to use the model to compare the mechanism of action of rivaroxaban with that of warfarin, and to evaluate the efficacy and safety of different rivaroxaban doses with other anticoagulants included in the model. Rather than reproducing known standard clinical measurements, such as the prothrombin time and activated partial thromboplastin time clotting tests, the anticoagulant benchmarking was based on a simulation of physiologically plausible clotting scenarios. Compared with warfarin, rivaroxaban showed a favourable sensitivity for tissue factor concentration inducing clotting, and a steep concentration-effect relationship, rapidly flattening towards higher inhibitor concentrations, both suggesting a broad therapeutic window. The predicted dosing window is highly accordant with the final dose recommendation based upon extensive clinical studies.

  9. Evaluation of the Efficacy and Safety of Rivaroxaban Using a Computer Model for Blood Coagulation

    PubMed Central

    Burghaus, Rolf; Coboeken, Katrin; Gaub, Thomas; Kuepfer, Lars; Sensse, Anke; Siegmund, Hans-Ulrich; Weiss, Wolfgang; Mueck, Wolfgang; Lippert, Joerg

    2011-01-01

    Rivaroxaban is an oral, direct Factor Xa inhibitor approved in the European Union and several other countries for the prevention of venous thromboembolism in adult patients undergoing elective hip or knee replacement surgery and is in advanced clinical development for the treatment of thromboembolic disorders. Its mechanism of action is antithrombin independent and differs from that of other anticoagulants, such as warfarin (a vitamin K antagonist), enoxaparin (an indirect thrombin/Factor Xa inhibitor) and dabigatran (a direct thrombin inhibitor). A blood coagulation computer model has been developed, based on several published models and preclinical and clinical data. Unlike previous models, the current model takes into account both the intrinsic and extrinsic pathways of the coagulation cascade, and possesses some unique features, including a blood flow component and a portfolio of drug action mechanisms. This study aimed to use the model to compare the mechanism of action of rivaroxaban with that of warfarin, and to evaluate the efficacy and safety of different rivaroxaban doses with other anticoagulants included in the model. Rather than reproducing known standard clinical measurements, such as the prothrombin time and activated partial thromboplastin time clotting tests, the anticoagulant benchmarking was based on a simulation of physiologically plausible clotting scenarios. Compared with warfarin, rivaroxaban showed a favourable sensitivity for tissue factor concentration inducing clotting, and a steep concentration–effect relationship, rapidly flattening towards higher inhibitor concentrations, both suggesting a broad therapeutic window. The predicted dosing window is highly accordant with the final dose recommendation based upon extensive clinical studies. PMID:21526168

  10. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
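
    A short usage sketch, assuming the pmlb Python package's documented fetch_data interface (pip install pmlb); the dataset name and model are illustrative:

        from pmlb import fetch_data, classification_dataset_names
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        print(len(classification_dataset_names), "classification benchmarks")

        # downloads the dataset on first use and caches it locally
        X, y = fetch_data("mushroom", return_X_y=True)
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
        print("mushroom 5-fold accuracy: %.3f" % scores.mean())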

  12. Needs and opportunities for CFD-code validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, B.L.

    1996-06-01

    The conceptual design for the ESS target consists of a horizontal cylinder containing a liquid metal - mercury is considered in the present study - which circulates by forced convection and carries away the waste heat generated by the spallation reactions. The protons enter the target via a beam window, which must withstand the thermal, mechanical and radiation loads to which it is subjected. For a beam power of 5 MW, it is estimated that about 3.3 MW of waste heat would be deposited in the target material and associated structures. It is intended to confirm, by detailed thermal-hydraulics calculations, that a convective flow of the liquid metal target material can effectively remove the waste heat. The present series of Computational Fluid Dynamics (CFD) calculations has indicated that a single-inlet target design leads to excessive local overheating, but a multiple-inlet design is coolable. With this option, inlet flow streams, two from the sides and one from below, merge over the target window, cooling the window itself in crossflow and carrying away the heat generated volumetrically in the mercury with a strong axial flow down the exit channel. The three intersecting streams form a complex, three-dimensional, swirling flow field in which critical heat transfer processes are taking place. In order to produce trustworthy code simulations, it is necessary that the mesh resolution is adequate for the thermal-hydraulic conditions encountered and that the physical models used by the code are appropriate to the fluid dynamic environment. The former relies on considerable user experience in the application of the code, and the latter assurance is best gained in the context of controlled benchmark activities where measured data are available. Such activities will serve to quantify the accuracy of given models and to identify potential problem areas for the numerical simulation which may not be obvious from global heat and mass balance considerations.

  13. Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Arnegard, Ruth J.; Comstock, J. R., Jr.

    1991-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  14. The multi-attribute task battery for human operator workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Arnegard, Ruth J.

    1992-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  15. Benchmarking pre-spawning fitness, climate preferendum of some catfishes from river Ganga and its proposed utility in climate research.

    PubMed

    Sarkar, Uttam Kumar; Naskar, Malay; Roy, Koushik; Sudeeshan, Deepa; Srivastava, Pankaj; Gupta, Sandipan; Bose, Arun Kumar; Verma, Vinod Kumar; Sarkar, Soma Das; Karnatak, Gunjan; Nandy, Saurav Kumar

    2017-09-07

    The concept of a threshold condition factor (Fulton), beyond which more than 50% of the female fish population may attain readiness for spawning, coined as pre-spawning fitness (K_spawn50), has been proposed in the present article and has been estimated by applying the non-parametric Kaplan-Meier method for fitting a survival function. A binary coding strategy of gonadal maturity stages was used to classify whether a female fish is "ready to spawn" or not. The proposed K_spawn50 has been generated for female Mystus tengara (1.13-1.21 units), M. cavasius (0.846-0.945 units), and Eutropiichthys vacha (0.716-0.799 units). Information on the range of egg parameters (fecundity, egg weight, egg diameter) expected at the pre-spawning stage was also generated. Additional information on the species-specific thermal and precipitation window (climate preferendum) within which K_spawn50 is attained was also generated through the LOESS smoothing technique. Water temperatures between 31 and 36 °C (M. tengara), 30 and 32 °C (M. cavasius), and 29.5 and 31 °C (E. vacha) and monthly rainfall between 200 and 325 mm (M. tengara), > 250 mm (M. cavasius), and around 50 mm and between 350 and 850 mm (E. vacha) were found to be optimum for attainment of K_spawn50. The importance of parameterization and benchmarking of K_spawn50 in addition to other conventional reproductive biology parameters has been discussed in the present article. The purposes of the present study were fulfilled by generating baseline information, and similar information may be generated for other species by replicating the innovative methodology used in this study.
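
    A hedged sketch of the estimation idea using the lifelines package: treat Fulton's condition factor K as the "time" axis and spawning readiness as the event, so the fitted median is an estimate of K_spawn50. The data below are synthetic, and the study's actual fitting details may differ:

        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(3)
        K = rng.normal(1.1, 0.15, 200)          # condition factors of sampled fish
        # probability of being "ready to spawn" rises steeply with K (synthetic)
        ready = (rng.random(200) < 1 / (1 + np.exp(-(K - 1.15) * 12))).astype(int)

        kmf = KaplanMeierFitter()
        kmf.fit(durations=K, event_observed=ready)
        # K at which the fitted "not yet ready" fraction crosses 50%
        print("estimated K_spawn50:", kmf.median_survival_time_)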

  16. Resistive switching near electrode interfaces: Estimations by a current model

    NASA Astrophysics Data System (ADS)

    Schroeder, Herbert; Zurhelle, Alexander; Stemmer, Stefanie; Marchewka, Astrid; Waser, Rainer

    2013-02-01

    The growing resistive switching database is accompanied by many detailed mechanisms which often are pure hypotheses. Some of these suggested models can be verified by checking their predictions against the benchmarks of future memory cells. The valence change memory model assumes that the different resistances in the ON and OFF states arise from changing the defect density profiles in a sheet near one working electrode during switching. The resulting different READ current densities in the ON and OFF states were calculated by using an appropriate simulation model with variation of several important defect and material parameters of the metal/insulator (oxide)/metal thin film stack, such as defect density and its profile change in density and thickness, height of the interface barrier, dielectric permittivity, and applied voltage. The results were compared to the benchmarks, and some memory windows of the varied parameters can be defined: The required ON state READ current density of 10^5 A/cm^2 can only be achieved for barriers smaller than 0.7 eV and defect densities larger than 3 × 10^20 cm^-3. The required current ratio between ON and OFF states of at least 10 requires a defect density reduction of approximately an order of magnitude in a sheet of several nanometers near the working electrode.

  17. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  18. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.F.; Kristal, J.; Thompson, G.

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  20. A Causal-Comparative Study of the Effects of Benchmark Assessments on Middle Grades Science Achievement Scores

    ERIC Educational Resources Information Center

    Galloway, Melissa Ritchie

    2016-01-01

    The purpose of this causal-comparative study was to test the theory of assessment that relates benchmark assessments to the Georgia middle grades science Criterion Referenced Competency Test (CRCT) percentages, controlling for schools that do not administer benchmark assessments versus schools that do administer benchmark assessments for all middle…

  1. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres.

    PubMed

    van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H

    2010-08-31

    Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three of the radiotherapy departments in case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals.

  2. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three of the radiotherapy departments in case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals. PMID:20807408

  3. A pre-crisis vs. crisis analysis of peripheral EU stock markets by means of wavelet transform and a nonlinear causality test

    NASA Astrophysics Data System (ADS)

    Polanco-Martínez, J. M.; Fernández-Macho, J.; Neumann, M. B.; Faria, S. H.

    2018-01-01

    This paper presents an analysis of EU peripheral (so-called PIIGS) stock market indices and the S&P Europe 350 index (SPEURO), as a European benchmark market, over the pre-crisis (2004-2007) and crisis (2008-2011) periods. We computed a rolling-window wavelet correlation for the market returns and applied a non-linear Granger causality test to the wavelet decomposition coefficients of these stock market returns. Our results show that the correlation is stronger during the crisis than during the pre-crisis period. The stock market indices from Portugal, Italy and Spain were more interconnected among themselves during the crisis than with the SPEURO. The stock market from Portugal is the most sensitive and vulnerable PIIGS member, whereas the stock market from Greece moved away from the European benchmark market from the onset of the 2008 financial crisis until 2011. The non-linear causality test indicates that in the first three wavelet scales (intraweek, weekly and fortnightly) the number of uni-directional and bi-directional causalities is greater during the crisis than in the pre-crisis period, because of financial contagion. Furthermore, the causality analysis shows that the direction of the Granger cause-effect for the pre-crisis and crisis periods is not invariant across the considered time scales, and that the causalities among the studied stock markets do not seem to have a preferential direction. These results are relevant for better understanding the behaviour of vulnerable stock markets, especially for investors and policymakers.
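
    To make the rolling-window idea concrete, the sketch below computes a rolling correlation between two synthetic log-return series with pandas. It is a simplified stand-in: the study applies the correlation per wavelet scale after decomposing the returns, a step omitted here, and the index names, window length and price model are hypothetical.

        # Rolling-window correlation between two synthetic index return series;
        # the study would apply this per wavelet scale after decomposition.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        prices = pd.DataFrame({
            "SPEURO": 100 * np.exp(np.cumsum(rng.normal(0, 0.010, 1000))),
            "PSI20":  100 * np.exp(np.cumsum(rng.normal(0, 0.012, 1000))),
        })
        returns = np.log(prices).diff().dropna()

        # 250-day (~one trading year) rolling window
        roll_corr = returns["SPEURO"].rolling(window=250).corr(returns["PSI20"])
        print(roll_corr.dropna().describe())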

  4. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  5. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on Deep Convolutional Networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
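
    A minimal sketch of the sliding-window variant, assuming a trained patch classifier is available (the classifier here is a stand-in stub, and the window, stride and threshold values are illustrative):

        # Slice-wise sliding-window lesion detection over one 2D CT slice:
        # a patch classifier is evaluated on a regular grid and positive
        # patches are collected as candidate lesion locations.
        import numpy as np

        def classify_patch(patch):
            """Stand-in for a trained CNN patch classifier; returns P(lesion)."""
            return float(patch.mean() > 0.5)   # hypothetical decision rule

        def sliding_window_detect(slice_2d, win=32, stride=16, thr=0.5):
            hits = []
            h, w = slice_2d.shape
            for y in range(0, h - win + 1, stride):
                for x in range(0, w - win + 1, stride):
                    p = classify_patch(slice_2d[y:y + win, x:x + win])
                    if p >= thr:
                        hits.append((y, x, p))
            return hits

        ct_slice = np.random.rand(128, 128)    # placeholder normalized slice
        print(len(sliding_window_detect(ct_slice)))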

  6. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open source software that requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  7. An Open-Source Standard T-Wave Alternans Detector for Benchmarking.

    PubMed

    Khaustov, A; Nemati, S; Clifford, Gd

    2008-09-14

    We describe an open source algorithm suite for T-Wave Alternans (TWA) detection and quantification. The software consists of Matlab implementations of the widely used Spectral Method and the Modified Moving Average (MMA) method, with libraries to read both WFDB and ASCII data under Windows and Linux. The software suite can run in both batch mode and with a provided graphical user interface to aid waveform exploration. Our software suite was calibrated using an open source TWA model, described in a partner paper [1] by Clifford and Sameni. For the PhysioNet/CinC Challenge 2008 we obtained a score of 0.881 for the Spectral Method and 0.400 for the MMA method. However, our objective was not to provide the best TWA detector, but rather a basis for detailed discussion of algorithms.
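
    For orientation, a compact version of the Spectral Method could look like the sketch below: beat-aligned ST-T segments are stacked, a power spectrum is taken across beats at each time offset, and the alternans power is read at 0.5 cycles/beat. The array shapes, noise band and injected amplitude are illustrative assumptions, not the suite's Matlab implementation.

        # Spectral Method sketch for T-Wave Alternans: FFT across beats,
        # alternans power at 0.5 cycles/beat, noise band just below it.
        import numpy as np

        def twa_spectral(beats):
            """beats: (n_beats, n_samples) matrix of aligned ST-T segments."""
            n = beats.shape[0]
            spectra = np.abs(np.fft.rfft(beats - beats.mean(axis=0), axis=0) / n) ** 2
            p_alt = spectra[n // 2]                          # 0.5 cycles/beat bin
            p_noise = spectra[int(0.33 * n):int(0.48 * n)].mean(axis=0)
            return np.sqrt(np.maximum(p_alt - p_noise, 0.0).mean())

        beats = np.random.randn(128, 60) * 10e-6             # ~10 uV noise floor
        beats[::2] += 25e-6                                  # 25 uV beat-to-beat swing
        print(f"TWA estimate ~ {twa_spectral(beats) * 1e6:.1f} uV")  # ~12.5 uV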

  8. Apparatus for the investigation of high-temperature, high-pressure gas-phase heterogeneous catalytic and photo-catalytic materials.

    PubMed

    Alvino, Jason F; Bennett, Trystan; Kler, Rantej; Hudson, Rohan J; Aupoil, Julien; Nann, Thomas; Golovko, Vladimir B; Andersson, Gunther G; Metha, Gregory F

    2017-05-01

    A high-temperature, high-pressure, pulsed-gas sampling and detection system has been developed for testing new catalytic and photocatalytic materials for the production of solar fuels. The reactor is fitted with a sapphire window to allow the irradiation of photocatalytic samples from a lamp or solar simulator light source. The reactor has a volume of only 3.80 ml allowing for the investigation of very small quantities of a catalytic material, down to 1 mg. The stainless steel construction allows the cell to be heated to 350 °C and can withstand pressures up to 27 bar, limited only by the sapphire window. High-pressure sampling is made possible by a computer controlled pulsed valve that delivers precise gas flow, enabling catalytic reactions to be monitored across a wide range of pressures. A residual gas analyser mass spectrometer forms a part of the detection system, which is able to provide a rapid, real-time analysis of the gas composition within the photocatalytic reaction chamber. This apparatus is ideal for investigating a number of industrially relevant reactions including photocatalytic water splitting and CO2 reduction. Initial catalytic results using Pt-doped and Ru nanoparticle-doped TiO2 as benchmark experiments are presented.

  9. Full Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications

    DTIC Science & Technology

    2015-09-15

    middleware implementations via a common object-oriented software hierarchy, with library-specific implementations of the five GMTI benchmark ... Full-Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications. Matthias Beebe, Matthew Alexander ... time performance, effective benchmarks are necessary to ensure that an ARP system can meet the mission constraints and performance requirements of

  10. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  11. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  12. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
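
    The correction itself is simple arithmetic; the sketch below illustrates it under stated assumptions (all quantities are invented, and the actual study benchmarks fish and feces fractions separately across repeated tests):

        # Benchmarking correction sketch: chemical recovered in fish is scaled
        # by the recovery of the absorbable benchmark (PCB53), chemical in
        # feces by the nonabsorbable benchmark (decabromodiphenyl ethane).

        def benchmarked_fractions(chem_fish, chem_feces, dose,
                                  pcb53_recovery, dbdpe_recovery):
            fish_corr = chem_fish / pcb53_recovery     # correct extraction loss
            feces_corr = chem_feces / dbdpe_recovery   # correct collection loss
            return fish_corr / dose, feces_corr / dose

        absorbed, egested = benchmarked_fractions(
            chem_fish=6.2, chem_feces=2.9, dose=10.0,  # ng, hypothetical
            pcb53_recovery=0.80, dbdpe_recovery=0.85)
        print(f"gross absorption ~ {absorbed:.0%}, egested ~ {egested:.0%}")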

  13. [The OPTIMISE study (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment). Results for Luxembourg].

    PubMed

    Michel, G

    2012-01-01

    The OPTIMISE study (NCT00681850) was run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators: glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol after 12 months of follow-up. In Luxembourg, more patients in the benchmarking group achieved the target for SBP (40.2% vs. 20%) and for LDL cholesterol (50.4% vs. 44.2%). 12.9% of patients in the benchmarking group met all three targets, compared with 8.3% in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.

  14. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  15. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  16. Rigorous Numerical Study of Low-Period Windows for the Quadratic Map

    NASA Astrophysics Data System (ADS)

    Galias, Zbigniew

    An efficient method to find all low-period windows for the quadratic map is proposed. The method is used to obtain very accurate rigorous bounds on the positions of all periodic windows with periods p ≤ 32. The contribution of period-doubling windows to the total width of periodic windows is discussed. Properties of periodic windows are studied numerically.
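
    A non-rigorous numerical analogue of the search is easy to write down: sample the parameter, iterate past a transient, and test the orbit for a low period. The sketch below does this for the logistic form x → rx(1 − x) of the quadratic map; the paper's method instead produces rigorous interval bounds, which this toy scan cannot.

        # Estimate the attracting period of the logistic map at sampled r;
        # None means chaotic or period above max_period (non-rigorous!).
        import numpy as np

        def attracting_period(r, max_period=32, transient=4000, tol=1e-9):
            x = 0.5
            for _ in range(transient):
                x = r * x * (1 - x)
            orbit = [x]
            for _ in range(max_period):
                x = r * x * (1 - x)
                orbit.append(x)
            for p in range(1, max_period + 1):
                if abs(orbit[p] - orbit[0]) < tol:
                    return p
            return None

        for r in np.arange(3.82, 3.86, 0.005):   # spans the period-3 window
            print(f"r = {r:.3f}: period {attracting_period(r)}")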

  17. Pilot case-control study of paediatric falls from windows.

    PubMed

    Johnston, Brian D; Quistberg, D Alexander; Shandro, Jamie R; Partridge, Rebecca L; Song, Hyun Rae; Ebel, Beth E

    2011-12-01

    Unintentional falls from windows are an important cause of paediatric morbidity. There have been no controlled studies to identify modifiable environmental risk factors for window falls in young children. The authors have piloted a case-control study to test procedures for case identification, subject enrolment, and environmental data collection. Case windows were identified when a child 0-9 years old presented for care after a fall from that window. Control windows were identified (1) from the child's home and (2) from the home of an age- and gender-matched child seeking care for an injury diagnosis not related to a window fall. Study staff visited enrolled homes to collect window measurements and conduct window screen performance tests. The authors enrolled and collected data on 18 case windows, 18 in-home controls, and 14 matched community controls. Six potential community controls were contacted for every one enrolled. Families who completed the home visit viewed study procedures positively. Case windows were more likely than community controls to be horizontal sliders (100% vs 50%), to have deeper sills (6.28 vs 4.31 inches), to be higher above the exterior surface (183 vs 82 inches), and to have screens that failed below a threshold derived from the static pressure of a 3-year-old leaning against the mesh (60.0% vs 16.7%). Case windows varied very little from in-home controls. Case-control methodology can be used to study risk factors for paediatric falls from windows. Recruitment of community controls is challenging but essential, because in-home controls tend to be over-matched on important variables. A home visit allows direct measurement of window type, height, sill depth, and screen performance. These variables should all be investigated in subsequent, larger studies covering major housing markets.

  18. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  19. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  20. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    PubMed Central

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking group than in the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking group (from 5.9% to 15.0%) than in the control group (from 2.7% to 8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating that there are still unmet needs in the management of T2DM. PMID:26445642

  1. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  2. Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing.

    ERIC Educational Resources Information Center

    Popp, Sharon E. Osborn; Ryan, Joseph M.; Thompson, Marilyn S.; Behrens, John T.

    The purposes of this study were to investigate the role of benchmark writing samples in direct assessment of writing and to examine the consequences of differential benchmark selection with a common writing rubric. The influences of discourse and grade level were also examined within the context of differential benchmark selection. Raters scored…

  3. VTOL shipboard letdown guidance system analysis

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Karmali, M. S.

    1983-01-01

    Alternative letdown guidance strategies are examined for landing a VTOL aircraft onboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is utilized to assess the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant rate of descent (CROD) letdown strategy serves as a benchmark for ranking the performance of the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions, called lulls and swells. A real-time algorithm for lull/swell classification based upon ship motion pattern features is developed. The classification algorithm is used to command a go/no-go signal to indicate the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.
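
    A heavily simplified version of the go/no-go logic might look like the following: a sliding-window RMS of a ship-motion signal is thresholded, and quiet windows are flagged as acceptable. The signal model, threshold, and window length are invented; the study's classifier used richer ship-motion pattern features.

        # Sliding-window RMS lull detector issuing a go/no-go flag.
        import numpy as np

        def go_no_go(motion, fs=10.0, window_s=20.0, rms_threshold=0.35):
            n = int(window_s * fs)
            rms = np.sqrt(np.convolve(motion ** 2, np.ones(n) / n, mode="valid"))
            return rms < rms_threshold          # True = acceptable landing window

        t = np.arange(0, 300, 0.1)
        heave = 0.3 * np.sin(0.6 * t) * (1 + 0.8 * np.sin(0.02 * t))  # lull/swell-like
        flags = go_no_go(heave)
        print(f"fraction of time flagged 'go': {flags.mean():.2f}")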

  4. MCNP4A: Features and philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.

    This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is "Quality, Value and New Features." Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs.

  5. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  6. Evaluation of Energy Efficiency Performance of Heated Windows

    NASA Astrophysics Data System (ADS)

    Jammulamadaka, Hari Swarup

    The study evaluating the performance of heated windows was funded by the WVU Research Office as a technical assistance award at the 2014 TransTech Energy Business Development Conference to the Green Heated Glass company/project owned by Frank Dlubak. The award supports a WVU researcher in conducting a project important for commercialization. This project was awarded to the WVU Industrial Assessment Center in 2015. The current study evaluated the performance of heated windows by developing an experimental setup to test a window at various temperatures while varying the current input to the window. The heated double-pane window was installed in an insulated box. A temperature gradient was developed across the window by cooling one side with gel-based ice packs. The other face of the window was heated by passing current at different wattages through the window. The temperatures of the inside and outside panes, the current and voltage input, and the room and box temperatures were recorded and used to calculate the apparent R-value of the window when not being heated versus when being heated. The study concluded that a heated double-pane window is more effective at reducing heat losses, by as much as 50%, than a non-heated double-pane window, provided the window temperature is maintained close to the room temperature. If the temperature of the window is much higher than the room temperature, the losses through the window appear to increase beyond those of a non-heated counterpart. The issues encountered during the current round of experiments are noted, and recommendations are provided for future studies.
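
    The apparent R-value comparison reduces to R = ΔT·A/Q. The sketch below illustrates the arithmetic with invented numbers chosen to mirror the reported ~50% loss reduction; these are not the study's measurements.

        # Back-of-envelope apparent R-value (SI units, m^2*K/W).
        def apparent_r_value(t_warm_c, t_cold_c, area_m2, heat_flow_w):
            return (t_warm_c - t_cold_c) * area_m2 / heat_flow_w

        # Unheated double pane: 20 C room, 0 C cold side, 1 m^2, ~60 W loss.
        r_unheated = apparent_r_value(20.0, 0.0, 1.0, 60.0)

        # Heated pane held near room temperature: the same gradient now draws
        # only ~30 W from the room itself; the rest is supplied electrically.
        r_heated = apparent_r_value(20.0, 0.0, 1.0, 30.0)

        print(f"unheated R ~ {r_unheated:.2f}, heated apparent R ~ {r_heated:.2f}")
        print(f"room-side loss reduction: {1 - 30.0 / 60.0:.0%}")   # ~50%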

  7. Metallization of Various Polymers by Cold Spray

    NASA Astrophysics Data System (ADS)

    Che, Hanqing; Chu, Xin; Vo, Phuong; Yue, Stephen

    2018-01-01

    Previous results have shown that metallic coatings can be successfully cold sprayed onto polymeric substrates. This paper studies the cold sprayability of various metal powders on different polymeric substrates. Five different substrates were used, including carbon fiber reinforced polymer (CFRP), acrylonitrile butadiene styrene (ABS), polyether ether ketone (PEEK) and polyetherimide (PEI); mild steel was also used as a benchmark substrate. The CFRP used in this work has a thermosetting matrix, and the ABS, PEEK and PEI are all thermoplastic polymers, with different glass transition temperatures as well as a number of distinct mechanical properties. Three metal powders, tin, copper and iron, were cold sprayed with both a low-pressure system and a high-pressure system at various conditions. In general, cold spray on the thermoplastic polymers rendered more positive results than on the thermosetting polymers, due to the local thermal softening mechanism in the thermoplastics. Thick copper coatings were successfully deposited on PEEK and PEI. Based on the results, a method is proposed to determine the feasibility and deposition window of cold spraying specific metal powder/polymeric substrate combinations.

  8. Benchmarking biology research organizations using a new, dedicated tool.

    PubMed

    van Harten, Willem H; van Bokhorst, Leonard; van Luenen, Henri G A M

    2010-02-01

    International competition forces fundamental research organizations to assess their relative performance. We present a benchmark tool for scientific research organizations where, contrary to existing models, the group leader is placed in a central position within the organization. We used it in a pilot benchmark study involving six research institutions. Our study shows that data collection and data comparison based on this new tool can be achieved. It proved possible to compare relative performance and organizational characteristics and to generate suggestions for improvement for most participants. However, strict definitions of the parameters used for the benchmark and a thorough insight into the organization of each of the benchmark partners is required to produce comparable data and draw firm conclusions.

  9. The sonic window: second generation results

    NASA Astrophysics Data System (ADS)

    Walker, William F.; Fuller, Michael I.; Brush, Edward V.; Eames, Matthew D. C.; Owen, Kevin; Ranganathan, Karthik; Blalock, Travis N.; Hossack, John A.

    2006-03-01

    Medical ultrasound imaging is widely used clinically because of its relatively low cost, portability, lack of ionizing radiation, and real-time nature. However, even with these advantages, ultrasound has failed to permeate the broad array of clinical applications where its use could be of value. A prime example of this untapped potential is the routine use of ultrasound to guide intravenous access. In this particular application, existing systems lack the portability, low cost, and ease of use required for widespread acceptance. Our team has been working for a number of years to develop an extremely low-cost, pocket-sized, and intuitive ultrasound imaging system that we refer to as the "Sonic Window." We have previously described the first-generation Sonic Window prototype, a bench-top device using a 1024-element, fully populated array operating at a center frequency of 3.3 MHz. Through a high degree of custom front-end integration combined with multiplexing down to a 2-channel PC-based digitizer, this system acquired a full set of RF data over the course of 512 transmit events. While initial results were encouraging, this system exhibited limitations resulting from low SNR, relatively coarse array sampling, and relatively slow data acquisition. We have recently begun assembling a second-generation Sonic Window system. This system uses a 3600-element fully sampled array operating at 5.0 MHz with a 300 micron element pitch. This system extends the integration of the first-generation system to include front-end protection, pre-amplification, a programmable bandpass filter, four sample-and-holds, and four A/D converters for all 3600 channels in a set of custom integrated circuits with a combined area smaller than the 1.8 x 1.8 cm footprint of the transducer array. We present initial results from this front-end and benchmark results from a software beamformer implemented on the Analog Devices BF-561 DSP. We discuss our immediate plans for further integration and testing. This second prototype represents a major reduction in size and forms the foundation of a fully functional, fully integrated, pocket-sized prototype.
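
    For context, the core of a software beamformer such as the one benchmarked on the BF-561 is a delay-and-sum loop. The sketch below shows a single-point version under simplified geometry (one unfocused transmit, a 1D array, no interpolation or apodization); all parameter values are illustrative.

        # Delay-and-sum beamforming of one image point from per-channel RF.
        import numpy as np

        C = 1540.0      # m/s, assumed speed of sound in tissue
        FS = 20e6       # Hz, assumed RF sampling rate

        def das_point(rf, elem_x, px, pz):
            """rf: (n_elements, n_samples); elem_x: element x positions (m)."""
            val = 0.0
            for ch in range(rf.shape[0]):
                d = pz + np.hypot(px - elem_x[ch], pz)   # transmit + receive path
                idx = int(round(d / C * FS))
                if idx < rf.shape[1]:
                    val += rf[ch, idx]
            return val

        elem_x = (np.arange(64) - 31.5) * 300e-6         # 64 elements, 300 um pitch
        rf = np.random.randn(64, 4096)                   # placeholder RF data
        print(das_point(rf, elem_x, px=0.0, pz=0.02))    # point 20 mm deep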

  10. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy when comparing fractions for fifth-graders in Taiwan. Twenty-six fifth-graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fraction…

  11. Experimental depth dose curves of a 67.5 MeV proton beam for benchmarking and validation of Monte Carlo simulation

    PubMed Central

    Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.

    2015-01-01

    Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with geant4 using topas. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocussed, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2–5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619

  12. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    Many benchmark experiments have been carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying the direct contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a benchmark experiment.

  13. 47 CFR 54.805 - Zone and study area above benchmark revenues calculated by the Administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Period Residential and Single-Line Business Lines times 12. If negative, the Zone Above Benchmark ...) multiplied by all eligible telecommunications carrier zone Base Period Multi-line Business Lines times 12. If ...

  14. Effects of sitting versus standing and scanner type on cashiers.

    PubMed

    Lehman, K R; Psihogios, J P; Meulenbroek, R G

    2001-06-10

    In the retail supermarket industry, where cashiers perform repetitive, light manual material-handling tasks when scanning and handling products, reports of musculoskeletal disorders and discomfort are high. Ergonomics tradeoffs exist between sitting and standing postures, which are further confounded by the checkstand design and point-of-sale technology, such as the scanner. A laboratory experiment was conducted to understand the effects of working position (sitting versus standing) and scanner type (bi-optic versus single window) on the muscle activity, upper limb and spinal posture, and subjective preference of cashiers. Ten cashiers from a Dutch retailer participated in the study. Cashiers exhibited lower muscle activity in the neck and shoulders when standing and using a bi-optic scanner. Shoulder abduction was also lower for standing conditions. In addition, all cashiers preferred using the bi-optic scanner, with mixed preferences for sitting (n = 6) and standing (n = 4). Static loading of the muscles was relatively high compared with benchmarks, suggesting that during the task of scanning, cashiers may not have adequate recovery time to prevent fatigue. It is recommended that retailers integrate bi-optic scanners into standing checkstands to minimize postural stress, fatigue and discomfort in cashiers.

  15. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed the 100-item WWC, which represents WELCOA's 7 performance benchmarks, as a tool to assess WHP. The setting was workplaces; the study includes a convenience sample of organizations that completed the checklist from 2008 to 2015, comprising 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies, mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent from year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  16. Oval Window Size and Shape: a Micro-CT Anatomical Study With Considerations for Stapes Surgery.

    PubMed

    Zdilla, Matthew J; Skrzat, Janusz; Kozerska, Magdalena; Leszczyński, Bartosz; Tarasiuk, Jacek; Wroński, Sebastian

    2018-06-01

    The oval window is an important structure with regard to stapes surgeries, including stapedotomy for the treatment of otosclerosis. Recent study of perioperative imaging of the oval window has revealed that oval window niche height can indicate both operative difficulty and subjective discomfort during otosclerosis surgery. With regard to shape, structures incorporated into the oval window niche, such as cartilage grafts, must be compatible with the shape of the oval window. Despite the clinical importance of the oval window, there is little information regarding its size and shape. This study assessed oval window size and shape via micro-computed tomography paired with modern morphometric methodology in the fetal, infant, child, and adult populations. Additionally, the study compared oval window size and shape between sexes and between left- and right-sided ears. No significant differences were found among traditional morphometric parameters among age groups, sides, or sexes. However, geometric morphometric methods revealed shape differences between age groups. Further, geometric morphometric methods provided the average oval window shape and most-likely shape variance. Beyond demonstrating oval window size and shape variation, the results of this report will aid in identifying patients among whom anatomical variation may contribute to surgical difficulty and surgeon discomfort, or otherwise warrant preoperative adaptations for the incorporation of materials into and around the oval window.

  17. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis

    PubMed Central

    2011-01-01

    Background A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. Results The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operation system. Although the processing speed was slower than that under other conditions, it remained comparable. Conclusions With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites. PMID:21266047

  18. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    PubMed

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operation system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  19. [Benchmarking of university trauma centers in Germany. Research and teaching].

    PubMed

    Gebhard, F; Raschke, M; Ruchholtz, S; Meffert, R; Marzi, I; Pohlemann, T; Südkamp, N; Josten, C; Zwipp, H

    2011-07-01

    Benchmarking is a very popular business process and is now used in research as well. The aim of the present study is to elucidate key figures of German university trauma departments regarding research and teaching. The data set is based upon the monthly reports provided by the administration of each university. The study shows that only well-known parameters such as fund-raising and impact factors can be used to benchmark university-based trauma centers. The German federal system does not allow a nationwide benchmarking.

  20. Utilizing Benchmarking to Study the Effectiveness of Parent-Child Interaction Therapy Implemented in a Community Setting

    ERIC Educational Resources Information Center

    Self-Brown, Shannon; Valente, Jessica R.; Wild, Robert C.; Whitaker, Daniel J.; Galanter, Rachel; Dorsey, Shannon; Stanley, Jenelle

    2012-01-01

    Benchmarking is a program evaluation approach that can be used to study whether the outcomes of parents/children who participate in an evidence-based program in the community approximate the outcomes found in randomized trials. This paper presents a case illustration using benchmarking methodology to examine a community implementation of…

  1. Computed Tomography Window Blending: Feasibility in Thoracic Trauma.

    PubMed

    Mandell, Jacob C; Wortman, Jeremy R; Rocha, Tatiana C; Folio, Les R; Andriole, Katherine P; Khurana, Bharti

    2018-02-07

    This study aims to demonstrate the feasibility of processing computed tomography (CT) images with a custom window blending algorithm that combines soft-tissue, bone, and lung window settings into a single image; to compare the time for interpretation of chest CT for thoracic trauma with window blending and conventional window settings; and to assess diagnostic performance of both techniques. Adobe Photoshop was scripted to process axial DICOM images from retrospective contrast-enhanced chest CTs performed for trauma with a window-blending algorithm. Two emergency radiologists independently interpreted the axial images from 103 chest CTs with both blended and conventional windows. Interpretation time and diagnostic performance were compared with Wilcoxon signed-rank test and McNemar test, respectively. Agreement with Nexus CT Chest injury severity was assessed with the weighted kappa statistic. A total of 13,295 images were processed without error. Interpretation was faster with window blending, resulting in a 20.3% time saving (P < .001), with no difference in diagnostic performance, within the power of the study to detect a difference in sensitivity of 5% as determined by post hoc power analysis. The sensitivity of the window-blended cases was 82.7%, compared to 81.6% for conventional windows. The specificity of the window-blended cases was 93.1%, compared to 90.5% for conventional windows. All injuries of major clinical significance (per Nexus CT Chest criteria) were correctly identified in all reading sessions, and all negative cases were correctly classified. All readers demonstrated near-perfect agreement with injury severity classification with both window settings. In this pilot study utilizing retrospective data, window blending allows faster preliminary interpretation of axial chest CT performed for trauma, with no significant difference in diagnostic performance compared to conventional window settings. Future studies would be required to assess the utility of window blending in clinical practice. Copyright © 2018 The Association of University Radiologists. All rights reserved.
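
    Each CT window setting is essentially a linear rescaling of Hounsfield units clipped to a display range, so the blending idea above can be illustrated in a few lines. The sketch below is illustrative only: the authors scripted Adobe Photoshop, and the window settings and blend weights used here are generic assumptions rather than those of the study.

      import numpy as np

      def apply_window(hu, width, level):
          # Linearly map Hounsfield units into [0, 1] for one window setting.
          lo, hi = level - width / 2.0, level + width / 2.0
          return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

      def blend_windows(hu, weights=(0.4, 0.3, 0.3)):
          # Combine soft-tissue, bone and lung windows into a single image.
          # Settings are typical display values, not those of the study.
          soft = apply_window(hu, width=400, level=50)
          bone = apply_window(hu, width=1800, level=400)
          lung = apply_window(hu, width=1500, level=-600)
          w_soft, w_bone, w_lung = weights
          return w_soft * soft + w_bone * bone + w_lung * lung

      # Example: a tiny synthetic "slice" in Hounsfield units
      slice_hu = np.array([[-800.0, 40.0], [300.0, 1200.0]])
      print(blend_windows(slice_hu))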

  2. Computerized tomography magnified bone windows are superior to standard soft tissue windows for accurate measurement of stone size: an in vitro and clinical study.

    PubMed

    Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V

    2009-04-01

    We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded into human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width 320, window level 50), standard bone windows (window width 1120, window level 300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and the paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows, with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification, were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm; p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi measurement error was observed only in standard soft tissue window settings. In vivo, 4.0x magnified bone windows were superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically different (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurement in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro the measurement error for calcium oxalate stones was greater than that for uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.

  3. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis, set in a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmarked our practice guidelines for the prevention of VTE in the ENT surgical population against ENT.UK guidelines to mitigate any gaps. The ENT.UK guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our performance meets or falls short of ENT.UK guidelines, with immediate corrective action to be taken if a quality gap existed between the two. ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required in providing a quality service to ENT surgical patients. Although often not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion among the quality improvement methods of healthcare services.

  4. A Methodology for Benchmarking Relational Database Machines,

    DTIC Science & Technology

    1984-01-01

    ...user benchmarks is to compare the multiple users to the best-case performance... The data for each query classification... and the performance... called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measurements... formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey...

  5. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  6. Context Switching with Multiple Register Windows: A RISC Performance Study

    NASA Technical Reports Server (NTRS)

    Konsek, Marion B.; Reed, Daniel A.; Watcharawittayakul, Wittaya

    1987-01-01

    Although previous studies have shown that a large file of overlapping register windows can greatly reduce procedure call/return overhead, the effects of register windows in a multiprogramming environment are poorly understood. This paper investigates the performance of multiprogrammed, reduced instruction set computers (RISCs) as a function of window management strategy. Using an analytic model that reflects context switch and procedure call overheads, we analyze the performance of simple, linearly self-recursive programs. For more complex programs, we present the results of a simulation study. These studies show that a simple strategy that saves all windows prior to a context switch, but restores only a single window following a context switch, performs near optimally.
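
    As a toy illustration of why the "save all, restore one" strategy performs well (this is an assumption-laden sketch, not the paper's analytic model): the dirty windows saved at a context switch must be written out regardless, but restoring lazily means a saved window is reloaded only if the resumed process actually returns into it.

      def switch_cost(depth, returns_before_next_switch, restore_all):
          # Windows transferred at one context switch in a toy model.
          #   depth: dirty register windows held by the descheduled process
          #   returns_before_next_switch: frames the resumed process unwinds
          #       before it is switched out again
          #   restore_all: eagerly reload every saved window, or just one
          saves = depth  # save-all: every dirty window is written to memory
          if restore_all:
              restores = depth  # eager: reload everything immediately
          else:
              # lazy: reload the top window now, then one fill per return
              # that actually descends into a saved window
              restores = 1 + min(returns_before_next_switch, depth - 1)
          return saves + restores

      # A process 6 windows deep that unwinds only 2 frames before the
      # next switch: lazy restoring avoids 3 of the 6 eager reloads.
      print(switch_cost(6, 2, restore_all=True))   # 12 transfers
      print(switch_cost(6, 2, restore_all=False))  # 9 transfers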

  7. A pipeline of spatio-temporal filtering for predicting the laterality of self-initiated fine movements from single trial readiness potentials.

    PubMed

    Zeid, Elias Abou; Sereshkeh, Alborz Rezazadeh; Chau, Tom

    2016-12-01

    In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
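
    A simplified sketch of the three pipeline stages (band-pass filtering, a Fisher-criterion spatial step, temporal window averaging) is given below. The filter band, the number of retained channels and windows, and the use of per-channel Fisher scoring in place of full Fisher-criterion spatial filtering are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def fisher_scores(X, y):
          # Per-channel Fisher criterion: between-class over within-class
          # variance of the trial-averaged amplitude.
          # X: trials x channels x samples, y: binary labels (0/1).
          m = X.mean(axis=2)
          m0, m1 = m[y == 0], m[y == 1]
          return (m0.mean(0) - m1.mean(0)) ** 2 / (m0.var(0) + m1.var(0) + 1e-12)

      def pstf_features(X, y, fs=256.0, band=(0.5, 4.0), n_channels=4, n_windows=5):
          # Band-pass filter, keep the most discriminative channels, then
          # average the signal within equal-length temporal windows.
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          Xf = filtfilt(b, a, X, axis=2)
          top = np.argsort(fisher_scores(Xf, y))[-n_channels:]
          chunks = np.array_split(Xf[:, top, :], n_windows, axis=2)
          return np.concatenate([c.mean(axis=2) for c in chunks], axis=1)

      # Synthetic example: 40 trials, 16 channels, 2 s at 256 Hz
      X = np.random.randn(40, 16, 512)
      y = np.repeat([0, 1], 20)
      print(pstf_features(X, y).shape)  # (40, n_channels * n_windows)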

  8. A pipeline of spatio-temporal filtering for predicting the laterality of self-initiated fine movements from single trial readiness potentials

    NASA Astrophysics Data System (ADS)

    Abou Zeid, Elias; Rezazadeh Sereshkeh, Alborz; Chau, Tom

    2016-12-01

    Objective. In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. Approach. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. Main results. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Significance. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.

  9. The impact of a scheduling change on ninth grade high school performance on biology benchmark exams and the California Standards Test

    NASA Astrophysics Data System (ADS)

    Leonardi, Marcelo

    The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution and on the California Standardized Test in Biology. The secondary purpose of this study was to examine ninth grade biology teacher perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively as aligned to research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standardized Test in Biology and conducting multivariate analyses of covariance and analyses of covariance to determine significant differences. Qualitative methods included journal-entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams. DNA and Evolution benchmark exams showed significant improvements from a change in scheduling format. The scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam as a result of the scheduling change. The scheduling change was responsible for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam. The scheduling change was responsible for 0.7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.

  10. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  11. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often carried out retrospectively, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  12. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. The design is a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. When benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing-risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
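
    The simulation logic can be sketched compactly. The example below is a minimal version assuming a simple binary-failure z-test with no competing risk; the margin, failure rates and cohort sizes are invented for illustration, not taken from the study.

      import numpy as np
      from scipy.stats import norm

      def ni_power(n, p_true=0.05, benchmark=0.05, margin=0.01,
                   alpha=0.05, n_sim=5000, seed=0):
          # Power of a one-sample z-test of H0: p >= benchmark + margin
          # against H1: p < benchmark + margin (i.e. non-inferior).
          rng = np.random.default_rng(seed)
          p0 = benchmark + margin
          p_hat = rng.binomial(n, p_true, size=n_sim) / n
          z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)
          return float(np.mean(z < norm.ppf(alpha)))

      for n in (800, 1600, 3200, 6400):
          print(n, round(ni_power(n), 3))

    With these invented inputs the power crosses 60% only between 1600 and 3200 procedures, qualitatively echoing the paper's conclusion about required sample sizes.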

  13. Benchmarking heart rate variability toolboxes.

    PubMed

    Vest, Adriana N; Li, Qiao; Liu, Chengyu; Nemati, Shamim; Shah, Amit; Clifford, Gari D

    Heart rate variability (HRV) metrics hold promise as potential indicators for autonomic function, prediction of adverse cardiovascular outcomes, psychophysiological status, and general wellness. Although the investigation of HRV has been prevalent for several decades, the methods used for preprocessing, windowing, and choosing appropriate parameters lack consensus among academic and clinical investigators. A comprehensive and open-source modular program is presented for calculating HRV implemented in Matlab with evidence-based algorithms and output formats. We compare our software with another widely used HRV toolbox written in C and available through PhysioNet.org. Our findings show substantially similar results when using high quality electrocardiograms (ECG) free from arrhythmias. Our software shows equivalent performance alongside an established predecessor and includes validated tools for performing preprocessing, signal quality, and arrhythmia detection to help provide standardization and repeatability in the field, leading to fewer errors in the presence of noise or arrhythmias. Copyright © 2017 Elsevier Inc. All rights reserved.
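
    For orientation, two of the standard time-domain metrics such toolboxes report are SDNN (standard deviation of NN intervals) and RMSSD (root mean square of successive differences). The sketch below is a bare-bones version that omits the preprocessing, signal-quality and arrhythmia handling the paper emphasizes.

      import numpy as np

      def sdnn(nn_ms):
          # Standard deviation of NN (normal-to-normal) intervals, in ms.
          return float(np.std(nn_ms, ddof=1))

      def rmssd(nn_ms):
          # Root mean square of successive NN-interval differences, in ms.
          diffs = np.diff(nn_ms)
          return float(np.sqrt(np.mean(diffs ** 2)))

      # Example: a short, clean NN-interval series (ms)
      nn = np.array([812, 790, 805, 831, 799, 820, 808])
      print(f"SDNN  = {sdnn(nn):.1f} ms")
      print(f"RMSSD = {rmssd(nn):.1f} ms")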

  14. Super Energy Efficiency Design (S.E.E.D.) Home Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    German, A.; Dakin, B.; Backman, C.

    This report describes the results of evaluation by the Alliance for Residential Building Innovation (ARBI) Building America team of the 'Super Energy Efficient Design' (S.E.E.D) home, a 1,935 sq. ft., single-story spec home located in Tucson, AZ. This prototype design was developed with the goal of providing an exceptionally energy efficient yet affordable home and includes numerous aggressive energy features intended to significantly reduce heating and cooling loads such as structural insulated panel (SIP) walls and roof, high performance windows, an ERV, an air-to-water heat pump with mixed-mode radiant and forced air delivery, solar water heating, and rooftop PV. Source energy savings are estimated at 45% over the Building America B10 Benchmark. System commissioning, short term testing, long term monitoring and detailed analysis of results was conducted to identify the performance attributes and cost effectiveness of the whole house measure package.

  16. A Competitive Benchmarking Study of Noncredit Program Administration.

    ERIC Educational Resources Information Center

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  17. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  18. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  19. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    PubMed

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.

  20. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients

    PubMed Central

    Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.

    2016-01-01

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA. PMID:27917911

  1. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
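
    The RIP itself is a one-line computation; a small sketch (with invented price and GDP figures) makes the affordability logic explicit.

      def relative_income_price(pack_price, gdp_per_capita):
          # Percentage of annual per-capita GDP needed to buy 100 packs.
          return 100.0 * pack_price / gdp_per_capita * 100.0

      # Hypothetical figures: pack price 30, per-capita GDP 60,000
      print(relative_income_price(30.0, 60000.0))  # 5.0 (% of GDP)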

  2. Theoretical vibro-acoustic modeling of acoustic noise transmission through aircraft windows

    NASA Astrophysics Data System (ADS)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-06-01

    In this paper, a fully vibro-acoustic model for sound transmission across a multi-pane aircraft window is developed. The proposed model is efficiently applied to a set of window models to perform extensive theoretical parametric studies. The studied window configurations generally simulate the passenger window designs of modern aircraft classes, which have an exterior multi-Plexiglas pane, an interior single acrylic glass pane and a dimmable glass ("smart" glass), all separated by thin air cavities. The sound transmission loss (STL) characteristics of three different models, triple-, quadruple- and quintuple-paned windows identical in size and surface density, are analyzed for improving the acoustic insulation performance. Typical results describing the influence of several system parameters, such as the thicknesses, number and spacing of the window panes, on the transmission loss are then investigated. In addition, a comparison study is carried out to evaluate the acoustic reduction capability of each window model. The STL results show that the higher-frequency sound transmission loss performance can be improved by increasing the number of window panes; however, the low-frequency performance decreases, particularly at the mass-spring resonances.
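
    For a rough point of reference, the simplest single-pane estimate is the normal-incidence mass law, STL = 20*log10(pi*f*m / (rho0*c0)), which the full multi-pane model refines by accounting for cavity coupling and resonances. The sketch below uses assumed pane properties, not the paper's configurations.

      import math

      RHO_C = 415.0  # characteristic impedance of air, Pa*s/m

      def mass_law_stl(f_hz, surface_density):
          # Normal-incidence mass-law transmission loss of one panel, in dB.
          return 20.0 * math.log10(math.pi * f_hz * surface_density / RHO_C)

      # An assumed 5 mm acrylic pane: ~1190 kg/m^3 -> ~5.95 kg/m^2
      m = 0.005 * 1190.0
      for f in (125, 500, 2000):
          print(f"{f:5d} Hz: {mass_law_stl(f, m):5.1f} dB")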

  3. Measured Rattle Threshold of Residential House Windows

    NASA Technical Reports Server (NTRS)

    Sizov, Natalia; Schultz, Troy; Hobbs, Christopher; Klos, Jacob

    2008-01-01

    Window rattle is a common indoor noise effect in houses exposed to low frequency noise from sources such as railroads, blast noise and sonic boom. Human perception of rattle can be negative, which is a motivating factor of the current research effort to study sonic-boom-induced window rattle. A rattle study has been conducted on residential houses containing windows of different construction at a variety of geographic locations within the United States. Windows in these houses were excited by a portable, high-powered loudspeaker and enclosure specifically designed to be mounted on the house exterior to cover an entire window. Window vibration was measured with accelerometers placed on different window components. Reference microphones were also placed inside the house and inside the loudspeaker box. Swept-sine excitation was used to identify the vibration threshold at which the response of the structure becomes non-linear and begins to rattle. Initial results from this study are presented and discussed. Future efforts will continue to explore rattle occurrence in windows of residential houses exposed to sonic booms.

  4. Unusually High Incidences of Staphylococcus aureus Infection within Studies of Ventilator Associated Pneumonia Prevention Using Topical Antibiotics: Benchmarking the Evidence Base

    PubMed Central

    2018-01-01

    Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), VAP overall and S. aureus bacteremia incidences within component (control and intervention) groups within 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark derived from 115 observational groups being 4.8% (95% CI; 4.2–5.6). In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark derived from 10 observational groups being 2.1% (95% CI; 1.1–4.1). The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than literature derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363

  5. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  6. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  7. Electric-Drive Vehicle Thermal Performance Benchmarking | Transportation

    Science.gov Websites

    NREL research studies characterize the thermal resistance and conductivity of various layers in electric-drive vehicle power electronics components, such as automotive inverters.

  8. Applied anatomy of round window and adjacent structures of tympanum related to cochlear implantation.

    PubMed

    Jain, Shraddha; Gaurkar, Sagar; Deshmukh, Prasad T; Khatri, Mohnish; Kalambe, Sanika; Lakhotia, Pooja; Chandravanshi, Deepshikha; Disawal, Ashish

    2018-04-19

    Various aspects of the round window anatomy and the anatomy of the posterior tympanum have relevant implications for designing cochlear implant electrodes and visualizing the round window through the facial recess. Preoperative information about possible anatomical variations of the round window and its relationships to the adjacent neurovascular structures can help reduce complications in cochlear implant surgery. The present study was undertaken to assess the common variations in round window anatomy and the relationships to structures of the tympanum that may be relevant for cochlear implant surgery. Thirty-five normal wet human cadaveric temporal bones were studied by dissection for the anatomy of the round window and its relation to the facial nerve, carotid canal, jugular fossa and other structures of the posterior tympanum. The dissected bones were photographed with an 18-megapixel digital camera, and the images were imported to a computer to determine various parameters using ScopyDoc software (version 8.0.0.22), after proper calibration and at 1× magnification. When the round window niche is placed posteriorly and inferiorly, the distance between the round window and the vertical facial nerve decreases, whereas that to the horizontal facial nerve increases. In such cases, the distance between the oval window and round window also increases. The maximum height of the round window in our study ranged from 0.51 to 1.27 mm (mean 0.69 ± 0.25 mm). The maximum width of the round window ranged from 0.51 to 2.04 mm (mean 1.16 ± 0.47 mm). The average minimum distance between the round window and the carotid canal was 3.71 ± 0.88 mm (range 2.79-5.34 mm) and that between the round window and the jugular fossa was 2.47 ± 0.9 mm (range 1.24-4.3 mm). The distances from the round window to the oval window and facial nerve are important parameters in identifying a difficult round window niche. Modification of the electrode may be a better option than drilling off the round window margins for insertion of cochlear implant electrodes. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  9. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Patterson, Brian F.; Wiley, Andrew; Mattern, Krista D.

    2012-01-01

    In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The…

  10. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.
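
    Most of the interval-based methods listed share one core mechanic: fit a PLS model on a contiguous block of wavelengths and keep the block with the lowest cross-validated error. The sketch below shows that core idea (plain iPLS over fixed-width intervals, with arbitrary interval and component counts) using scikit-learn; it is a simplified stand-in, not any of the cited implementations.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def ipls_best_interval(X, y, n_intervals=10, n_components=3):
          # Score each contiguous wavelength interval with cross-validated
          # PLS and return ((start, stop), RMSECV) for the best interval.
          edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
          best = None
          for lo, hi in zip(edges[:-1], edges[1:]):
              pls = PLSRegression(n_components=min(n_components, hi - lo))
              mse = -cross_val_score(pls, X[:, lo:hi], y, cv=5,
                                     scoring="neg_mean_squared_error").mean()
              rmsecv = float(np.sqrt(mse))
              if best is None or rmsecv < best[1]:
                  best = ((lo, hi), rmsecv)
          return best

      # Synthetic example: 60 samples x 200 "wavelengths", signal in 80-100
      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 200))
      y = X[:, 80:100].sum(axis=1) + 0.1 * rng.normal(size=60)
      print(ipls_best_interval(X, y))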

  11. Fourth near-infrared optical window for assessment of bone and other tissues

    NASA Astrophysics Data System (ADS)

    Sordillo, Diana C.; Sordillo, Laura A.; Sordillo, Peter P.; Alfano, Robert R.

    2016-02-01

    Recently, additional near-infrared (NIR) optical windows beyond the conventional first therapeutic window have been utilized for deep tissue imaging through scattering media. Biomedical applications using a second optical window (1100 to 1300 nm) and a third (1600 to 1870 nm) are emerging. A fourth window (2100 to 2300 nm) has been largely ignored due to high water absorption and a lack of high sensitivity imaging detectors and ultrafast laser sources. In this study, optical properties of bone in this fourth NIR optical window were investigated. Results were compared to those seen at the first, second and third windows, and are consistent with our previous work on malignant and benign breast and prostate tissues. Bone and malignant tissues showed highest uptake in the third and fourth windows. As collagen is a major chromophore with prominent spectral peaks between 2100 and 2300 nm, it may be that the fourth optical window is particularly useful for studying tissues with a higher collagen content, such as bone or malignant tumors.

  12. Sound transmission loss of windows on high speed trains

    NASA Astrophysics Data System (ADS)

    Zhang, Yumei; Xiao, Xinbiao; Thompson, David; Squicciarini, Giacomo; Wen, Zefeng; Li, Zhihui; Wu, Yue

    2016-09-01

    The window is one of the main components of the high speed train car body structure through which noise can be transmitted. To study the windows' acoustic properties, the vibration of one window of a high speed train was measured at a running speed of 250 km/h. The corresponding interior noise and the noise in the wheel-rail area were measured simultaneously. The experimental results show that the window vibration velocity has a similar spectral shape to the interior noise. Interior noise source identification further indicates that the window makes a contribution to the interior noise. Improvement of the window's Sound Transmission Loss (STL) can reduce the interior noise from this transmission path. An STL model of the window is built based on wave propagation and modal superposition methods. Using the theoretical results, the window's STL behaviour is studied and several factors affecting it are investigated, providing guidance for the future low-noise design of high speed train windows.

  13. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc and MD Nastran. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc and MD Nastran was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  14. Can data-driven benchmarks be used to set the goals of healthy people 2010?

    PubMed Central

    Allison, J; Kiefe, C I; Weissman, N W

    1999-01-01

    OBJECTIVES: Expert panels determined the public health goals of Healthy People 2000 subjectively. The present study examined whether data-driven benchmarks provide a better alternative. METHODS: We developed the "pared-mean" method to define from data the best achievable health care practices. We calculated the pared-mean benchmark for screening mammography from the 1994 National Health Interview Survey, using the metropolitan statistical area as the "provider" unit. Beginning with the best-performing provider and adding providers in descending sequence, we established the minimum provider subset that included at least 10% of all women surveyed on this question. The pared-mean benchmark is then the proportion of women in this subset who received mammography. RESULTS: The pared-mean benchmark for screening mammography was 71%, compared with the Healthy People 2000 goal of 60%. CONCLUSIONS: For Healthy People 2010, benchmarks derived from data reflecting the best available care provide viable alternatives to consensus-derived targets. We are currently pursuing additional refinements to the data-driven pared-mean benchmark approach. PMID:9987466
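
    The pared-mean computation lends itself to a short sketch. The provider data below are hypothetical; the 10% inclusion cutoff follows the paper.

      def pared_mean_benchmark(providers, min_fraction=0.10):
          # providers: list of (n_screened, n_eligible) per provider unit.
          # Rank by screening rate, take best providers until at least 10%
          # of all eligible women are included, then return the pooled rate.
          total = sum(n for _, n in providers)
          ranked = sorted(providers, key=lambda p: p[0] / p[1], reverse=True)
          num = den = 0
          for screened, eligible in ranked:
              num += screened
              den += eligible
              if den >= min_fraction * total:
                  break
          return num / den

      # Hypothetical metropolitan-area data: (women screened, women surveyed)
      data = [(72, 100), (65, 100), (81, 110), (40, 100), (55, 120)]
      print(f"pared-mean benchmark = {pared_mean_benchmark(data):.2f}")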

  15. Issues in Institutional Benchmarking of Student Learning Outcomes Using Case Examples

    ERIC Educational Resources Information Center

    Judd, Thomas P.; Pondish, Christopher; Secolsky, Charles

    2013-01-01

    Benchmarking is a process that can take place at both the inter-institutional and intra-institutional level. This paper focuses on benchmarking intra-institutional student learning outcomes using case examples. The findings of the study illustrate the point that when the outcomes statements associated with the mission of the institution are…

  16. Benchmarking in TESOL: A Study of the Malaysia Education Blueprint 2013

    ERIC Educational Resources Information Center

    Jawaid, Arif

    2014-01-01

    Benchmarking is a very common real-life function occurring every moment unnoticed. It has travelled from industry to education like other quality disciplines. Initially, benchmarking was used in higher education. Now it is diffusing into other areas including TESOL (Teaching English to Speakers of Other Languages), which has yet to devise a…

  17. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  18. Benchmark Factors in Student Retention.

    ERIC Educational Resources Information Center

    Waggener, Anna T.; Smith, Constance K.

    The first purpose of this study was to identify significant factors affecting the first benchmark in retaining students in college--the decision to enroll in the first fall semester after orientation. The second purpose was to examine enrollment decisions at the second benchmark--the decision to re-enroll in the second fall semester after freshman…

  19. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practices, processes can be optimized and made more successful, mainly because efficiency and competitiveness increase. This paper focuses on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that aim to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  20. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    PubMed

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.

  1. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an IHF used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.

  2. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
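
    The benchmarking regression can be sketched with the generalized estimating equations implementation in statsmodels. The data layout and variable names below are assumptions for illustration, not the paper's dataset.

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical group-level data: AAVAP cases out of n patients at risk
      tbl = pd.DataFrame({
          "study_id":   [1, 1, 2, 2, 3, 4],
          "events":     [3, 2, 5, 4, 1, 2],
          "n":          [60, 55, 80, 78, 90, 70],
          "trauma_icu": [0, 0, 1, 1, 0, 0],
          "poly_ctrl":  [1, 0, 1, 0, 0, 0],
          "poly_intv":  [0, 1, 0, 1, 0, 0],
      })

      # Expand to one row per patient so the GEE can cluster on study
      long = tbl.loc[tbl.index.repeat(tbl["n"])].copy()
      long["y"] = (long.groupby(level=0).cumcount() < long["events"]).astype(int)

      model = smf.gee("y ~ trauma_icu + poly_ctrl + poly_intv",
                      groups="study_id", data=long,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())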

  3. Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.

    PubMed

    Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana

    2018-05-01

    Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB-approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with a biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), and percentage of minimal cancers and axillary node-negative cancers, and compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks between these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) Atlas, 5th edition. AIR and CDR were lower for screening indications than for diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits, while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR between screening and diagnostic indications.
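
    The audit metrics named above are simple ratios; a hedged arithmetic sketch follows (the counts are invented, and exact BI-RADS denominator conventions vary by audit):

        # Audit-metric arithmetic for a breast MRI practice; counts are
        # placeholders, not figures from the study.
        total_exams    = 1563   # e.g., screening MRIs
        abnormal_exams = 180    # exams with an abnormal interpretation
        biopsies_rec   = 120    # exams with biopsy recommended (basis of PPV2)
        cancers        = 22     # malignancies among biopsy recommendations

        air  = abnormal_exams / total_exams    # abnormal interpretation rate
        ppv2 = cancers / biopsies_rec          # PPV given biopsy recommended
        cdr  = 1000 * cancers / total_exams    # cancer detection rate per 1000

        print(f"AIR {air:.1%}  PPV2 {ppv2:.1%}  CDR {cdr:.1f}/1000")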

  4. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
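
    A minimal formalization of the benchmarking idea (our gloss, assuming first-order transformation kinetics and a common water residence time; not the paper's exact derivation): normalizing the test chemical i by the persistent benchmark chemical b cancels the dilution and hydraulic terms, leaving

        \[ \frac{(C_i/C_b)_{\mathrm{out}}}{(C_i/C_b)_{\mathrm{in}}} = e^{-k_i \tau}, \qquad t_{1/2,i} = \frac{\ln 2}{k_i}, \]

    so a half-life follows from two measured concentration ratios, without quantifying the chemical mass flows themselves.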

  5. VarDetect: a nucleotide sequence variation exploratory tool

    PubMed Central

    Ngamphiw, Chumpol; Kulawonganunchai, Supasak; Assawamakin, Anunchai; Jenwitheesuk, Ekachai; Tongsima, Sissades

    2008-01-01

    Background Single nucleotide polymorphisms (SNPs) are the most commonly studied units of genetic variation. The discovery of such variation may help to identify causative gene mutations in monogenic diseases and SNPs associated with predisposing genes in complex diseases. Accurate detection of SNPs requires software that can correctly interpret chromatogram signals to nucleotides. Results We present VarDetect, a stand-alone nucleotide variation exploratory tool that automatically detects nucleotide variation from fluorescence based chromatogram traces. Accurate SNP base-calling is achieved using pre-calculated peak content ratios, and is enhanced by rules which account for common sequence reading artifacts. The proposed software tool is benchmarked against four other well-known SNP discovery software tools (PolyPhred, novoSNP, Genalys and Mutation Surveyor) using fluorescence based chromatograms from 15 human genes. These chromatograms were obtained from sequencing 16 two-pooled DNA samples; a total of 32 individual DNA samples. In this comparison of automatic SNP detection tools, VarDetect achieved the highest detection efficiency. Availability VarDetect is compatible with most major operating systems such as Microsoft Windows, Linux, and Mac OSX. The current version of VarDetect is freely available at . PMID:19091032
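
    The peak-content-ratio idea can be illustrated with a toy caller; the threshold and logic below are illustrative assumptions, not VarDetect's published rules:

        # Toy chromatogram base caller: call a position heterozygous when the
        # secondary peak is a substantial fraction of the primary peak.
        def call_base(peak_heights, het_ratio=0.35):
            """peak_heights: dict base -> fluorescence peak height."""
            ranked = sorted(peak_heights.items(), key=lambda kv: kv[1],
                            reverse=True)
            (b1, h1), (b2, h2) = ranked[0], ranked[1]
            if h1 == 0:
                return "N"                         # no signal at all
            if h2 / h1 >= het_ratio:               # two real peaks
                return "".join(sorted(b1 + b2))    # heterozygote, e.g. "AG"
            return b1                              # single clean peak

        print(call_base({"A": 900, "C": 40, "G": 380, "T": 25}))  # -> "AG"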

  6. An iterated local search algorithm for the team orienteering problem with variable profits

    NASA Astrophysics Data System (ADS)

    Gunawan, Aldy; Ng, Kien Ming; Kendall, Graham; Lai, Junhan

    2018-07-01

    The orienteering problem (OP) is a routing problem that has numerous applications in various domains such as logistics and tourism. The objective is to determine a subset of vertices to visit for a vehicle so that the total collected score is maximized and a given time budget is not exceeded. The extensive application of the OP has led to many different variants, including the team orienteering problem (TOP) and the team orienteering problem with time windows. The TOP extends the OP by considering multiple vehicles. In this article, the team orienteering problem with variable profits (TOPVP) is studied. The main characteristic of the TOPVP is that the amount of score collected from a visited vertex depends on the duration of stay on that vertex. A mathematical programming model for the TOPVP is first presented and an algorithm based on iterated local search (ILS) that is able to solve modified benchmark instances is then proposed. It is concluded that ILS produces solutions which are comparable to those obtained by the commercial solver CPLEX for smaller instances. For the larger instances, ILS obtains good-quality solutions that have significantly better objective value than those found by CPLEX under reasonable computational times.
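
    As a sketch of the metaheuristic (a generic ILS loop under assumed operator interfaces, not the authors' TOPVP-specific implementation):

        # Skeleton of an iterated local search (ILS). `perturb` is expected to
        # use randomness; `feasible` enforces the time budget; `score` is the
        # total collected (duration-dependent) profit to maximize.
        import random

        def iterated_local_search(initial, score, local_search, perturb,
                                  feasible, iterations=1000, seed=0):
            random.seed(seed)
            best = current = local_search(initial)
            for _ in range(iterations):
                candidate = local_search(perturb(current))  # kick, re-optimize
                if not feasible(candidate):                 # budget exceeded
                    current = best                          # restart at best
                    continue
                if score(candidate) >= score(current):
                    current = candidate                     # accept if no worse
                if score(candidate) > score(best):
                    best = candidate
            return best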

  7. Low-voltage high-speed programming gate-all-around floating gate memory cell with tunnel barrier engineering

    NASA Astrophysics Data System (ADS)

    Hamzah, Afiq; Ezaila Alias, N.; Ismail, Razali

    2018-06-01

    The aim of this study is to investigate the memory performance of a gate-all-around floating gate (GAA-FG) memory cell implementing the engineered tunnel barrier concept of variable oxide thickness (VARIOT), pairing low-k SiO2 with several high-k dielectrics (i.e., Si3N4, Al2O3, HfO2, and ZrO2), using the three-dimensional (3D) simulator Silvaco ATLAS. The simulation work initially determines the optimized thickness of the low-k/high-k barrier stack and extracts its Fowler–Nordheim (FN) coefficients. Based on the optimized parameters, the device performance of the GAA-FG for fast program operation and data retention is assessed against benchmarks set by 6 nm and 8 nm SiO2 tunnel layers, respectively. The programming speed is improved, and a wide memory window, a 30% increase over conventional SiO2, is obtained using a SiO2/Al2O3 tunnel layer owing to its thin low-k dielectric thickness. Furthermore, given its high band edges, only 1% charge loss is expected after 10 years of -3.6/3.6 V gate stress.
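
    For context, the FN coefficients mentioned above parameterize the standard Fowler–Nordheim tunneling current density (a textbook relation, not a formula quoted from the paper):

        \[ J_{\mathrm{FN}} = A\,E_{\mathrm{ox}}^{2}\,\exp\!\left(-\frac{B}{E_{\mathrm{ox}}}\right), \]

    where E_ox is the electric field across the tunnel dielectric and the extracted coefficients A and B encode the barrier height and carrier effective mass of each low-k/high-k stack.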

  8. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained.

  9. A benchmarking program to reduce red blood cell outdating: implementation, evaluation, and a conceptual framework.

    PubMed

    Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M

    2015-07-01

    Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program.

  10. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response rate did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  11. Learning from Follow Up Surveys of Graduates: The Austin Teacher Program and the Benchmark Project. A Discussion Paper.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes Austin College's (Texas) participation in the Benchmark Project, a collaborative followup study of teacher education graduates and their principals, focusing on the second round of data collection. The Benchmark Project was a collaboration of 11 teacher preparation programs that gathered and analyzed data comparing graduates…

  12. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    ERIC Educational Resources Information Center

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  13. Teachers' Perceptions of the Effectiveness of Benchmark Assessment Data to Predict Student Math Grades

    ERIC Educational Resources Information Center

    Lewis, Lawanna M.

    2010-01-01

    The purpose of this correlational quantitative study was to examine the extent to which teachers perceive the use of benchmark assessment data as effective; the extent to which the time spent teaching mathematics is associated with students' mathematics grades, and the extent to which the results of math benchmark assessment influence teachers'…

  14. Mean-Eddy-Turbulence Interaction through Canonical Transfer Analysis: Theory and Application to the Kuroshio Extension Energetics Study

    NASA Astrophysics Data System (ADS)

    Liang, X. S.

    2016-02-01

    Central to the processes of mean-eddy-turbulence interaction, e.g., mesoscale eddy shedding and relaminarization, is the transfer of energy among different scales. The existing classical transfers, however, do not take into account energy conservation and are therefore not faithful representations of the real interaction processes, which are fundamentally a redistribution of energy among scales. Based on a new analysis machinery, namely the multiscale window transform (Liang and Anderson, 2007), we obtain a formula for this important process in which energy conservation is naturally embedded. The formula has a form reminiscent of the Poisson bracket in Hamiltonian dynamics. It has been validated with many benchmark processes and, in particular, has been applied with success to control the eddy shedding behind a bluff body. Presented here is an application study of the instabilities and mean-eddy interactions in the Kuroshio Extension (KE) region. Generally, it is found that the unstable KE jet fuels the mesoscale eddies, but in the offshore eddy-decaying region the cause-effect relation reverses: it is the latter that drives the former. On the whole, the eddies act to decelerate the jet upstream while accelerating it downstream.

  15. Mechanically durable carbon nanotube-composite hierarchical structures with superhydrophobicity, self-cleaning, and low-drag.

    PubMed

    Jung, Yong Chae; Bhushan, Bharat

    2009-12-22

    Superhydrophobic surfaces with high contact angle and low contact angle hysteresis exhibit a self-cleaning effect and low drag for fluid flow. The lotus (Nelumbo nucifera) leaf is one example found in nature of a superhydrophobic surface. For the development of superhydrophobic surfaces, which are important for various applications such as glass windows, solar panels, and microchannels, materials and fabrication methods need to be explored that provide mechanically durable surfaces, and durability studies need to be performed on these surfaces. Carbon nanotube (CNT) composite structures, which would lead to superhydrophobicity, self-cleaning, and low drag, were prepared using a spray method. As a benchmark, structured surfaces with lotus wax were also prepared to compare with the durability of the CNT composite structures. To compare the durability of the various fabricated surfaces, waterfall/jet tests were conducted to determine the loss of superhydrophobicity under changing flow time and pressure conditions. Wear and friction studies were also performed using an atomic force microscope (AFM) and a ball-on-flat tribometer. The changes in the morphology of the structured surfaces were examined by AFM and optical imaging. We find that superhydrophobic CNT composite structures showed good mechanical durability, superior to the structured surfaces with lotus wax, and may be suitable for real-world applications.

  16. An extension to artifact-free projection overlaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jianyu, E-mail: jianyulin@hotmail.com

    2015-05-15

    Purpose: In multipinhole single photon emission computed tomography, the overlapping of projections has been used to increase sensitivity. Avoiding artifacts in the reconstructed image associated with projection overlaps (multiplexing) is a critical issue. In our previous report, two types of artifact-free projection overlaps, i.e., projection overlaps that do not lead to artifacts in the reconstructed image, were formally defined and proved, and were validated via simulations. In this work, a new proposition is introduced to extend the previously defined type-II artifact-free projection overlaps so that a broader range of artifact-free overlaps is accommodated. One practical purpose of the new extension is to design a baffle window multipinhole system with artifact-free projection overlaps. Methods: First, the extended type-II artifact-free overlap was theoretically defined and proved. The new proposition accommodates the situation where the extended type-II artifact-free projection overlaps can be produced with incorrectly reconstructed portions in the reconstructed image. Next, to validate the theory, the extended type-II artifact-free overlaps were employed in designing multiplexing multipinhole spiral orbit imaging systems with a baffle window. Numerical validations were performed via simulations, where the corresponding 1-pinhole nonmultiplexing reconstruction results were used as the benchmark for artifact-free reconstructions. The mean square error (MSE) was the metric used for comparisons of noise-free reconstructed images. Noisy reconstructions were also performed as part of the validations. Results: Simulation results show that for noise-free reconstructions, the MSEs of the reconstructed images of the artifact-free multiplexing systems are very similar to those of the corresponding 1-pinhole systems. No artifacts were observed in the reconstructed images. Therefore, the testing results for artifact-free multiplexing systems designed using the extended type-II artifact-free overlaps numerically validated the developed theory. Conclusions: First, the extension itself is of theoretical importance because it broadens the selection range for optimizing multiplexing multipinhole designs. Second, the extension has an immediate application: using a baffle window to design a special spiral orbit multipinhole imaging system with projection overlaps in the orbit axial direction. Such an artifact-free baffle window design makes it possible to image any axial portion of interest of a long object with projection overlaps to increase sensitivity.
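
    The comparison metric is plain mean square error between a reconstruction and a reference image; a minimal sketch (the arrays are placeholders for reconstructed volumes):

        # MSE of each reconstruction against a common reference (e.g., the
        # phantom); similar MSEs for multiplexed and 1-pinhole systems are
        # the study's numerical evidence of artifact-free overlaps.
        import numpy as np

        def mse(recon, reference):
            recon = np.asarray(recon, dtype=float)
            reference = np.asarray(reference, dtype=float)
            return np.mean((recon - reference) ** 2)

        multiplexed = np.array([[0.9, 1.1], [1.0, 1.2]])
        reference   = np.array([[1.0, 1.0], [1.0, 1.0]])
        print(f"MSE = {mse(multiplexed, reference):.4f}")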

  17. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  18. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    PubMed

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training, to reduce the rate at which new cases of harm or disorder develop among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability, and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support was given to indicators such as (1) comprehensibility and (2) concrete action guidance for handling problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide a basis for developing benchmark criteria for staff training in responsible gambling.

  19. COMPETITIVE BIDDING IN MEDICARE ADVANTAGE: EFFECT OF BENCHMARK CHANGES ON PLAN BIDS

    PubMed Central

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E.

    2013-01-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006–2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power. PMID:24308881
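
    The instrumental-variable estimate can be sketched as textbook two-stage least squares; everything below (instrument, data, noise) is simulated for illustration and is not the paper's dataset or specification. Note that this two-step shortcut recovers the point estimate but not corrected standard errors:

        # Hedged 2SLS sketch of benchmark-to-bid pass-through.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        instrument = rng.normal(size=n)        # e.g., formula-driven benchmark shift
        benchmark = 700 + 25 * instrument + rng.normal(scale=5, size=n)
        bid = 100 + 0.53 * benchmark + rng.normal(scale=10, size=n)

        # Stage 1: predict the (possibly endogenous) benchmark from the instrument.
        stage1 = sm.OLS(benchmark, sm.add_constant(instrument)).fit()
        benchmark_hat = stage1.fittedvalues

        # Stage 2: regress bids on the predicted benchmark.
        stage2 = sm.OLS(bid, sm.add_constant(benchmark_hat)).fit()
        print(f"estimated pass-through: {stage2.params[1]:.2f}  (true 0.53)")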

  20. Competitive bidding in Medicare Advantage: effect of benchmark changes on plan bids.

    PubMed

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E

    2013-12-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006-2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power.

  1. Groundwater-quality data in the North San Francisco Bay Shallow Aquifer study unit, 2012: results from the California GAMA Program

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.

    2014-01-01

    Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in 13 grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in two grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in two grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 15 grid wells, and concentrations in 4 of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.
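
    The benchmark comparisons in this record reduce to threshold checks; a minimal sketch using the SMCL-CA values quoted above (the well measurements are invented):

        # Benchmark-exceedance check against SMCL-CA recommended/upper tiers.
        SMCL_CA = {
            "iron_ug_per_L":     {"recommended": 300},
            "chloride_mg_per_L": {"recommended": 250},
            "sulfate_mg_per_L":  {"recommended": 250, "upper": 500},
            "tds_mg_per_L":      {"recommended": 500, "upper": 1000},
        }

        well = {"iron_ug_per_L": 420, "sulfate_mg_per_L": 610,
                "tds_mg_per_L": 480}

        for constituent, value in well.items():
            for tier, limit in SMCL_CA[constituent].items():
                if value > limit:
                    print(f"{constituent}: {value} exceeds {tier} "
                          f"benchmark of {limit}")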

  2. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system, and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have the most utility for EFAS, avoiding the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark, and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
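
    The skill comparison described here is commonly summarized as a benchmark-referenced skill score; a sketch of that computation (using the third-party properscoring package as an assumed tool and made-up ensembles, not the EFAS code):

        # CRPS-based skill: skill = 1 - CRPS_system / CRPS_benchmark,
        # positive when the forecasting system beats the benchmark.
        import numpy as np
        import properscoring as ps

        obs = np.array([3.2, 4.1, 2.8])                 # observed discharge proxy
        system_ens = np.array([[3.0, 3.3, 3.5],         # ensemble members per step
                               [3.9, 4.2, 4.4],
                               [2.5, 2.9, 3.1]])
        benchmark_ens = np.array([[2.0, 3.0, 4.0],      # e.g., persistency-driven
                                  [3.0, 4.0, 5.0],
                                  [2.0, 3.0, 4.0]])

        crps_sys = ps.crps_ensemble(obs, system_ens).mean()
        crps_ref = ps.crps_ensemble(obs, benchmark_ens).mean()
        print(f"CRPSS = {1 - crps_sys / crps_ref:.2f}")  # >0 beats the benchmark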

  3. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  4. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.

  5. A Numerical Study of the Thermal Characteristics of an Air Cavity Formed by Window Sashes in a Double Window

    NASA Astrophysics Data System (ADS)

    Kang, Jae-sik; Oh, Eun-Joo; Bae, Min-Jung; Song, Doo-Sam

    2017-12-01

    Given that the Korean government is implementing the energy standards and labelling program for windows, window companies are required to assign window ratings based on experimental results for their products. Because this adds to the cost and time of laboratory testing, a simulation system for the thermal performance of windows has been prepared to ease those burdens. In Korea, the thermal performance of a window is usually calculated with the WINDOW/THERM simulator, complying with ISO 15099. For a single window, the simulation results are similar to experimental results. A double window can be calculated using the same method, but the results are unreliable: ISO 15099 does not provide a recommended method for calculating the thermal properties of the air cavity formed between window sashes in a double window, which causes the discrepancy between simulated and experimental thermal performance for this window type. In this paper, the thermal properties of air cavities between window sashes in a double window are analyzed through computational fluid dynamics (CFD) simulations, and the results are compared with calculations per ISO 15099. The air-cavity surface temperatures obtained from CFD are also compared with experimental temperatures. The results show that an appropriate calculation method for the air cavity between window sashes should be established to obtain reliable thermal performance results for a double window.
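
    As background (a textbook series-resistance relation, not a result from the paper), the overall thermal transmittance of a glazing assembly treats the inter-sash air cavity as one resistance in series:

        \[ U = \left( \frac{1}{h_i} + \sum_k R_k + R_{\mathrm{cav}} + \frac{1}{h_o} \right)^{-1}, \]

    where h_i and h_o are the interior and exterior surface film coefficients, R_k are the resistances of the glazing layers, and R_cav is the effective resistance of the air cavity between the window sashes, precisely the term for which the study finds no reliable ISO 15099 calculation method.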

  6. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and whose inclusion would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach to promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.

  7. Is Latency to Test Deadline a Predictor of Student Test Performance?

    ERIC Educational Resources Information Center

    Landrum, R. Eric; Gurung, Regan A. R.

    2013-01-01

    When students are given a period or window of time to take an exam, is taking an exam earlier in the window (high latency to deadline) related to test scores? In Study 1, students (n = 236) were given windows of time to take online each of 13 quizzes and 4 exams. In Study 2, students (n = 251) similarly took 4 exams online within a test window. In…

  8. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    ERIC Educational Resources Information Center

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  9. Student Satisfaction Surveys: The Value in Taking an Historical Perspective

    ERIC Educational Resources Information Center

    Kane, David; Williams, James; Cappuccini-Ansfield, Gillian

    2008-01-01

    Benchmarking satisfaction over time can be extremely valuable where a consistent feedback cycle is employed. However, the value of benchmarking over a long period of time has not been analysed in depth. What is the value of benchmarking this type of data over time? What does it tell us about a feedback and action cycle? What impact does a study of…

  10. Benchmark study on glyphosate-resistant crop systems in the United States. Part 2: Perspectives.

    PubMed

    Owen, Micheal D K; Young, Bryan G; Shaw, David R; Wilson, Robert G; Jordan, David L; Dixon, Philip M; Weller, Stephen C

    2011-07-01

    A six-state, 5 year field project was initiated in 2006 to study weed management methods that foster the sustainability of genetically engineered (GE) glyphosate-resistant (GR) crop systems. The benchmark study field-scale experiments were initiated following a survey, conducted in the winter of 2005-2006, of farmer opinions on weed management practices and their views on GR weeds and management tactics. The main survey findings supported the premise that growers were generally less aware of the significance of evolved herbicide resistance and did not have a high recognition of the strong selection pressure from herbicides on the evolution of herbicide-resistant (HR) weeds. The results of the benchmark study survey indicated that there are educational challenges to implement sustainable GR-based crop systems and helped guide the development of the field-scale benchmark study. Paramount is the need to develop consistent and clearly articulated science-based management recommendations that enable farmers to reduce the potential for HR weeds. This paper provides background perspectives about the use of GR crops, the impact of these crops and an overview of different opinions about the use of GR crops on agriculture and society, as well as defining how the benchmark study will address these issues.

  11. Benchmarking CRISPR on-target sgRNA design.

    PubMed

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been compared systematically. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA efficacy prediction was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools.
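
    A common way to run such a benchmark, sketched under assumptions (invented scores, rank correlation as the figure of merit; the paper's exact metrics may differ):

        # Rank-correlate each tool's predicted sgRNA efficacy with measured
        # efficacy on a held-out dataset; higher rho = better on-target design.
        import numpy as np
        from scipy.stats import spearmanr

        measured = np.array([0.81, 0.10, 0.55, 0.33, 0.92, 0.47])
        predictions = {
            "tool_A": np.array([0.70, 0.20, 0.60, 0.30, 0.88, 0.50]),
            "tool_B": np.array([0.40, 0.35, 0.50, 0.45, 0.60, 0.55]),
        }

        for tool, pred in predictions.items():
            rho, p = spearmanr(measured, pred)
            print(f"{tool}: Spearman rho = {rho:.2f} (p = {p:.3f})")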

  12. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory]; Hoffman, Forrest M. [Oak Ridge National Laboratory]; Mu, Mingquan [University of California, Irvine]; Randerson, James T. [University of California, Irvine]; Riley, William J. [Lawrence Berkeley National Laboratory]

    2016-05-09

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  13. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine]; Randerson, James T. [University of California, Irvine]; Riley, William J. [Lawrence Berkeley National Laboratory]; Hoffman, Forrest M. [Oak Ridge National Laboratory]

    2016-05-02

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  14. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  15. Rapidity window dependences of higher order cumulants and diffusion master equation

    NASA Astrophysics Data System (ADS)

    Kitazawa, Masakiyo

    2015-10-01

    We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that a variety of information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
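
    For reference, the cumulants discussed here follow from the standard cumulant generating function of the conserved charge Q in the window (textbook definitions, not results specific to the paper):

        \[ K(\theta) = \ln \langle e^{\theta Q} \rangle, \qquad \langle Q^n \rangle_{\mathrm{c}} = \left. \frac{d^n K}{d\theta^n} \right|_{\theta = 0}, \]

    so that, with \( \delta Q = Q - \langle Q \rangle \), the second and fourth cumulants are \( \langle Q^2 \rangle_{\mathrm{c}} = \langle \delta Q^2 \rangle \) and \( \langle Q^4 \rangle_{\mathrm{c}} = \langle \delta Q^4 \rangle - 3 \langle \delta Q^2 \rangle^2 \); deviations of these from their equilibrium values are the non-Gaussian signals whose rapidity-window dependence the paper computes.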

  16. An Analysis of Peer-Reviewed Scores and Impact Factors with Different Citation Time Windows: A Case Study of 28 Ophthalmologic Journals

    PubMed Central

    Liu, Xue-Li; Gai, Shuang-Shuang; Zhang, Shi-Le; Wang, Pu

    2015-01-01

    Background An important attribute of the traditional impact factor was the controversial 2-year citation window. So far, several scholars have proposed using different citation time windows for evaluating journals. However, there is no confirmation of whether a longer citation time window would be better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? In order to understand these questions, we made a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed by the Science Citation Index Expanded (SCIE) database. Methods The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (including 2IF, 3IF, 4IF, 5IF, and 6IF) of 28 ophthalmologic journals were computed and compared in accordance with each impact factor’s definition and formula, using the citation analysis function of the Web of Science (WoS) database. An analysis of the correlation between impact factors with different citation time windows and peer-reviewed scores was carried out. Results Although impact factor values with different citation time windows were different, there was a high level of correlation between them when it came to evaluating journals. In the current study, for ophthalmologic journals’ impact factors with different time windows in 2013, 3IF and 4IF seemed the ideal ranges for comparison, when assessed in relation to peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited peak age of documents published by ophthalmologic journals. Research Limitations Our study is based on ophthalmology journals and we only analyze the impact factors with different citation time windows in 2013, so it has yet to be ascertained whether other disciplines (especially those with a later cited peak) or other years would follow the same or similar patterns. Originality/Value We designed the survey questionnaire ourselves, specifically to assess the real influence of journals. We used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation. PMID:26295157
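
    The impact factors being compared all share one construction; with a citation window of n years (our notation, matching the traditional definition at n = 2):

        \[ \mathrm{IF}_{n}(Y) = \frac{\sum_{j=1}^{n} C_{Y}(Y-j)}{\sum_{j=1}^{n} P(Y-j)}, \]

    where \( C_{Y}(Y-j) \) is the number of citations received in year Y by items the journal published in year Y−j, and \( P(Y-j) \) is the number of citable items published in year Y−j; 2IF through 6IF correspond to n = 2 through n = 6.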

  17. An Analysis of Peer-Reviewed Scores and Impact Factors with Different Citation Time Windows: A Case Study of 28 Ophthalmologic Journals.

    PubMed

    Liu, Xue-Li; Gai, Shuang-Shuang; Zhang, Shi-Le; Wang, Pu

    2015-01-01

    An important attribute of the traditional impact factor was the controversial 2-year citation window. So far, several scholars have proposed using different citation time windows for evaluating journals. However, there is no confirmation of whether a longer citation time window would be better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? In order to understand these questions, we made a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed by the Science Citation Index Expanded (SCIE) database. The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (including 2IF, 3IF, 4IF, 5IF, and 6IF) of 28 ophthalmologic journals were computed and compared in accordance with each impact factor's definition and formula, using the citation analysis function of the Web of Science (WoS) database. An analysis of the correlation between impact factors with different citation time windows and peer-reviewed scores was carried out. Although impact factor values with different citation time windows were different, there was a high level of correlation between them when it came to evaluating journals. In the current study, for ophthalmologic journals' impact factors with different time windows in 2013, 3IF and 4IF seemed the ideal ranges for comparison, when assessed in relation to peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited peak age of documents published by ophthalmologic journals. Our study is based on ophthalmology journals and we only analyze the impact factors with different citation time windows in 2013, so it has yet to be ascertained whether other disciplines (especially those with a later cited peak) or other years would follow the same or similar patterns. We designed the survey questionnaire ourselves, specifically to assess the real influence of journals. We used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation.

  18. Developing Starlight connections with UNESCO sites through the Biosphere Smart

    NASA Astrophysics Data System (ADS)

    Marin, Cipriano

    2015-08-01

    The large number of UNESCO Sites around the world, in outstanding settings ranging from small islands to cities, makes it possible to build and share a comprehensive knowledge base on good practices and policies for the preservation of night skies, consistent with the protection of the associated scientific, natural, and cultural values. In this context, the Starlight Initiative and other organizations such as IDA play a catalytic role in an essential international process to promote comprehensive, holistic approaches to dark sky preservation, astronomical observation, environmental protection, responsible lighting, sustainable energy, climate change, and global sustainability. Many of these places have the potential to become models of excellence that foster the recovery of dark skies and their defence against light pollution, including some case studies mentioned in the Portal to the Heritage of Astronomy. Fighting light pollution and recovering the starry sky are already elements of a new emerging culture in biosphere reserves and world heritage sites committed to acting on climate change and sustainable development. Over thirty territories, including biosphere reserves and world heritage sites, have developed successful initiatives to ensure night sky quality and promote sustainable lighting. Clear night skies also provide sustainable income opportunities, as tourists and visitors are eagerly looking for sites with impressive night skies. Taking into account the high visibility of UNESCO sites and their ability to replicate experiences across networks, the Starlight Initiative has launched an action in cooperation with Biosphere Smart aimed at promoting Benchmark sites. Biosphere Smart is a global observatory created in partnership with the UNESCO MaB Programme to share good practices and experiences among UNESCO sites. The Benchmark sites window allows access to all the information on the most relevant astronomical heritage sites, dark sky protected areas, and other places committed to the preservation of the values associated with the night sky. This is a new step ahead in our common task of protecting the starry skies at UNESCO sites.

  19. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation.

  20. Simulations of the propagation of multiple-FM smoothing by spectral dispersion on OMEGA EP

    DOE PAGES

    Kelly, J. H.; Shvydky, A.; Marozas, J. A.; ...

    2013-02-18

    A one-dimensional (1-D) smoothing by spectral dispersion (SSD) system for smoothing focal-spot nonuniformities using multiple modulation frequencies has been commissioned on one long-pulse beamline of OMEGA EP, the first use of such a system in a high-energy laser. Frequency modulation (FM) to amplitude modulation (AM) conversion in the infrared (IR) output, frequency conversion, and final optics affected the accumulation of B-integral in that beamline. Modeling of this FM-to-AM conversion with the code Miró was used as input to set the beamline performance limits for picket (short) pulses with multi-FM SSD applied. This article first describes that modeling. The 1-D SSD analytical model of Chuang is first extended to the case of multiple modulators and then used to benchmark Miró simulations. Comparison is also made to an alternative analytic model developed by Hocquet et al. With the confidence engendered by this benchmarking, Miró results for multi-FM SSD applied on OMEGA EP are then presented. The relevant output sections of the OMEGA EP Laser System are described. The additional B-integral in OMEGA EP IR components upstream of the frequency converters due to AM is modeled. The importance of locating the image of the SSD dispersion grating at the frequency converters is demonstrated. Finally, since frequency conversion is not performed in OMEGA EP’s target chamber, the additional AM due to propagation to the target chamber’s vacuum window is modeled.
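
    For background, the FM-to-AM mechanism can be stated with a standard identity (not the paper's derivation): a sinusoidally phase-modulated field is a comb of Bessel-weighted sidebands,

        \[ E(t) = E_0\, e^{i\delta \sin(\omega_m t)} = E_0 \sum_{n=-\infty}^{\infty} J_n(\delta)\, e^{i n \omega_m t}, \]

    where δ is the modulation depth and ω_m the modulation frequency. Pure FM has constant |E(t)|; any spectrally dependent amplitude or phase change across the sidebands (from gain shaping, frequency conversion, or dispersion in the final optics) breaks this balance and converts FM into AM.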

  1. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows.

    PubMed

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install, and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
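
    PSOM's central idea, a pipeline as plain data plus an engine that runs jobs when dependencies are met and skips jobs whose outputs already exist, can be sketched in a few lines; this is a serial Python analogy with invented job names, not PSOM's Octave/Matlab implementation:

        # Toy dependency-driven pipeline executor with restart logic.
        import os

        pipeline = {
            "motion_correct": {"command": lambda: open("mc.out", "w").write("mc"),
                               "inputs": [], "outputs": ["mc.out"]},
            "smooth":         {"command": lambda: open("sm.out", "w").write("sm"),
                               "inputs": ["mc.out"], "outputs": ["sm.out"]},
        }

        def run(pipeline):
            done = set()
            while len(done) < len(pipeline):
                progress = False
                for name, job in pipeline.items():
                    if name in done:
                        continue
                    if not all(os.path.exists(f) for f in job["inputs"]):
                        continue                       # dependencies not ready
                    if all(os.path.exists(f) for f in job["outputs"]):
                        print(f"{name}: outputs up to date, skipping")
                    else:
                        print(f"{name}: running")
                        job["command"]()
                    done.add(name)
                    progress = True
                if not progress:
                    raise RuntimeError("unsatisfiable dependencies")

        run(pipeline)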

  2. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows

    PubMed Central

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575

  3. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gylenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (like NUMA memory layouts, memory contention, cache effects, etc.) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable to allow a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
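
    CLOMP itself is C/OpenMP, but the underlying measurement idea (run many tiny work units serially and through a thread pool, then attribute the per-task time difference to threading overhead) fits in a few lines. The sketch below is a loose Python analogy of that idea, not CLOMP's methodology or parameters:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(n=1000):
    # Deliberately small work unit, so scheduling overhead dominates.
    s = 0
    for i in range(n):
        s += i
    return s

def bench(n_tasks=1000):
    t0 = time.perf_counter()
    for _ in range(n_tasks):
        tiny_task()
    serial = time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=4) as pool:
        t0 = time.perf_counter()
        list(pool.map(tiny_task, [1000] * n_tasks))
        threaded = time.perf_counter() - t0

    per_task_us = (threaded - serial) / n_tasks * 1e6
    print(f"serial {serial:.3f}s, threaded {threaded:.3f}s, "
          f"overhead per task roughly {per_task_us:.1f} us")

bench()
```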

  4. Derivation of Draft Ecological Soil Screening Levels for TNT and RDX Utilizing Terrestrial Plant and Soil Invertebrate Toxicity Benchmarks

    DTIC Science & Technology

    2012-11-01

    Only table-of-contents and text fragments survive extraction: the report derives terrestrial plant-based draft Eco-SSL values for TNT and RDX weathered-and-aged in SSL or TSL soils, using growth benchmarks for alfalfa, barnyard grass, and perennial ryegrass. Toxicity studies were conducted using plant species including the dicotyledonous symbiotic species alfalfa (Medicago sativa L.) and monocotyledonous species.

  5. Computed Tomographic Window Setting for Bronchial Measurement to Guide Double-Lumen Tube Size.

    PubMed

    Seo, Jeong-Hwa; Bae, Jinyoung; Paik, Hyesun; Koo, Chang-Hoon; Bahk, Jae-Hyon

    2018-04-01

    The bronchial diameter measured on computed tomography (CT) can be used to guide double-lumen tube (DLT) sizes objectively. The bronchus is known to be measured most accurately in the so-called bronchial CT window. The authors investigated whether using the bronchial window results in the selection of more appropriately sized DLTs than using the other windows. The work comprised a CT image analysis and a prospective randomized study at a tertiary hospital in adults receiving left-sided DLTs. The authors simulated selection of DLT sizes based on the left bronchial diameters measured in the lung (width 1,500 Hounsfield units [HU] and level -700 HU), bronchial (1,000 HU and -450 HU), and mediastinal (400 HU and 25 HU) CT windows. Furthermore, patients were randomly assigned to undergo imaging with either the bronchial or mediastinal window to guide DLT sizes. Using the underwater seal technique, the authors assessed whether the DLT was appropriately sized, undersized, or oversized for the patient. On 130 CT images, the bronchial diameter (9.9 ± 1.2 mm v 10.5 ± 1.3 mm v 11.7 ± 1.3 mm) and the selected DLT size differed across the lung, bronchial, and mediastinal windows, respectively (p < 0.001). In 13 patients (17%), the bronchial diameter measured in the lung window suggested DLTs too small (28 Fr) for adults. In the prospective study, oversized tubes were chosen less frequently in the bronchial window than in the mediastinal window (6/110 v 23/111; risk ratio 0.38; 95% CI 0.19-0.79; p = 0.003). No tubes were undersized after measurements in these two windows. The bronchial measurement in the bronchial window guided more appropriately sized DLTs compared with the lung or mediastinal windows.

  6. Bayesian distributed lag interaction models to identify perinatal windows of vulnerability in children's health.

    PubMed

    Wilson, Ander; Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Wright, Robert O; Wright, Rosalind J; Coull, Brent A

    2017-07-01

    Epidemiological research supports an association between maternal exposure to air pollution during pregnancy and adverse children's health outcomes. Advances in exposure assessment and statistics allow for estimation of both critical windows of vulnerability and exposure effect heterogeneity. Simultaneous estimation of windows of vulnerability and effect heterogeneity can be accomplished by fitting a distributed lag model (DLM) stratified by subgroup. However, this can provide an incomplete picture of how effects vary across subgroups because it does not allow for subgroups to have the same window but different within-window effects or to have different windows but the same within-window effect. Because the timing of some developmental processes is common across subpopulations of infants while for others it differs across subgroups, both scenarios are important to consider when evaluating health risks of prenatal exposures. We propose a new approach that partitions the DLM into a constrained functional predictor that estimates windows of vulnerability and a scalar effect representing the within-window effect directly. The proposed method allows for heterogeneity in only the window, only the within-window effect, or both. In a simulation study we show that a model assuming a shared component across groups results in lower bias and mean squared error for the estimated windows and effects when that component is in fact constant across groups. We apply the proposed method to estimate windows of vulnerability in the association between prenatal exposures to fine particulate matter and each of birth weight and asthma incidence, and estimate how these associations vary by sex and maternal obesity status in a Boston-area prospective pre-birth cohort study.
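
    To make the distributed-lag idea concrete, the toy sketch below regresses an outcome on a lagged exposure matrix while constraining the lag coefficients to a smooth low-dimensional basis, then reads off the "window" as the lags with the largest estimated effect. This is a generic constrained DLM, not the authors' Bayesian interaction model, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 500, 30                         # subjects, weekly exposure lags
X = rng.normal(size=(n, L))            # exposure history per subject
true_w = np.exp(-0.5 * ((np.arange(L) - 10) / 3) ** 2)  # window near lag 10
y = X @ (0.3 * true_w) + rng.normal(scale=1.0, size=n)

# Constrain lag coefficients to a smooth basis: theta = B @ beta.
t = np.arange(L) / (L - 1)
B = np.vander(t, 5, increasing=True)   # low-order polynomial basis
beta, *_ = np.linalg.lstsq(X @ B, y, rcond=None)
theta_hat = B @ beta                   # estimated lag-effect curve

lags = np.where(theta_hat > 0.5 * theta_hat.max())[0]
print("estimated window of vulnerability: lags", lags.min(), "to", lags.max())
```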

  7. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n(1/2) values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
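
    The flavor of this analysis is straightforward to reproduce: treat each benchmark as a vector of scores across machines, correlate the vectors, and cluster. The sketch below uses synthetic scores whose noise levels are arbitrary stand-ins for the reported NPB/LINPACK data:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
peak = rng.uniform(1, 100, size=40)    # peak GFLOP/s of 40 machines
noise = {"LINPACK": .10, "EP": .50, "CG": .30, "IS": .30,
         "LU": .20, "SP": .20, "MG": .15, "FT": .15, "BT": .15}
scores = {b: peak * rng.uniform(1 - s, 1 + s, peak.size)
          for b, s in noise.items()}

# Correlation distance between benchmarks, then hierarchical clustering.
names = list(scores)
corr = np.corrcoef(np.array([scores[name] for name in names]))
Z = linkage(squareform(1.0 - corr, checks=False), method="average")
print(dendrogram(Z, labels=names, no_plot=True)["ivl"])  # grouped leaf order
```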

  8. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-10-01

    Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments by using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  9. Using a visual plate waste study to monitor menu performance.

    PubMed

    Connors, Priscilla L; Rozell, Sarah B

    2004-01-01

    Two visual plate waste studies were conducted in 1-week phases over a 1-year period in an acute care hospital. A total of 383 trays were evaluated in the first phase and 467 in the second. Food items were ranked for consumption from a low (1) to high (6) score, with a score of 4.0 set as the benchmark denoting a minimum level of acceptable consumption. In the first phase two entrees, four starches, all of the vegetables, sliced white bread, and skim milk scored below the benchmark. As a result six menu items were replaced and one was modified. In the second phase all entrees scored at or above 4.0, as did seven vegetables, and a dinner roll that replaced sliced white bread. Skim milk continued to score below the benchmark. A visual plate waste study assists in benchmarking performance, planning menu changes, and assessing effectiveness.
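
    The scoring rule described above reduces to a small computation: average each item's 1-to-6 consumption scores and flag anything below the 4.0 benchmark. A minimal sketch with invented scores:

```python
# Items scored 1 (low consumption) to 6 (high); below 4.0 means revise.
scores = {
    "entree_a": [5, 4, 6, 3, 5],
    "vegetable_b": [2, 3, 4, 3, 2],
    "skim_milk": [3, 2, 3, 4, 3],
}
BENCHMARK = 4.0
for item, s in scores.items():
    mean = sum(s) / len(s)
    status = "ok" if mean >= BENCHMARK else "below benchmark, revise"
    print(f"{item:12s} mean={mean:.1f}  {status}")
```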

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MOSTELLER, RUSSELL D.

    Previous studies have indicated that ENDF/B-VII preliminary releases β-2 and β-3, predecessors to the recent initial release of ENDF/B-VII.0, produce significantly better overall agreement with criticality benchmarks than does ENDF/B-VI. However, one of those studies also suggests that improvements still may be needed for thermal plutonium cross sections. The current study substantiates that concern by examining criticality benchmarks for unreflected spheres of plutonium-nitrate solutions and for slightly and heavily borated mixed-oxide (MOX) lattices. Results are presented for the JEFF-3.1 and JENDL-3.3 nuclear data libraries as well as ENDF/B-VII.0 and ENDF/B-VI. It is shown that ENDF/B-VII.0 tends to overpredict reactivity for thermal plutonium benchmarks over at least a portion of the thermal range. In addition, it is found that additional benchmark data are needed for the deep thermal range.

  11. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  12. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    PubMed

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.

  13. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  14. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will then not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-
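
    As background, the solvers being compared all descend from the Farrell-Clark formulation of the SLE; one common way to write it (standard background, with notation that may differ from the paper's) is

\[
S(\omega, t) = \mathcal{C}(\omega, t)\left[ \frac{\rho_i}{\gamma}\, G_s \otimes_i I + \frac{\rho_w}{\gamma}\, G_s \otimes_o S + \frac{\Delta\Phi(t)}{\gamma} \right],
\]

    where \(S\) is the sea-level change, \(\mathcal{C}\) is the ocean function (1 over ocean, 0 over land), \(I\) is the ice-thickness change, \(G_s\) is the sea-level Green's function combining geoid and vertical-displacement responses, \(\otimes_i\) and \(\otimes_o\) denote spatio-temporal convolutions over the ice and ocean domains, \(\rho_i\) and \(\rho_w\) are ice and water densities, \(\gamma\) is surface gravity, and \(\Delta\Phi(t)/\gamma\) is a spatially uniform term enforcing mass conservation. Because \(S\) appears on both sides, every implementation solves the equation iteratively, and the numerical choices made in that iteration are exactly what this benchmark compares.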

  15. Difference of auditory brainstem responses by stimulating to round and oval window in animal experiments.

    PubMed

    Lee, Jyung Hyun; Park, Hyo Soon; Wei, Qun; Kim, Myoung Nam; Cho, Jin-Ho

    2017-01-02

    To ensure the safety and efficacy of implantable hearing aids, animal experiments are an essential developmental procedure; in particular, auditory brainstem responses (ABRs) can be used to verify the objective effectiveness of implantable hearing aids. This study measured and compared the ABRs generated when applying the same vibration stimuli to an oval window and round window. The ABRs were measured using a TDT system 3 (TDT, USA), while the vibration stimuli were applied to a round window and oval window in 4 guinea pigs using a piezo-electric transducer with a proper contact tip. A paired t-test was used to determine any differences between the ABR amplitudes when applying the stimulation to an oval window and round window. The paired t-test revealed a significant difference between the ABR amplitudes generated by the round and oval window stimulation (t = 10.079, α < .0001). Therefore, the results confirmed that the biological response to round window stimulation was not the same as that to oval window stimulation.
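
    The comparison reported above is a standard paired t-test; a minimal sketch (the amplitude values are invented placeholders, not the study's data):

```python
from scipy import stats

# ABR amplitudes (uV) for the same 4 animals under the two stimulation sites.
round_window = [1.82, 1.75, 1.90, 1.68]
oval_window = [1.21, 1.10, 1.33, 1.05]

t, p = stats.ttest_rel(round_window, oval_window)
print(f"paired t = {t:.3f}, p = {p:.4f}")
```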

  16. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence it is essential for the subduction zone community to be able to evaluate the ability and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  17. A review on the benchmarking concept in Malaysian construction safety performance

    NASA Astrophysics Data System (ADS)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most and can establish improvements in safety culture. This research was conducted to study awareness of the concept's importance, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals brought out different views on this concept. A comparison was made to show the different understandings of the benchmarking approach and of how safety performance can be benchmarked, yet these views converge on one mission: to evaluate objectives identified through benchmarking that will improve an organization's safety performance. Finally, the expected result from this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  18. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    NASA Astrophysics Data System (ADS)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period varies spatially and temporally depending on environmental factors like climate and soil. The study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. To this end we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than for cold years, (v) WF benchmarks differ by about 10-12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m³ t⁻¹ for arid areas and 841 m³ t⁻¹ for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China.
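
    A benchmark "set by the best 25 % of production" is a production-weighted percentile: sort grid cells from lowest to highest unit WF and take the WF at which cumulative production reaches one quarter. A sketch with synthetic cell data (the distributions below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
wf = rng.lognormal(mean=7.0, sigma=0.4, size=10_000)  # m3 per tonne, per cell
production = rng.uniform(10, 1000, size=10_000)       # tonnes per cell

order = np.argsort(wf)                    # best (lowest-WF) cells first
cum_share = np.cumsum(production[order]) / production.sum()
benchmark = wf[order][np.searchsorted(cum_share, 0.25)]
print(f"WF benchmark at best 25% of production: {benchmark:.0f} m3/t")
```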

  19. Relation between financial market structure and the real economy: comparison between clustering methods.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
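
    The step shared by all the compared methods can be sketched quickly: convert the return-correlation matrix into the standard correlation distance and feed it to a hierarchical clustering routine. DBHT itself is not available in scipy, so average linkage stands in below, and the returns are simulated:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
returns = rng.normal(size=(250, 20))      # 250 days x 20 stocks
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))        # standard correlation distance
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=5, criterion="maxclust")
print("cluster labels:", labels)
```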

  20. Pc as Physics Computer for Lhc ?

    NASA Astrophysics Data System (ADS)

    Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March of this year in the Physics Data Processing group of CERN's CN division, is described in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.

  1. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging. PMID:25786703

  2. Plasma diagnostics for x-ray driven foils at Z

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heeter, R F; Bailey, J E; Cuneo, M E

    We report the development of techniques to diagnose plasmas produced by X-ray photoionization of thin foils placed near the Z-pinch on the Sandia Z Machine. The development of 100+ TW X-ray sources enables access to novel plasma regimes, such as the photoionization equilibrium. To diagnose these plasmas one must simultaneously characterize both the foil and the driving pinch. The desired photoionized plasma equilibrium is only reached transiently for a 2-ns window, placing stringent requirements on diagnostic synchronization. We have adapted existing Sandia diagnostics and fielded an additional gated 3-crystal Johann spectrometer with dual lines of sight to meet these requirements. We present sample data from experiments in which 1 cm, 180 eV tungsten pinches photoionized foils composed of 200 Å Fe and 300 Å NaF co-mixed and sandwiched between 1000 Å layers of Lexan (CHO), and discuss the application of this work to benchmarking astrophysical models.

  3. Plasma diagnostics for x-ray driven foils at Z

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heeter, R. F.; Bailey, J. E.; Cuneo, M. E.

    We report the development of techniques to diagnose plasmas produced by x-ray photoionization of thin foils placed near the Z-pinch on the Sandia Z Machine. The development of 100+ TW x-ray sources enables access to novel plasma regimes, such as the photoionization equilibrium. To diagnose these plasmas one must simultaneously characterize both the foil and the driving pinch. The desired photoionized plasma equilibrium is only reached transiently for a 2-ns window, placing stringent requirements on diagnostic synchronization. We have adapted existing Sandia diagnostics and fielded an additional gated three-crystal Johann spectrometer with dual lines of sight to meet these requirements. We present sample data from experiments using 1-cm, 180-eV tungsten pinches to photoionize foils made of 200 Å Fe and 300 Å NaF co-mixed and sandwiched between 1000 Å layers of Lexan (C16H14O3), and discuss the application of this work to benchmarking astrophysical models.

  4. Key performance indicators to benchmark hospital information systems - a delphi study.

    PubMed

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all the stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals as well as at defined events (for example after IT introduction). Most of the experts stated that in their institutions no HIS benchmarking activities are being performed at the moment. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The identified indicators reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.

  5. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenstein, J; Nguyen, H; Roll, J

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment that meets a specific protocol's treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM software™. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, where the majority (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.

  6. Study of noise transmission through double wall aircraft windows

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1983-01-01

    Analytical and experimental procedures were used to predict the noise transmitted through double wall windows into the cabin of a twin-engine G/A aircraft. The analytical model was applied to optimize cabin noise through parametric variation of the structural and acoustic parameters. The parametric study includes mass addition, increase in plexiglass thickness, decrease in window size, increase in window cavity depth, depressurization of the space between the two window plates, replacement of the air cavity with a transparent viscoelastic material, change in stiffness of the plexiglass material, and different absorptive materials for the interior walls of the cabin. It was found that increasing the exterior plexiglass thickness and/or decreasing the total window size could achieve the proper amount of noise reduction for this aircraft. The total added weight to the aircraft is then about 25 lbs.
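
    As general acoustics background for why the mass-oriented modifications above help (this relation is not derived in the abstract), the field-incidence mass law estimates a single panel's transmission loss as

\[
\mathrm{TL} \approx 20 \log_{10}(f\, m) - 47 \ \mathrm{dB},
\]

    with \(f\) the frequency in Hz and \(m\) the surface mass density in kg/m^2, so doubling panel mass buys roughly 6 dB. Double-wall constructions with a cavity are attractive precisely because, above the cavity resonance, they can beat this single-panel limit without the full weight penalty.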

  7. Single-footprint retrievals for AIRS using a fast TwoSlab cloud-representation model and the SARTA all-sky infrared radiative transfer algorithm

    NASA Astrophysics Data System (ADS)

    DeSouza-Machado, Sergio; Larrabee Strow, L.; Tangborn, Andrew; Huang, Xianglei; Chen, Xiuhong; Liu, Xu; Wu, Wan; Yang, Qiguang

    2018-01-01

    One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2-4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT) which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP thermodynamic and cloud profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast model are used in this paper.
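
    To give a feel for the TwoSlab reduction, the sketch below collapses a multilayer cloud-water profile into one slab per phase (top pressure, bottom pressure, integrated path). It is a deliberately crude illustration of the idea only; the actual model's slab rules and units are more involved, and all numbers here are invented:

```python
import numpy as np

levels_hpa = np.linspace(100, 1000, 50)          # model pressure grid
ice = np.where((levels_hpa > 200) & (levels_hpa < 350), 0.02, 0.0)
liquid = np.where((levels_hpa > 700) & (levels_hpa < 900), 0.05, 0.0)

def to_slab(pressure, water):
    """Collapse a water-content profile to (top, bottom, total path)."""
    mask = water > 0
    if not mask.any():
        return None
    return pressure[mask].min(), pressure[mask].max(), water[mask].sum()

for phase, profile in [("ice", ice), ("liquid", liquid)]:
    print(phase, "slab (top hPa, bottom hPa, path):", to_slab(levels_hpa, profile))
```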

  8. Synthesis and coherent vibrational laser spectroscopy of putative molecular constituents in isoprene-derived secondary organic aerosol particles

    NASA Astrophysics Data System (ADS)

    Ebben, C. J.; Strick, B. F.; Upshur, M. A.; Chase, H. M.; Achtyl, J. L.; Thomson, R. J.; Geiger, F. M.

    2013-11-01

    SOA particle formation ranks among the least understood processes in the atmosphere, rooted in part in (a) the limited knowledge about SOA chemical composition; (b) the scarcity of concrete evidence for chemical structures; and (c) the limited availability, in pure and homogeneous form, of reference compounds needed for benchmarking and chemical identification. Here, we address these challenges by synthesizing and subjecting to physical and chemical analysis putative isoprene-derived SOA particle constituents. Our surface-selective spectroscopic analysis of these compounds is followed by comparison to synthetic SOA particles prepared at the Harvard Environmental Chamber (HEC) and to authentic SOA particles collected in a tropical forest environment, namely the Amazon Basin, where isoprene oxidation by OH radicals has been reported to dominate SOA particle formation (Martin et al., 2010b; Sun et al., 2003; Hudson et al., 2008; Yasmeen et al., 2010). We focus on the epoxides and tetraols that have been proposed to be present in the SOA particles. We characterize the compounds prepared here by a variety of physical measurements and polarization-resolved vibrational sum frequency generation (SFG), paying particular attention to the phase state (condensed vs. vapor) of four epoxides and two tetraols in contact with a fused silica window. We compare the spectral responses from the tetraol and epoxide model compounds with those obtained from the natural and synthetic SOA particle samples that were collected on filter substrates and pressed against a fused silica window, and discuss a possible match for the SFG response of one of the epoxides with that of the synthetic SOA particle material. We conclude our work by discussing how the approach described here will allow for the study of SOA particle formation pathways from first- and second-generation oxidation products by effectively "fast-forwarding" through the initial reaction steps of particle nucleation via a chemically resolved approach aimed at testing the underlying chemical mechanisms of SOA particle formation.

  9. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  10. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  11. Local implementation of the Essence of Care benchmarks.

    PubMed

    Jones, Sue

    This study aimed to understand clinical practice benchmarking from the perspective of nurses working in a large acute NHS trust and to determine whether the nurses perceived that their commitment to Essence of Care led to improvements in care, the factors that influenced their role in the process, and the organisational factors that influenced benchmarking. An ethnographic case study approach was adopted. Six themes emerged from the data, two of them organisational: leadership and the values and/or culture of the organisation. The findings suggested that the leadership ability of the Essence of Care link nurses and the value placed on this work by the organisation were key to the success of benchmarking. A model for successful implementation of the Essence of Care is proposed based on the findings of this study, which lends itself to testing by other organisations.

  12. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  13. Data in support of energy performance of double-glazed windows.

    PubMed

    Shakouri, Mahmoud; Banihashemi, Saeed

    2016-06-01

    This paper provides the data used in a research project to propose a new simplified windows rating system based on saved annual energy ("Developing an empirical predictive energy-rating model for windows by using Artificial Neural Network" (Shakouri Hassanabadi and Banihashemi Namini, 2012) [1], "Climatic, parametric and non-parametric analysis of energy performance of double-glazed windows in different climates" (Banihashemi et al., 2015) [2]). A full factorial simulation study was conducted to evaluate the performance of 26 different types of windows in a four-story residential building. In order to generalize the results, the selected windows were tested in four climates of cold, tropical, temperate, and hot and arid; and four different main orientations of North, West, South and East. The accompanied datasets include the annual saved cooling and heating energy in different climates and orientations by using the selected windows. Moreover, a complete dataset is provided that includes the specifications of 26 windows, climate data, month, and orientation of the window. This dataset can be used to make predictive models for energy efficiency assessment of double glazed windows.

  14. Computing an optimal time window of audiovisual integration in focused attention tasks: illustrated by studies on effect of age and prior knowledge.

    PubMed

    Colonius, Hans; Diederich, Adele

    2011-07-01

    The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for a theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Invoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reaction times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
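
    The decision rule can be caricatured in a few lines: for an onset disparity tau, compute the posterior probability of a common source and call the range of tau that clears a criterion the integration window. Everything below (distributions, parameters, criterion) is an illustrative choice, not the paper's calibrated model:

```python
import numpy as np

def posterior_common(tau, prior=0.5, sigma=80.0, spread=600.0):
    # Gaussian likelihood of the disparity under a common source vs a broad
    # uniform under independent sources; Bayes' rule gives the posterior.
    like_common = np.exp(-0.5 * (tau / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    like_indep = 1.0 / spread
    num = prior * like_common
    return num / (num + (1.0 - prior) * like_indep)

taus = np.arange(-400.0, 401.0, 10.0)
post = posterior_common(taus)
window = taus[post >= 0.5]
print(f"integration window: [{window.min():.0f}, {window.max():.0f}] ms")
```

    Raising `prior` in this sketch widens the window, which is qualitatively the dependence on prior probability that the second study reports.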

  15. Imaging windows for long-term intravital imaging

    PubMed Central

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure. PMID:28243510

  16. Imaging windows for long-term intravital imaging: General overview and technical insights.

    PubMed

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure.

  17. Comparison of the Intensity of Ventilation at Windows Exchange in the Room - Case Study

    NASA Astrophysics Data System (ADS)

    Kapalo, Peter; Voznyak, Orest

    2017-06-01

    Replacing old wooden windows with new plastic windows in old buildings greatly reduces building heat loss. Simpler maintenance and servicing of the windows is a further advantage, and new windows are characterized by better tightness. The aim of this article is to determine, by means of experimental measurements, how much the uncontrolled ventilation caused by window infiltration is reduced. The article presents experimental measurement of indoor air quality in a room in two phases. In the first phase the room is fitted with a 55-year-old wooden window; in the second phase the same room is fitted with a new plastic window. From the experimental measurements of indoor air quality, the intensity of ventilation by infiltration is calculated, and the resulting ventilation intensities are compared.
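
    The abstract does not spell out its calculation, but a common way to obtain ventilation intensity from indoor air quality measurements is the tracer-gas concentration-decay method applied to CO2; a minimal sketch with invented readings:

```python
import math

# After occupants leave, indoor CO2 decays toward the outdoor level; the
# decay rate of the excess concentration is the air change rate (ACH).
c_out = 420.0           # outdoor CO2 (ppm)
c1, t1 = 1600.0, 0.0    # indoor ppm at start (time in hours)
c2, t2 = 900.0, 2.0     # indoor ppm two hours later

ach = math.log((c1 - c_out) / (c2 - c_out)) / (t2 - t1)
print(f"air change rate: {ach:.2f} per hour")
```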

  18. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. Their hospitals were assigned to reference groups of a similar size and ownership, drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  19. School-Based Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Study

    ERIC Educational Resources Information Center

    Shirk, Stephen R.; Kaplinski, Heather; Gudmundsen, Gretchen

    2009-01-01

    The current study evaluated cognitive-behavioral therapy (CBT) for adolescent depression delivered in health clinics and counseling centers in four high schools. Outcomes were benchmarked to results from prior efficacy trials. Fifty adolescents diagnosed with depressive disorders were treated by eight doctoral-level psychologists who followed a…

  20. Low-E Retrofit Demonstration and Educational Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culp, Thomas D; Wiehagen, Joseph; Drumheller, S Craig

    The objective of this project was to demonstrate the capability of low-emissivity (low-E) storm windows / panels and low-E retrofit glazing systems to significantly and cost effectively improve the energy efficiency of both existing residential and commercial buildings. The key outcomes are listed below: RESIDENTIAL CASE STUDIES: (a) A residential case study in two large multifamily apartment buildings in Philadelphia showed a substantial 18-22% reduction in heating energy use and a 9% reduction in cooling energy use by replacing old clear glass storm windows with modern low-E storm windows. Furthermore, the new low-E storm windows reduced the overall apartment air leakage by an average of 10%. (b) Air leakage testing on interior low-E panels installed in a New York City multifamily building over windows with and without AC units showed that the effective leakage area of the windows was reduced by 77-95%. (c) To study the use of low-E storm windows in a warmer mixed climate with a balance of both heating and cooling, 10 older homes near Atlanta with single pane windows were tested with three types of exterior storm windows: clear glass, low-E glass with high solar heat gain, and low-E glass with lower solar heat gain. The storm windows significantly reduced the overall home air leakage by an average of 17%, or 3.7 ACH50. Considerable variability in the data made it difficult to draw strong conclusions about the overall energy usage, but for heating periods, the low-E storm windows showed approximately 15% heating energy savings, whereas clear storm windows were neutral in performance. For cooling periods, the low-E storm windows showed a wide range of performance from 2% to over 30% cooling energy savings. Overall, the study showed the potential for significantly more energy savings from using low-E glass versus no storm window or clear glass storm windows in warmer mixed climates, but it is difficult to conclusively say whether one type of low-E performed better than the other. COMMERCIAL CASE STUDIES: (a) A 12-story office building in Philadelphia was retrofitted by adding a double-pane low-E insulating glass unit to the existing single pane windows, to create a triple glazed low-E system. A detailed side-by-side comparison in two pairs of perimeter offices facing north and east showed a 39-60% reduction in heating energy use, a 9-36% reduction in cooling energy use, and a 10% reduction in peak electrical cooling demand. An analysis of utility bills estimated the whole building heating and cooling energy use was reduced by over 25%. Additionally, the retrofit window temperatures were commonly 20 degrees warmer on winter days, and 10-20 degrees cooler on summer days, leading to increased occupant comfort. (b) Two large 4-story office buildings in New Jersey were retrofitted with a similar system, but using two low-E coatings in the retrofit system. The energy savings are being monitored by a separate GPIC project; this work quantified the changes in glass surface temperatures, thermal comfort, and potential glass thermal stress. The low-E retrofit panels greatly reduced daily variations in the interior window surface temperatures, lowering the maximum temperature and raising the minimum temperature by over 20F compared to the original single pane windows with window film.
The number of hours of potential thermal discomfort, as measured by a deviation between mean radiant temperature and ambient air temperature of more than 3F, was reduced by 93 percent on the south orientation and over two-thirds on the west orientation. Overall, the low-E retrofit led to substantially improved occupant comfort with fewer periods of both overheating and feeling cold. (c) No significant thermal stress was observed in the New Jersey office building test window when using the low-E retrofit system over a variety of weather conditions. The surface temperature difference only exceeded 10F (500 psi thermal stress) for less than 1.5% of the monitored time, and in all cases, the maximum surface temperature difference never exceeded 35F (1,750 psi thermal stress). LOW-E STORM WINDOW OUTREACH AND EDUCATION PROGRAM: (a) The project team assisted the State of Pennsylvania in adding low-E storm windows as a cost effective weatherization measure on its priority list for the state weatherization assistance program. (b) No technical barriers that could hinder widespread application were identified in the case studies. However, educational barriers have been identified, in that weatherization personnel commonly misunderstand how the application of low-E storm windows differs from much more expensive full window replacement. (c) A package of educational materials was developed to help communicate the benefits of low-E storm windows and retrofits as a cost effective tool for weatherization personnel. (d) Using detailed thermal simulations, more accurate U-factor and solar heat gain coefficient (SHGC) values were determined for low-E storm windows installed over different primary windows. IN SUMMARY, this work confirmed the potential for low-E storm windows, panels, and retrofit systems to provide significant energy savings, reductions in air leakage, and improvements in thermal comfort in both residential and commercial existing buildings.
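
    Point (d) above turns on window U-factors. As a back-of-envelope illustration of why the U-factor matters, the sketch below applies the standard steady-state relation Q = U × A × ΔT; the U-factor values are typical handbook figures assumed for illustration, not results from this project.

```python
# Illustrative only: compare conductive heat loss for an assumed single-pane
# window versus the same window with a low-E storm panel added.
def window_heat_loss_w(u_factor: float, area_m2: float, delta_t_k: float) -> float:
    """Steady-state conductive heat loss Q = U * A * dT, in watts."""
    return u_factor * area_m2 * delta_t_k

# assumed U-factors (W/m2K): ~5.9 single pane, ~1.9 with a low-E storm panel
print(window_heat_loss_w(5.9, 1.5, 20.0))  # ~177 W
print(window_heat_loss_w(1.9, 1.5, 20.0))  # ~57 W
```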

  1. Research on the honeycomb restrain layer application to the high power microwave dielectric window

    NASA Astrophysics Data System (ADS)

    Zhang, Qingyuan; Shao, Hao; Huang, Wenhua; Guo, Letian

    2018-01-01

    Dielectric window breakdown is an important problem in high power microwave radiation. A honeycomb layer can suppress the multipactor in two directions to restrain dielectric window breakdown. This paper studies the effect of the honeycomb restrain layer on improving the dielectric window power capability. It also studies the multipactor suppression mechanism using electromagnetic particle-in-cell software, gives the design method, and carries out a verification experiment. The experimental results indicated that the honeycomb restrain layer can effectively double the power capability.

  2. Research on the honeycomb restrain layer application to the high power microwave dielectric window.

    PubMed

    Zhang, Qingyuan; Shao, Hao; Huang, Wenhua; Guo, Letian

    2018-01-01

    Dielectric window breakdown is an important problem in high power microwave radiation. A honeycomb layer can suppress the multipactor in two directions to restrain dielectric window breakdown. This paper studies the effect of the honeycomb restrain layer on improving the dielectric window power capability. It also studies the multipactor suppression mechanism using electromagnetic particle-in-cell software, gives the design method, and carries out a verification experiment. The experimental results indicated that the honeycomb restrain layer can effectively double the power capability.

  3. A study of methods of prediction and measurement of the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Forssen, B.; Wang, Y. S.; Crocker, M. J.

    1981-01-01

    Several aspects were studied. The SEA theory was used to develop a theoretical model to predict the transmission loss through an aircraft window. This work mainly consisted of the writing of two computer programs. One program predicts the sound transmission through a plexiglass window (the case of a single partition). The other program applies to the case of a plexiglass window with a window shade added (the case of a double partition with an air gap). The sound transmission through a structure was measured in experimental studies using several different methods in order that the accuracy and complexity of all the methods could be compared. Also, the measurements were conducted on the simple model of a fuselage (a cylindrical shell), on a real aircraft fuselage, and on stiffened panels.

  4. A study of methods of prediction and measurement of the transmission of sound through the walls of light aircraft

    NASA Astrophysics Data System (ADS)

    Forssen, B.; Wang, Y. S.; Crocker, M. J.

    1981-12-01

    Several aspects were studied. The SEA theory was used to develop a theoretical model to predict the transmission loss through an aircraft window. This work mainly consisted of the writing of two computer programs. One program predicts the sound transmission through a plexiglass window (the case of a single partition). The other program applies to the case of a plexiglass window with a window shade added (the case of a double partition with an air gap). The sound transmission through a structure was measured in experimental studies using several different methods in order that the accuracy and complexity of all the methods could be compared. Also, the measurements were conducted on the simple model of a fuselage (a cylindrical shell), on a real aircraft fuselage, and on stiffened panels.

  5. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    PubMed

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The relaxation adopted in the orienteering problem with soft time windows (OPSTW) studied here is a late service relaxation that allows linearly penalized late services to customers. We solve this problem heuristically by considering a hybrid iterated local search. The results of the computational study show that the proposed approach is able to achieve promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated test instances of the OPSTW, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.
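
    As a minimal sketch of the linear late-service penalty described above (the names, the penalty rate, and the maximum-lateness cutoff are illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    profit: float        # profit collected if the customer is served
    due_time: float      # end of the soft time window
    max_lateness: float  # lateness beyond which service is not allowed

def collected_profit(c: Customer, arrival: float, penalty_rate: float = 1.0) -> float:
    """Profit for serving a customer, with linearly penalized late service."""
    lateness = max(0.0, arrival - c.due_time)
    if lateness > c.max_lateness:
        return 0.0  # service too late: infeasible, no profit collected
    return c.profit - penalty_rate * lateness
```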

  6. Mesothelial cell proliferation in the scala tympani: a reaction to the rupture of the round window membrane.

    PubMed

    Sone, M

    1998-10-01

    The inner layer of the round window membrane is composed of mesothelial cells and this mesothelial cell layer extends to the scala tympani. This study describes the histopathologic findings of temporal bone analysis from a patient with bilateral perilymphatic fistula of the round window membrane. The left ear showed proliferation of mesothelial cells in the scala tympani of the basal turn adjoining the round window membrane. This cell proliferation is thought to be a reaction to the rupture of the round window membrane.

  7. Window acoustic study for advanced turboprop aircraft

    NASA Technical Reports Server (NTRS)

    Prydz, R. A.; Balena, F. J.

    1984-01-01

    An acoustic analysis was performed to establish window designs for advanced turboprop powered aircraft. The window transmission loss requirements were based on A-weighted interior noise goals of 80 and 75 dBA. The analytical results showed that a triple pane window consisting of two glass outer panes and an inner pane of acrylic would provide the required transmission loss and meet the sidewall space limits. Two window test articles were fabricated for laboratory evaluation and verification of the predicted transmission loss. Procedures for performing laboratory tests are presented.

  8. Window Observational Research Facility (WORF)

    NASA Technical Reports Server (NTRS)

    Pelfrey, Joseph; Sledd, Annette

    2007-01-01

    This viewgraph document concerns the Window Observational Research Facility (WORF) Rack, a unique facility designed for use with the US Lab Destiny Module window. WORF will provide valuable resources for Earth Science payloads along with serving the purpose of protecting the lab window. The facility can be used for remote sensing instrumentation test and validation in a shirt sleeve environment. WORF will also provide a training platform for crewmembers to do orbital observations of other planetary bodies. WORF payloads will be able to conduct terrestrial studies utilizing data collected through WORF and the lab window.

  9. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
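
    The benchmark dose approach applied here inverts a fitted dose-response model to find the exposure giving a specified extra risk. A minimal sketch with a single working-hours term (coefficients are illustrative; the paper's models also adjust for job-stress covariates):

```python
import math

def risk(hours: float, b0: float, b1: float) -> float:
    """Fitted logistic probability of a fatigue symptom at given daily hours."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * hours)))

def benchmark_duration(b0: float, b1: float, bmr: float = 0.05,
                       baseline_hours: float = 8.0) -> float:
    """Daily hours at which extra risk over baseline reaches the benchmark response."""
    p0 = risk(baseline_hours, b0, b1)
    target = p0 + bmr * (1.0 - p0)  # extra-risk definition of the BMR
    # invert the logistic: b0 + b1 * h = logit(target)
    return (math.log(target / (1.0 - target)) - b0) / b1

print(benchmark_duration(b0=-4.0, b1=0.25))  # illustrative coefficients only
```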

  10. Developing and Trialling an independent, scalable and repeatable IT-benchmarking procedure for healthcare organisations.

    PubMed

    Liebe, J D; Hübner, U

    2013-01-01

    Continuous improvements of IT-performance in healthcare organisations require actionable performance indicators, regularly conducted, independent measurements and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the questions about how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the aforementioned requirements. We chose a well established, regularly conducted (inter-) national IT-survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT-systems and functions, global user satisfaction and the resources of the IT-department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results a new benchmarking round which includes process indicators is currently being conducted.

  11. An approach to estimate body dimensions through constant body ratio benchmarks.

    PubMed

    Chao, Wei-Cheng; Wang, Eric Min-Yang

    2010-12-01

    Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.
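
    The estimation idea reduces to multiplying an easily measured dimension by a ratio assumed constant across gender and age. A minimal sketch (the ratio value and dimension names are placeholders, not the paper's 483 verified benchmarks):

```python
def estimate_dimension(measured_mm: float, cbr: float) -> float:
    """Estimate a target body dimension as easy-to-measure dimension x constant ratio."""
    return measured_mm * cbr

# hypothetical example: estimate a limb dimension from stature with a
# placeholder ratio of 0.146 derived from an existing anthropometric database
print(estimate_dimension(1700.0, 0.146))  # ~248 mm
```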

  12. A Mid-scala Cochlear Implant Electrode Design Achieves a Stable Post-surgical Position in the Cochlea of Patients Over Time-A Prospective Observational Study.

    PubMed

    Dees, Guido; Smits, Jeroen Jules; Janssen, A Miranda L; Hof, Janny R; Gazibegovic, Dzemal; Hoof, Marc van; Stokroos, Robert J

    2018-04-01

    Cochlear implant (CI) electrode design impacts the clinical performance of patients. Stability and the occurrence of electrode array migration, which is the postoperative movement of the electrode array, were investigated using a mid-scalar electrode array and postoperative image analysis. A prospective observational study was conducted. A mid-scalar electrode was surgically placed using a mastoidectomy, followed by a posterior tympanotomy and an extended round-window or cochleostomy insertion. A few days after surgery, and again 3 months later, Cone Beam Computed Tomography (CBCT) was performed. The two CBCT scans were fused, and the differences between the electrode positions in three dimensions were calculated (the migration). A migration greater than 0.5 mm was deemed clinically relevant. Fourteen subjects participated. The mid-scalar electrode migrated in one patient (7%). This did not lead to the extrusion of an electrode contact. The mean migration of every individual electrode contact in all patients was 0.36 mm (95% confidence interval 0.22-0.50 mm), which approximates the estimated measurement error of the CBCT technique. A mid-scalar electrode array achieves a stable position in the cochlea in a small but representative group of patients. The methods applied in this work can be used for providing postoperative feedback for surgeons and for benchmarking electrode designs.
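
    The migration measure reduces to a per-contact 3D displacement between the two co-registered CBCT scans, flagged against the 0.5 mm threshold. A minimal sketch (array shapes and values are illustrative):

```python
import numpy as np

def contact_migration_mm(pos_early: np.ndarray, pos_late: np.ndarray) -> np.ndarray:
    """Euclidean displacement per electrode contact between two (n, 3) mm arrays."""
    return np.linalg.norm(pos_late - pos_early, axis=1)

# illustrative 12-contact array shifted uniformly by 0.1 mm per axis
migration = contact_migration_mm(np.zeros((12, 3)), np.full((12, 3), 0.1))
clinically_relevant = migration > 0.5  # the study's relevance threshold
```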

  13. The impact of different climates on window and skylight design for daylighting and passive cooling and heating in residential buildings: A comparative study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sallal, K.A.

    1999-07-01

    The study aims to explore the effect of different climates on window and skylight design in residential buildings. The study house is evaluated against climates that have design opportunities for passive systems, with emphasis on passive cooling. The study applies a variety of methods to evaluate the design. It has found that earth sheltering and night ventilation have the potential to provide 12-29% and 25-77% of the cooling requirements respectively for the study house in the selected climates. The reduction of the glazing area from 174 ft² to 115 ft² has different impacts on the cooling energy cost in the different climates. In climates such as Fresno and Tucson, one should put the cooling energy savings as a priority for window design, particularly when determining the window size. In other climates such as Albuquerque, the priority of window design should be first given to heating savings requirements.

  14. The value of "liver windows" settings in the detection of small renal cell carcinomas on unenhanced computed tomography.

    PubMed

    Sahi, Kamal; Jackson, Stuart; Wiebe, Edward; Armstrong, Gavin; Winters, Sean; Moore, Ronald; Low, Gavin

    2014-02-01

    To assess if "liver window" settings improve the conspicuity of small renal cell carcinomas (RCC). Patients were analysed from our institution's pathology-confirmed RCC database that included the following: (1) stage T1a RCCs, (2) an unenhanced computed tomography (CT) abdomen performed ≤ 6 months before histologic diagnosis, and (3) age ≥ 17 years. Patients with multiple tumours, prior nephrectomy, von Hippel-Lindau disease, and polycystic kidney disease were excluded. The unenhanced CT was analysed, and the tumour locations were confirmed by using corresponding contrast-enhanced CT or magnetic resonance imaging studies. Representative single-slice axial, coronal, and sagittal unenhanced CT images were acquired in "soft tissue windows" (width, 400 Hounsfield unit (HU); level, 40 HU) and liver windows (width, 150 HU; level, 88 HU). In addition, single-slice axial, coronal, and sagittal unenhanced CT images of nontumourous renal tissue (obtained from the same cases) were acquired in soft tissue windows and liver windows. These data sets were randomized, unpaired, and were presented independently to 3 blinded radiologists for analysis. The presence or absence of suspicious findings for tumour was scored on a 5-point confidence scale. Eighty-three of 415 patients met the study criteria. Receiver operating characteristics (ROC) analysis, t test analysis, and kappa analysis were used. ROC analysis showed statistically superior diagnostic performance for liver windows compared with soft tissue windows (area under the curve of 0.923 vs 0.879; P = .0002). Kappa statistics showed "good" vs "moderate" agreement between readers for liver windows compared with soft tissue windows. Use of liver windows settings improves the detection of small RCCs on the unenhanced CT. Copyright © 2014 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  15. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent: the value of the tool kit, and the support that clinical practice benchmarking requires to be effective, are not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature and moving through to benchmarking activity in health services, and included not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also in the main descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  16. Symmetrical Windowing for Quantum States in Quasi-Classical Trajectory Simulations

    NASA Astrophysics Data System (ADS)

    Cotton, Stephen Joshua

    An approach has been developed for extracting approximate quantum state-to-state information from classical trajectory simulations which "quantizes" symmetrically both the initial and final classical actions associated with the degrees of freedom of interest using quantum number bins (or "window functions") which are significantly narrower than unit-width. This approach thus imposes a more stringent quantization condition on classical trajectory simulations than has been traditionally employed, while doing so in a manner that is time-symmetric and microscopically reversible. To demonstrate this "symmetric quasi-classical" (SQC) approach for a simple real system, collinear H + H2 reactive scattering calculations were performed [S.J. Cotton and W.H. Miller, J. Phys. Chem. A 117, 7190 (2013)] with SQC-quantization applied to the H2 vibrational degree of freedom (DOF). It was seen that the use of window functions of approximately 1/2-unit width led to calculated reaction probabilities in very good agreement with quantum mechanical results over the threshold energy region, representing a significant improvement over what is obtained using the traditional quasi-classical procedure. The SQC approach was then applied [S.J. Cotton and W.H. Miller, J. Chem. Phys. 139, 234112 (2013)] to the much more interesting and challenging problem of incorporating non-adiabatic effects into what would otherwise be standard classical trajectory simulations. To do this, the classical Meyer-Miller (MM) Hamiltonian was used to model the electronic DOFs, with SQC-quantization applied to the classical "electronic" actions of the MM model---representing the occupations of the electronic states---in order to extract the electronic state population dynamics. It was demonstrated that if one ties the zero-point energy (ZPE) of the electronic DOFs to the SQC windowing function's width parameter, this very simple SQC/MM approach is capable of quantitatively reproducing quantum mechanical results for a range of standard benchmark models of electronically non-adiabatic processes, including applications where "quantum" coherence effects are significant. Notably, among these benchmarks was the well-studied "spin-boson" model of condensed phase non-adiabatic dynamics, in both its symmetric and asymmetric forms---the latter of which many classical approaches fail to treat successfully. The SQC/MM approach to the treatment of non-adiabatic dynamics was next applied [S.J. Cotton, K. Igumenshchev, and W.H. Miller, J. Chem. Phys., 141, 084104 (2014)] to several recently proposed models of condensed phase electron transfer (ET) processes. For these problems, a flux-side correlation function framework modified for consistency with the SQC approach was developed for the calculation of thermal ET rate constants, and excellent accuracy was seen over wide ranges of non-adiabatic coupling strength and energetic bias/exothermicity. Significantly, the "inverted regime" in thermal rate constants (with increasing bias) known from Marcus Theory was reproduced quantitatively for these models---representing the successful treatment of another regime that classical approaches generally have difficulty in correctly describing. Relatedly, a model of photoinduced proton coupled electron transfer (PCET) was also addressed, and it was shown that the SQC/MM approach could reasonably model the explicit population dynamics of the photoexcited electron donor and acceptor states over the four parameter regimes considered. 
The potential utility of the SQC/MM technique lies in its stunning simplicity and the ease by which it may readily be incorporated into "ordinary" molecular dynamics (MD) simulations. In short, a typical MD simulation may be augmented to take non-adiabatic effects into account simply by introducing an auxiliary pair of classical "electronic" action-angle variables for each energetically viable Born-Oppenheimer surface, and time-evolving these auxiliary variables via Hamilton's equations (using the MM electronic Hamiltonian) in the same manner that the other classical variables---i.e., the coordinates of all the nuclei---are evolved forward in time. In a complex molecular system involving many hundreds or thousands of nuclear DOFs, the propagation of these extra "electronic" variables represents a modest increase in computational effort, and yet, the examples presented herein suggest that in many instances the SQC/MM approach will describe the true non-adiabatic quantum dynamics to a reasonable and useful degree of quantitative accuracy.
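
    The windowing step itself is simple to state: a trajectory's final classical action contributes to quantum state k only if it lands inside a narrow bin centred on the integer k. A minimal sketch of that assignment rule (gamma = 0.5 echoes the roughly 1/2-unit windows mentioned above; the full SQC procedure, including the symmetric treatment of initial conditions, is more involved):

```python
def window_assign(action: float, gamma: float = 0.5):
    """Assign a final classical action to the quantum state whose window contains it."""
    k = round(action)
    if abs(action - k) <= gamma / 2.0:
        return k        # action falls inside the window centred on integer k
    return None         # action falls between windows; trajectory is discarded

# state-to-state probabilities are then estimated as the fraction of accepted
# trajectories assigned to each k, renormalized over accepted trajectories
```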

  17. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus.

    PubMed

    Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik

    2011-09-22

    To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.

  18. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus

    PubMed Central

    2011-01-01

    Background To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Methods Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Results Recruitment was completed in December 2008 with 3994 evaluable patients. Conclusions This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. Trial registration NCT00681850 PMID:21939502

  19. Correlates of avian building strikes at a glass façade museum surrounded by avian habitat

    NASA Astrophysics Data System (ADS)

    Kahle, L.; Flannery, M.; Dumbacher, J. P.

    2013-12-01

    Bird window collisions are the second largest anthropogenic cause of bird deaths in the world. Effective mitigation requires an understanding of which birds are most likely to strike, when, and why. Here, we examine five years of avian window strike data from the California Academy of Sciences - a relatively new museum with significant glass façade situated in Golden Gate Park, San Francisco. We examine correlates of window-killed birds, including age, sex, season, and migratory or sedentary tendencies of the birds. We also examine correlates of window kills such as presence of habitat surrounding the building and overall window area. We found that males are almost three times more likely than females to mortally strike windows, and immature birds are three times more abundant than adults in our window kill dataset. Among seasons, strikes were not notably different in spring, summer, and fall; however, they were notably reduced in winter. There was no statistical effect of building orientation (north, south, east, or west), and the presence of avian habitat directly adjacent to windows had a minor effect. We also report ongoing studies examining various efforts to reduce window kill (primarily external decals and large electronic window blinds). We hope that improving our understanding of the causes of the window strikes will help us strategically reduce window strikes.

  20. Policy mapping for establishing a national emergency health policy for Nigeria

    PubMed Central

    Aliyu, Zakari Y

    2002-01-01

    Background The number of potential life years lost due to accidents and injuries, though poorly studied, has resulted in tremendous economic and social loss to Nigeria. Numerous socio-cultural, economic and political factors including the current epidemic of ethnic and religious conflicts act in concert in predisposing to and enabling the ongoing catastrophe of accidents and injuries in Nigeria. Methods Using "PolicyMaker", a Microsoft Windows®-based software package, and the information on accidents, injuries and emergency health care in Nigeria generated from literature review, content analysis of relevant documents, expert interviewing and consensus opinion, a model National Emergency Health Policy was designed and analyzed. A major point of analysis for the policy is its current political feasibility, including its opportunities and obstacles in the country. Results A model National Emergency Health Policy with policy goals, objectives, programs and evaluation benchmarks was generated. Critical analyses of potential policy problems, associated multiple players, diverging interests and implementation guidelines were developed. Conclusions "Political health modeling", a term proposed here, would be invaluable to policy makers and scholars in developing countries in assessing the political feasibility of policy making. Political modeling applied to the development of a NEHP in Nigeria would empower policy makers and the policy making process and would ensure a sustainable emergency health policy in Nigeria. PMID:12181080

  1. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    NASA Astrophysics Data System (ADS)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS was proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  2. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation.

    PubMed

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-09-07

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo(®) TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus(®) chamber. An EBT3(®) film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS was proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  3. Further potentials in the joint implementation of life cycle assessment and data envelopment analysis.

    PubMed

    Iribarren, Diego; Vázquez-Rowe, Ian; Moreira, María Teresa; Feijoo, Gumersindo

    2010-10-15

    The combined application of Life Cycle Assessment and Data Envelopment Analysis has been recently proposed to provide a tool for the comprehensive assessment of the environmental and operational performance of multiple similar entities. Among the acknowledged advantages of LCA+DEA methodology, eco-efficiency verification and avoidance of average inventories are usually highlighted. However, given the novelty of LCA+DEA methods, a large number of additional potentials remain unexplored. In this sense, some features are worth detailing given their potential to enhance LCA performance. Emphasis is laid on the improved interpretation of LCA results through the complementary use of DEA with respect to: (i) super-efficiency analysis to facilitate the selection of reference performers, (ii) inter- and intra-assessments of multiple data sets within any specific sector with benchmarking and trend analysis purposes, (iii) integration of an economic dimension in order to enrich sustainability assessments, and (iv) window analysis to evaluate environmental impact efficiency over a certain period of time. Furthermore, the capability of LCA+DEA methodology to be generally implemented in a wide range of scenarios is discussed. These further potentials are explained and demonstrated via the presentation of brief case studies based on real data sets. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Windows of sensitivity to toxic chemicals in the motor effects development.

    PubMed

    Ingber, Susan Z; Pohl, Hana R

    2016-02-01

    Many chemicals currently used are known to elicit nervous system effects. In addition, approximately 2000 new chemicals introduced annually have not yet undergone neurotoxicity testing. This review concentrated on motor development effects associated with exposure to environmental neurotoxicants to help identify critical windows of exposure and begin to assess data needs based on a subset of chemicals thoroughly reviewed by the Agency for Toxic Substances and Disease Registry (ATSDR) in Toxicological Profiles and Addenda. Multiple windows of sensitivity were identified that differed based on the maturity level of the neurological system at the time of exposure, as well as dose and exposure duration. Similar but distinct windows were found for both motor activity (GD 8-17 [rats], GD 12-14 and PND 3-10 [mice]) and motor function performance (insufficient data for rats, GD 12-17 [mice]). Identifying specific windows of sensitivity in animal studies was hampered by study designs oriented towards detection of neurotoxicity that occurred at any time throughout the developmental process. In conclusion, while this investigation identified some critical exposure windows for motor development effects, it demonstrates a need for more acute duration exposure studies based on neurodevelopmental windows, particularly during the exposure periods identified in this review. Published by Elsevier Inc.

  5. Windows of sensitivity to toxic chemicals in the motor effects development

    PubMed Central

    Ingber, Susan Z.; Pohl, Hana R.

    2017-01-01

    Many chemicals currently used are known to elicit nervous system effects. In addition, approximately 2000 new chemicals introduced annually have not yet undergone neurotoxicity testing. This review concentrated on motor development effects associated with exposure to environmental neurotoxicants to help identify critical windows of exposure and begin to assess data needs based on a subset of chemicals thoroughly reviewed by the Agency for Toxic Substances and Disease Registry (ATSDR) in Toxicological Profiles and Addenda. Multiple windows of sensitivity were identified that differed based on the maturity level of the neurological system at the time of exposure, as well as dose and exposure duration. Similar but distinct windows were found for both motor activity (GD 8–17 [rats], GD 12–14 and PND 3–10 [mice]) and motor function performance (insufficient data for rats, GD 12–17 [mice]). Identifying specific windows of sensitivity in animal studies was hampered by study designs oriented towards detection of neurotoxicity that occurred at any time throughout the developmental process. In conclusion, while this investigation identified some critical exposure windows for motor development effects, it demonstrates a need for more acute duration exposure studies based on neurodevelopmental windows, particularly during the exposure periods identified in this review. PMID:26686904

  6. Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states

    PubMed Central

    Shakil, Sadia; Lee, Chin-Hui; Keilholz, Shella Dawn

    2016-01-01

    A promising recent development in the study of brain function is the dynamic analysis of resting-state functional MRI scans, which can enhance understanding of normal cognition and alterations that result from brain disorders. One widely used method of capturing the dynamics of functional connectivity is sliding window correlation (SWC). However, in the absence of a “gold standard” for comparison, evaluating the performance of the SWC in typical resting-state data is challenging. This study uses simulated networks (SNs) with known transitions to examine the effects of parameters such as window length, window offset, window type, noise, filtering, and sampling rate on the SWC performance. The SWC time course was calculated for all node pairs of each SN and then clustered using the k-means algorithm to determine how resulting brain states match known configurations and transitions in the SNs. The outcomes show that the detection of state transitions and durations in the SWC is most strongly influenced by the window length and offset, followed by noise and filtering parameters. The effect of the image sampling rate was relatively insignificant. Tapered windows provide less sensitivity to state transitions than rectangular windows, which could be the result of the sharp transitions in the SNs. Overall, the SWC gave poor estimates of correlation for each brain state. Clustering based on the SWC time course did not reliably reflect the underlying state transitions unless the window length was comparable to the state duration, highlighting the need for new adaptive window analysis techniques. PMID:26952197
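
    For reference, the SWC computation under study is itself compact, and the window length and offset that the paper identifies as the dominant parameters appear explicitly in it. A minimal sketch with a rectangular window (a tapered window would weight each segment before correlating); variable names are illustrative:

```python
import numpy as np

def sliding_window_corr(x: np.ndarray, y: np.ndarray,
                        win_len: int, offset: int) -> np.ndarray:
    """Pearson correlation of two time series in windows of win_len, stepped by offset."""
    starts = range(0, len(x) - win_len + 1, offset)
    return np.array([np.corrcoef(x[s:s + win_len], y[s:s + win_len])[0, 1]
                     for s in starts])

# the SWC time courses for all node pairs can then be stacked and clustered
# (e.g., k-means) to estimate brain states, as done in the study
```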

  7. A comparative study of standard intensity-modulated radiotherapy and RapidArc planning techniques for ipsilateral and bilateral head and neck irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pursley, Jennifer, E-mail: jpursley@mgh.harvard.edu; Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA; Damato, Antonio L.

    The purpose of this study was to investigate class solutions using RapidArc volumetric-modulated arc therapy (VMAT) planning for ipsilateral and bilateral head and neck (H&N) irradiation, and to compare dosimetric results with intensity-modulated radiotherapy (IMRT) plans. A total of 14 patients who received ipsilateral and 10 patients who received bilateral head and neck irradiation were retrospectively replanned with several volumetric-modulated arc therapy techniques. For ipsilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the contralateral parotid, two 260° or 270° arcs, and two 210° arcs. For bilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the shoulders, and 3 arcs. All patients had a sliding-window-delivery intensity-modulated radiotherapy plan that was used as the benchmark for dosimetric comparison. For ipsilateral neck irradiation, a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid was dosimetrically comparable to intensity-modulated radiotherapy, with improved conformity (conformity index = 1.22 vs 1.36, p < 0.04) and lower contralateral parotid mean dose (5.6 vs 6.8 Gy, p < 0.03). For bilateral neck irradiation, 3-arc volumetric-modulated arc therapy techniques were dosimetrically comparable to intensity-modulated radiotherapy while also avoiding irradiation through the shoulders. All volumetric-modulated arc therapy techniques required fewer monitor units than sliding-window intensity-modulated radiotherapy to deliver treatment, with an average reduction of 35% for ipsilateral plans and 67% for bilateral plans. Thus, for ipsilateral head and neck irradiation a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid is recommended. For bilateral neck irradiation, 2- or 3-arc techniques are dosimetrically comparable to intensity-modulated radiotherapy, but more work is needed to determine the optimal approaches by disease site.

  8. A comparative study of standard intensity-modulated radiotherapy and RapidArc planning techniques for ipsilateral and bilateral head and neck irradiation.

    PubMed

    Pursley, Jennifer; Damato, Antonio L; Czerminska, Maria A; Margalit, Danielle N; Sher, David J; Tishler, Roy B

    2017-01-01

    The purpose of this study was to investigate class solutions using RapidArc volumetric-modulated arc therapy (VMAT) planning for ipsilateral and bilateral head and neck (H&N) irradiation, and to compare dosimetric results with intensity-modulated radiotherapy (IMRT) plans. A total of 14 patients who received ipsilateral and 10 patients who received bilateral head and neck irradiation were retrospectively replanned with several volumetric-modulated arc therapy techniques. For ipsilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the contralateral parotid, two 260° or 270° arcs, and two 210° arcs. For bilateral neck irradiation, the volumetric-modulated arc therapy techniques included two 360° arcs, two 360° arcs with avoidance sectors around the shoulders, and 3 arcs. All patients had a sliding-window-delivery intensity-modulated radiotherapy plan that was used as the benchmark for dosimetric comparison. For ipsilateral neck irradiation, a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid was dosimetrically comparable to intensity-modulated radiotherapy, with improved conformity (conformity index = 1.22 vs 1.36, p < 0.04) and lower contralateral parotid mean dose (5.6 vs 6.8 Gy, p < 0.03). For bilateral neck irradiation, 3-arc volumetric-modulated arc therapy techniques were dosimetrically comparable to intensity-modulated radiotherapy while also avoiding irradiation through the shoulders. All volumetric-modulated arc therapy techniques required fewer monitor units than sliding-window intensity-modulated radiotherapy to deliver treatment, with an average reduction of 35% for ipsilateral plans and 67% for bilateral plans. Thus, for ipsilateral head and neck irradiation a volumetric-modulated arc therapy technique using two 360° arcs with avoidance sectors around the contralateral parotid is recommended. For bilateral neck irradiation, 2- or 3-arc techniques are dosimetrically comparable to intensity-modulated radiotherapy, but more work is needed to determine the optimal approaches by disease site. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  9. A Better Benchmark Assessment: Multiple-Choice versus Project-Based

    ERIC Educational Resources Information Center

    Peariso, Jamon F.

    2006-01-01

    The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…

  10. Benchmarking: A Study of School and School District Effect and Efficiency.

    ERIC Educational Resources Information Center

    Swanson, Austin D.; Engert, Frank

    The "New York State School Report Card" provides a vehicle for benchmarking with respect to student achievement. In this study, additional tools were developed for making external comparisons with respect to achievement, and tools were added for assessing fiscal policy and efficiency. Data from school years 1993-94 through 1995-96 were…

  11. Short-Term Field Study Programs: A Holistic and Experiential Approach to Learning

    ERIC Educational Resources Information Center

    Long, Mary M.; Sandler, Dennis M.; Topol, Martin T.

    2017-01-01

    For business schools, AACSB and Middle States' call for more experiential learning is one reason to provide study abroad programs. Universities must attend to the demand for continuous improvement and employ metrics to benchmark and evaluate their relative standing among peer institutions. One such benchmark is the National Survey of Student…

  12. Benchmarking Investments in Advancement: Results of the Inaugural CASE Advancement Investment Metrics Study (AIMS). CASE White Paper

    ERIC Educational Resources Information Center

    Kroll, Judith A.

    2012-01-01

    The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…

  13. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    ERIC Educational Resources Information Center

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  14. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  15. Optimisation of the round window opening in cochlear implant surgery in wet and dry conditions: impact on intracochlear pressure changes.

    PubMed

    Mittmann, Philipp; Ernst, A; Mittmann, M; Todt, I

    2016-11-01

    To preserve residual hearing in cochlear implant candidates, the atraumatic insertion of the cochlear electrode has become a focus of cochlear implant research. In a previous study, intracochlear pressure changes during the opening of the round window membrane were investigated. In the current study, intracochlear pressure changes during opening of the round window membrane under dry and transfluid conditions were investigated. Round window openings were performed in an artificial cochlear model. Intracochlear pressure changes were measured using a micro-optical pressure sensor, which was placed in the apex. Openings of the round window membrane were performed under dry and wet conditions using a cannula and a diode laser. Statistically significant differences in the intracochlear pressure changes were seen between the different methods used for opening of the round window membrane. Lower pressure changes were seen by opening the round window membrane with the diode laser than with the cannula. A significant difference was seen between the dry and wet conditions. The atraumatic approach to the cochlea is assumed to be essential for the preservation of residual hearing. Opening of the round window under wet conditions produces a significant advantage in intracochlear pressure changes in comparison to dry conditions by limiting negative outward pressure.

  16. Benchmarking: measuring the outcomes of evidence-based practice.

    PubMed

    DeLise, D C; Leasure, A R

    2001-01-01

    Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.

  17. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  18. HRCT Correlation with Round Window Identification during Cochlear Implantation in Children.

    PubMed

    Pendem, Sai Kiran; Rangasami, Rajeswaran; Arunachalam, Ravi Kumar; Mohanarangam, Venkata Sai Pulivadulu; Natarajan, Paarthipan

    2014-01-01

    To determine the accuracy of High Resolution Computed Tomography (HRCT) temporal bone measurements in predicting the actual visualization of the round window niche as viewed through posterior tympanotomy (i.e. facial recess). This is a prospective study of 37 cochlear implant candidates, aged between 1 and 6 years, who were referred for HRCT temporal bone during the period December 2013 to July 2014. Cochlear implantation was done in 37 children (25 in the right ear and 12 in the left ear). The distance between the short process of incus and the round window niche and the distance between the oval window and the round window niche were measured preoperatively on sub-millimeter (0.7 mm) HRCT images. We classified the visibility of the round window niche based on the surgical view (i.e. through posterior tympanotomy) during surgery into three types: 1) Type 1- fully visible, 2) Type 2- partially visible, and 3) Type 3- difficult to visualize. The preoperative HRCT measurements were used to predict the type of visualization of the round window niche before surgery and correlated with the findings during surgery. The mean and standard deviation for the distance between the short process of incus and the round window niche and for the distance between the oval window and the round window niche for Types 1, 2, and 3 were 8.5 ± 0.2 mm and 3.2 ± 0.2 mm, 8.0 ± 0.4 mm and 3.8 ± 0.2 mm, 7.5 ± 0.2 mm and 4.4 ± 0.2 mm respectively, and showed statistically significant difference (P < 0.01) between them. The preoperative HRCT measurements had a sensitivity and specificity of 92.3% and 96.2%, respectively, in determining the actual visualization of the round window niche. This study shows preoperative HRCT temporal bone measurements are useful in predicting the actual visualization of the round window niche as viewed through posterior tympanotomy.

  19. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management strategy which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved strategy which aims to generate a higher portfolio return than the benchmark index return while minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error. In addition, the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach, which yields a 67% improvement in portfolio mean return compared with the existing model.
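
    To make the underlying optimization concrete, the sketch below fits long-only portfolio weights that minimize tracking error against a toy index. It is a generic least-squares tracker on synthetic data, not the authors' sum weighted model; all numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy data: 250 days of returns for 5 candidate stocks and a benchmark index.
    rng = np.random.default_rng(0)
    stock_returns = rng.normal(0.0005, 0.01, size=(250, 5))          # T x N
    true_w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
    index_returns = stock_returns @ true_w + rng.normal(0, 0.001, 250)

    def tracking_error(w):
        # Standard deviation of the difference between portfolio and index returns.
        return np.std(stock_returns @ w - index_returns)

    n = stock_returns.shape[1]
    result = minimize(
        tracking_error,
        x0=np.full(n, 1.0 / n),                                 # equal-weight start
        bounds=[(0.0, 1.0)] * n,                                # long-only weights
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    print("weights:", result.x.round(3))
    print("tracking error:", round(tracking_error(result.x), 6))
    ```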

  20. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
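
    As an illustration of what "encoding color" means in practice, the sketch below converts one video frame into a few common chromatic models with OpenCV, any channel of which a grayscale tracker could consume. The random stand-in frame is a placeholder, and the paper's full set of 10 color models is not reproduced here.

    ```python
    import cv2   # OpenCV
    import numpy as np

    # A stand-in frame (random noise) so the snippet runs without a video file;
    # in practice this would be a BGR frame read from a tracking sequence.
    frame = np.random.default_rng(0).integers(0, 256, size=(240, 320, 3), dtype=np.uint8)

    # A few common chromatic encodings; a grayscale tracker can be run on any
    # single channel, or extended to consume all channels of one encoding.
    encodings = {
        "gray":  cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
        "hsv":   cv2.cvtColor(frame, cv2.COLOR_BGR2HSV),
        "lab":   cv2.cvtColor(frame, cv2.COLOR_BGR2LAB),
        "ycrcb": cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb),
    }
    for name, img in encodings.items():
        print(name, img.shape, img.dtype)
    ```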

  1. Yoga for military service personnel with PTSD: A single arm study.

    PubMed

    Johnston, Jennifer M; Minami, Takuya; Greenwald, Deborah; Li, Chieh; Reinhardt, Kristen; Khalsa, Sat Bir S

    2015-11-01

    This study evaluated the effects of yoga on posttraumatic stress disorder (PTSD) symptoms, resilience, and mindfulness in military personnel. Participants completing the yoga intervention were 12 current or former military personnel who met the Diagnostic and Statistical Manual for Mental Disorders-Fourth Edition-Text Revision (DSM-IV-TR) diagnostic criteria for PTSD. Results were also benchmarked against other military intervention studies of PTSD using the Clinician Administered PTSD Scale (CAPS; Blake et al., 2000) as an outcome measure. Results of within-subject analyses supported the study's primary hypothesis that yoga would reduce PTSD symptoms (d = 0.768; t = 2.822; p = .009) but did not support the hypothesis that yoga would significantly increase mindfulness (d = 0.392; t = -0.9500; p = .181) and resilience (d = 0.270; t = -1.220; p = .124) in this population. Benchmarking results indicated that, as compared with the aggregated treatment benchmark (d = 1.074) obtained from published clinical trials, the current study's treatment effect (d = 0.768) was visibly lower, and compared with the waitlist control benchmark (d = 0.156), the treatment effect in the current study was visibly higher. (c) 2015 APA, all rights reserved.
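
    For readers who want to reproduce this style of effect-size benchmarking, the sketch below computes a within-subject Cohen's d from paired pre/post scores. The formula (mean change divided by the SD of change scores) is one common convention for single-arm designs, not necessarily the exact estimator used in the study, and the scores are invented.

    ```python
    import numpy as np

    def cohens_d_paired(pre, post):
        # Within-subject effect size: mean change / SD of the change scores.
        # One common convention for single-arm pre/post designs (an assumption;
        # the study's exact formula is not stated in the abstract).
        diff = np.asarray(pre, dtype=float) - np.asarray(post, dtype=float)
        return diff.mean() / diff.std(ddof=1)

    # Toy CAPS scores for 12 participants (hypothetical numbers, not study data).
    pre  = np.array([78, 85, 90, 70, 66, 95, 88, 74, 81, 69, 92, 77])
    post = np.array([60, 80, 72, 65, 60, 85, 70, 70, 75, 58, 84, 73])
    print(round(cohens_d_paired(pre, post), 3))
    ```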

  2. Potential Deep Seated Landslide Mapping from Various Temporal Data - Benchmark, Aerial Photo, and SAR

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung

    2016-04-01

    Landslides are not hazards until human development reaches areas of high landslide potential. This study attempts to map deep-seated landslides before their initiation. A study area in central Taiwan was selected; its geological setting is quite distinctive, consisting of slate. The major bedding direction in this area is northeast, with dips ranging from 30 to 75 degrees to the southeast. Several deep-seated landslides on the dip side of the bedding were discovered following rainfall events. Benchmark data from 2002 to 2009 are used in this study; the benchmarks were measured along Highway No. 14B, a road constructed along the mountain ridgeline. Taiwan is located between oceanic plates and a continental plate, and mountain elevations are rising according to most GPS stations and benchmarks on the island. The same trend is observed in the benchmarks of this area, but some benchmarks are located within landslide areas, so their elevation change is below average and even negative. Aerial photos from 1979 to 2007 are used for orthophoto generation. Changes in land use over those 30 years are obvious, and enlargement of the river channel is also observed in this area. Both the benchmarks and the aerial photos indicate that landslide potential exists in this area, but the size of the landslide is not easy to define from these data alone. Thus, SAR data are also used: DInSAR and SBAS analyses are applied to ALOS/PALSAR scenes from 2006 to 2010. DInSAR analysis shows that landslides can be mapped, but the error is not easy to reduce; it likely arises from several conditions such as vegetation, cloud cover, and atmospheric water vapor. To overcome this problem, time-series analysis (SBAS) is adopted. The SBAS results for this area show that large deep-seated landslides are readily mapped and the accuracy of the vertical displacement is reasonable.

  3. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial, which compares physicians' individual performance with that of a peer group, evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk (glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP)), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on the levels of control achieved for the three critical quality indicators compared with their colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964), 34.9% (LDL-C; n = 1350/3865) and 27.3% (SBP; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  4. Study of wavefront error and polarization of a side mounted infrared window

    NASA Astrophysics Data System (ADS)

    Liu, Jiaguo; Li, Lin; Hu, Xinqi; Yu, Xin

    2008-03-01

    The wavefront error and polarization of a side mounted infrared window made of ZnS are studied. Infrared windows suffer from temperature gradients and stress during launch. Generally, the temperature gradient changes the refractive index of the material, whereas stress produces deformation and birefringence. In this paper, a thermal finite element analysis (FEA) of an IR window is presented. For this purpose, we employed the FEA program Ansys to obtain the time-varying temperature field. The deformation and stress of the window are derived from a structural FEA with the aerodynamic force and the previously obtained temperature field as the loads. The deformation, temperature field, stress field, ray tracing and Jones calculus are used to calculate the wavefront error and the change of polarization state.

  5. Ecophysiological function of leaf 'windows' in Lithops species - 'Living Stones' that grow underground.

    PubMed

    Martin, C E; Brandmeyer, E A; Ross, R D

    2013-01-01

    Leaf temperatures were lower when light entry at the leaf tip window was prevented through covering the window with reflective tape, relative to leaf temperatures of plants with leaf tip windows covered with transparent tape. This was true when leaf temperatures were measured with an infrared thermometer, but not with a fine-wire thermocouple. Leaf tip windows of Lithops growing in high-rainfall regions of southern Africa were larger than the windows of plants (numerous individuals of 17 species) growing in areas with less rainfall and, thus, more annual insolation. The results of this study indicate that leaf tip windows of desert plants with an underground growth habit can allow entry of supra-optimal levels of radiant energy, thus most likely inhibiting photosynthetic activity. Consequently, the size of the leaf tip windows correlates inversely with habitat solar irradiance, minimising the probability of photoinhibition, while maximising the absorption of irradiance in cloudy, high-rainfall regions. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.

  6. A scan statistic for identifying optimal risk windows in vaccine safety studies using self-controlled case series design.

    PubMed

    Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M

    2013-08-30

    In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight on the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of average person time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while taking into account the dependence of time intervals within an individual and while adjusting for time-varying covariates such as age and seasonality. This approach uses the maximum likelihood ratio test based on fixed-effects models, which have been used, in addition to conditional Poisson models, for analyzing data from the self-controlled case series design. Copyright © 2013 John Wiley & Sons, Ltd.
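
    The sketch below conveys the flavor of scanning candidate risk windows with a likelihood ratio, using a deliberately simplified binomial form of the self-controlled case series likelihood (one event per case, uniform baseline, no age or seasonality adjustment). It is a toy illustration of the idea, not the authors' fixed-effects scan statistic; all data are hypothetical.

    ```python
    import numpy as np

    # Toy data: vaccination day and adverse-event day for six cases observed
    # over a 180-day period (hypothetical numbers).
    obs_days = 180
    vax_day = np.array([30, 45, 60, 20, 90, 75])
    event_day = np.array([33, 50, 100, 24, 92, 130])

    def log_lr(window):
        # Log likelihood ratio for "risk window = [vax, vax + window)" versus
        # no vaccine effect, with events uniform over observation under the null.
        in_risk = (event_day >= vax_day) & (event_day < vax_day + window)
        k, n = int(in_risk.sum()), len(event_day)
        p0 = window / obs_days                      # null probability of landing in window
        p1 = np.clip(k / n, 1e-9, 1 - 1e-9)         # MLE under the alternative
        return k * np.log(p1 / p0) + (n - k) * np.log((1 - p1) / (1 - p0))

    best = max(range(1, 61), key=log_lr)            # scan window lengths of 1..60 days
    print("risk window maximizing the likelihood ratio:", best, "days")
    ```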

  7. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. In addition, benchmark values are updated where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and more complete documentation of the sources and derivation of all values is presented.

  8. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  9. Facility Benchmarking Trends in Tertiary Education - An Australian Case Study.

    ERIC Educational Resources Information Center

    Fisher, Kenn

    2001-01-01

    Presents how Australia's facility managers are responding to the growing impact of tertiary education participation and the increase in educational facility usage. Topics cover strategic asset management and the benchmarking of education physical assets and postsecondary institutions. (GR)

  10. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Systematic search for wide periodic windows and bounds for the set of regular parameters for the quadratic map.

    PubMed

    Galias, Zbigniew

    2017-05-01

    An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
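
    A minimal numerical sketch of how periodic windows can be located: iterate the quadratic map past its transient, then test the orbit of the critical point for a short period. This is the naive floating-point approach, not the authors' rigorous interval-arithmetic method, and the scan range and tolerances are arbitrary choices.

    ```python
    import numpy as np

    def period_of(a, n_transient=10_000, n_test=512, tol=1e-9):
        # Detect a (small) period of the attractor of f(x) = a*x*(1-x), or
        # return 0 if no period <= n_test/2 is found (suggesting chaos).
        x = 0.5                       # critical point; its orbit finds the attractor
        for _ in range(n_transient):
            x = a * x * (1 - x)
        orbit = np.empty(n_test)
        for i in range(n_test):
            x = a * x * (1 - x)
            orbit[i] = x
        for p in range(1, n_test // 2):
            if np.all(np.abs(orbit[p:2 * p] - orbit[:p]) < tol):
                return p
        return 0

    # Coarse scan near the period-3 window; a run of equal nonzero periods
    # marks a periodic window, zeros suggest chaotic parameters.
    for a in np.arange(3.82, 3.86, 0.005):
        print(f"a = {a:.3f}  period = {period_of(a)}")
    ```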

  12. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known Generalized Linear Least Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
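
    To illustrate the GLLSM machinery in miniature, the sketch below performs a standard generalized linear least squares adjustment on toy sensitivities and covariances, and derives a crude per-benchmark "importance" proxy from the gain matrix. That weighting definition is an assumption for illustration, not Subgroup 39's formulation, and all matrices are hypothetical.

    ```python
    import numpy as np

    # Toy GLLS adjustment: 3 cross-section parameters, 4 integral benchmarks.
    # S[i, j] = sensitivity of benchmark i to parameter j (hypothetical values).
    S = np.array([[0.8, 0.1, 0.0],
                  [0.5, 0.4, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.0, 0.2, 0.9]])
    M = np.diag([0.04, 0.02, 0.03])        # prior covariance of parameters
    V = np.diag([0.01, 0.01, 0.02, 0.01])  # experimental covariance of benchmarks
    d = np.array([0.05, -0.02, 0.03, 0.01])  # C/E - 1 discrepancies

    # Standard GLLS update: delta = M S^T (S M S^T + V)^-1 d
    G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # gain matrix
    delta = G @ d
    posterior_M = M - G @ S @ M

    # Crude "importance" proxy per benchmark: the norm of its gain column,
    # i.e., how strongly each experiment pulls the parameter estimate.
    weights = np.linalg.norm(G, axis=0)
    print("adjustment:", delta.round(4))
    print("benchmark weights:", (weights / weights.sum()).round(3))
    ```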

  13. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  14. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    PubMed

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.

  15. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
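
    Since Random Walk with Restart is one of the benchmarked algorithms, here is a minimal sketch of it on a toy network: candidate genes are ranked by their steady-state visiting probability when a walker repeatedly restarts at known disease genes. The adjacency matrix and seed choice are invented for illustration.

    ```python
    import numpy as np

    def random_walk_with_restart(A, seeds, restart=0.3, tol=1e-10, max_iter=1000):
        # Rank nodes by proximity to seed nodes via RWR power iteration.
        W = A / A.sum(axis=0, keepdims=True)          # column-normalized transitions
        p0 = np.zeros(A.shape[0])
        p0[seeds] = 1.0 / len(seeds)                  # restart distribution
        p = p0.copy()
        for _ in range(max_iter):
            p_next = (1 - restart) * W @ p + restart * p0
            converged = np.abs(p_next - p).sum() < tol
            p = p_next
            if converged:
                break
        return p

    # Toy 6-gene network (hypothetical adjacency); seeds = genes 0 and 1.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 1, 0, 0],
                  [1, 1, 0, 0, 1, 0],
                  [0, 1, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)
    scores = random_walk_with_restart(A, seeds=[0, 1])
    print(np.argsort(-scores))   # candidate ranking, seeds first
    ```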

  16. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  17. Ocean Engineering Studies Compiled 1991. Volume 6. Acrylic Windows - Typical Applications in Pressure Housings

    DTIC Science & Technology

    1991-01-01

    either the metallic or plastic composite pressure envelope. The ASME Boiler and Pressure Vessel Code Section 8 provides such design criteria, and the...fabricated of metallic or plastic composite materials. To preclude potential catastrophic failures of windows designed on the basis of inadequate data, in...pressure-resistant acrylic windows (reference 12). Acrylic windows are usually machined from Plexiglas G plate, which is limited in thickness to 4 inches

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in current carbon emissions benchmark setting systems. Industrial carbon emissions standards primarily consider direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, among the first carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsible party in a practical way, through the measurement of complex production and supply chains, and reduces carbon emissions at their original sources. This method is expected to be developed further under uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
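
    As a pocket-sized version of the input-output accounting the paper builds on, the sketch below computes embodied (direct plus upstream indirect) emission intensities through the Leontief inverse. The three-sector technical coefficients and emission factors are invented for illustration.

    ```python
    import numpy as np

    # Toy 3-sector economy. A[i, j] = input from sector i per unit output of sector j.
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.08]])
    f = np.array([100.0, 150.0, 80.0])   # final demand per sector (hypothetical)
    direct = np.array([0.9, 0.4, 0.2])   # direct CO2 per unit output (hypothetical)

    # Leontief inverse: total (direct + indirect) output per unit of final demand.
    L = np.linalg.inv(np.eye(3) - A)
    x = L @ f                            # total output by sector

    # Embodied emission intensity combines direct and upstream indirect emissions.
    embodied_intensity = direct @ L
    print("total output:", x.round(1))
    print("embodied intensity:", embodied_intensity.round(3))
    ```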

  20. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  2. Professional Learning: Trends in State Efforts. Benchmarking State Implementation of College- and Career-Readiness Standards

    ERIC Educational Resources Information Center

    Anderson, Kimberly; Mire, Mary Elizabeth

    2016-01-01

    This report presents a multi-year study of how states are implementing their state college- and career-readiness standards. In this report, the Southern Regional Education Board's (SREB's) Benchmarking State Implementation of College- and Career-Readiness Standards project studied state efforts in 2014-15 and 2015-16 to foster effective…

  3. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows.

    PubMed

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Röösli, Martin; Brink, Mark; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-18

    Noise exposure prediction models for health effect studies normally estimate free-field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available on the difference between indoor and outdoor noise levels and on the factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between indoors and outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios (open, tilted, and closed windows) were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics, were collected. On that basis, linear regression models were established. The median outdoor-indoor sound level differences were 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor-indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows.

  4. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  5. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  6. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...
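
    To show what "applying the benchmark dose concept" involves computationally, here is a small sketch that fits a single quantal dose-response model and solves for the dose giving 10% extra risk. Real BMD software fits a suite of models and reports confidence limits (the BMDL); the model choice and all data here are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    # Toy quantal developmental-toxicity data (hypothetical): dose, n, affected.
    dose = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
    n = np.array([50, 50, 50, 50, 50])
    affected = np.array([1, 3, 8, 20, 41])

    def risk(d, a, b):
        # Two-parameter log-logistic risk model, a simple stand-in for the
        # model suites used in BMD practice.
        return 1.0 / (1.0 + np.exp(-(a + b * np.log(d + 1e-6))))

    params, _ = curve_fit(risk, dose, affected / n, p0=[-3.0, 1.0])
    background = risk(0.0, *params)

    # Benchmark dose: dose at which extra risk over background reaches 10%.
    bmr = 0.10
    extra_risk = lambda d: (risk(d, *params) - background) / (1.0 - background) - bmr
    bmd = brentq(extra_risk, 1e-3, 1e4)
    print(f"BMD10 ~= {bmd:.1f} (same units as dose)")
    ```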

  7. Paediatric International Nursing Study: using person-centred key performance indicators to benchmark children's services.

    PubMed

    McCance, Tanya; Wilson, Val; Kornman, Kelly

    2016-07-01

    The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was evaluating the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and use in generating data that privileges the patient voice. This study extends this research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation used to evaluate the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process, however, this paper reports the quantitative data only, which used survey, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.

  8. Comparison Spatial Pattern of Land Surface Temperature with Mono Window Algorithm and Split Window Algorithm: A Case Study in South Tangerang, Indonesia

    NASA Astrophysics Data System (ADS)

    Bunai, Tasya; Rokhmatuloh; Wibowo, Adi

    2018-05-01

    In this paper, two methods to retrieve the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate retrieval algorithm by calculating the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature produced by both algorithms; the more accurate algorithm is the split window algorithm, with an RMSE of 7.69 °C.
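
    For orientation, a generic split-window retrieval has the form LST = T10 + c1(T10 - T11) + c2(T10 - T11)^2 + c0 + (c3 + c4 w)(1 - e) + (c5 + c6 w)(e10 - e11), with brightness temperatures T10/T11, mean emissivity e, emissivity difference, and column water vapour w. The sketch below evaluates that form on one pixel; the coefficients are illustrative placeholders rather than Rozenstein et al.'s published values, and all inputs are hypothetical.

    ```python
    def split_window_lst(t10, t11, emis10, emis11, cwv):
        # Generic split-window LST (deg C) from Landsat 8 TIRS brightness
        # temperatures (Kelvin). Coefficients c0..c6 are placeholders, not
        # the published values of any specific algorithm.
        c = [-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40]  # hypothetical
        e_mean = 0.5 * (emis10 + emis11)
        e_diff = emis10 - emis11
        lst_k = (t10
                 + c[1] * (t10 - t11) + c[2] * (t10 - t11) ** 2 + c[0]
                 + (c[3] + c[4] * cwv) * (1 - e_mean)
                 + (c[5] + c[6] * cwv) * e_diff)
        return lst_k - 273.15

    # Single-pixel example with hypothetical brightness temperatures,
    # emissivities, and column water vapour (g/cm^2).
    print(round(split_window_lst(t10=301.2, t11=299.8,
                                 emis10=0.971, emis11=0.968, cwv=2.0), 2))
    ```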

  9. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience varying from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts were included in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. These can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressive manner through five exercises, and providing clearly defined targets that ensure a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Systems design study of the Pioneer Venus spacecraft. Appendices to volume 1, sections 3-6 (part 1 of 3). [design of Venus probe windows

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design is described of the Venus probe windows, which are required to measure solar flux, infrared flux, aureole, and cloud particles. Window heating and structural materials for the probe window assemblies are discussed along with the magnetometer. The command lists for science, power and communication requirements, telemetry sign characteristics, mission profile summary, mass properties of payloads, and failure modes are presented.

  11. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  12. A CASE STUDY USING THE EPA'S WATER QUALITY MODELING SYSTEM, THE WINDOWS INTERFACE FOR SIMULATING PLUMES (WISP)

    EPA Science Inventory

    Wisp, the Windows Interface for Simulating Plumes, is designed to be an easy-to-use Windows-platform program for aquatic modeling. Wisp inherits many of its capabilities from its predecessor, the DOS-based PLUMES (Baumgartner, Frick, Roberts, 1994). These capabilities have been ...

  13. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  14. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  15. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
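
    As a concrete companion to the problem statement, the sketch below evaluates the objective for a given job sequence and common due window, using one common cost form (per-job earliness/tardiness plus window location and size penalties) and actual processing times p_j(t) = p_j - a*t. It is a direct cost evaluator on made-up data, not the paper's O(n log n) assignment algorithms, and the exact cost convention is an assumption.

    ```python
    def due_window_cost(jobs, d1, d2, alpha, beta, gamma, delta, a=0.05):
        # Total cost for a single-machine sequence with common due window [d1, d2]
        # and linearly decreasing actual processing times p_j(t) = p_j - a*t.
        # alpha/beta: earliness/tardiness unit costs; gamma/delta: window start
        # and size costs (charged once here; some formulations charge per job).
        t = 0.0
        cost = gamma * d1 + delta * (d2 - d1)
        for p in jobs:
            t += p - a * t                 # completion time with decreasing p_j(t)
            if t < d1:
                cost += alpha * (d1 - t)   # earliness penalty
            elif t > d2:
                cost += beta * (t - d2)    # tardiness penalty
        return cost

    # Hypothetical instance: four jobs, window [6, 9], unit costs chosen arbitrarily.
    print(round(due_window_cost([4, 2, 5, 3], d1=6.0, d2=9.0,
                                alpha=1.0, beta=2.0, gamma=0.5, delta=0.3), 3))
    ```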

  16. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS and Abaqus/Standard. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS and Abaqus/Standard. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS and Abaqus/Standard. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additional studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  17. Impact of coupling techniques of an active middle ear device to the round window membrane for the backward stimulation of the cochlea.

    PubMed

    Gostian, Antoniu-Oreste; Pazen, David; Ortmann, Magdalene; Luers, Jan-Christoffer; Anagiotos, Andreas; Hüttenbrink, Karl-Bernd; Beutner, Dirk

    2015-01-01

    Interposed cartilage and the round window coupler (RWC) increase the efficiency of cochlear stimulation with the floating mass transducer (FMT) of a single active middle ear implant (AMEI) placed against the round window membrane. Treatment of mixed and conductive hearing loss with an AMEI attached to the round window is effective, yet the best placement technique of its FMT for the most efficient stimulation of the cochlea remains to be determined. This is an experimental study on human temporal bones with the FMT placed first against the unaltered round window niche and subsequently against the fully exposed round window membrane, with and without interposed cartilage and the RWC. Cochlear stimulation is measured by the volume velocities of the stapes footplate using laser vibrometry. At the undrilled round window niche, placement of the FMT by itself and with the RWC resulted in similar volume velocities. The response was significantly raised by interposing cartilage into the undrilled round window niche. Complete exposure of the round window membrane allowed for significantly increased volume velocities. Among these, coupling of the FMT with interposed cartilage yielded responses of similar magnitude compared with the RWC but significantly higher compared with the FMT by itself. Good contact with the round window membrane is essential for efficient stimulation of the cochlea. Therefore, interposing cartilage into the undrilled round window niche is a viable option. At the drilled round window membrane, the FMT with interposed cartilage and the FMT attached to the RWC are similarly effective.

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  19. An Examination of Five Benchmarks of Student Engagement for Commuter Students Enrolled at an Urban Public University

    ERIC Educational Resources Information Center

    Galladian, Carol

    2013-01-01

    The purpose of this quantitative ex post facto study was to provide a description of the student engagement of commuter students attending a large urban public university located in a mid-Atlantic state using the five National Survey of Student Engagement (NSSE) benchmarks of student engagement. In addition, the study examined the relationship…

  20. Social Studies: Grades 4, 8, & 11. Content Specifications for Statewide Assessment by Standard.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    This state of Missouri guide to content specifications for social studies assessment is designed to give teachers direction for assessment at the benchmark levels of grades 4, 8, and 11 for each standard that is appropriate for a statewide assessment. The guide includes specifications of what students are expected to know at the benchmark levels…

  1. Lobster eye as a collector for water window microscopy

    NASA Astrophysics Data System (ADS)

    Pina, L.; Maršíková, V.; Inneman, A.; Nawaz, M. F.; Jančárek, A.; Havlíková, R.

    2017-08-01

    Imaging in the EUV, SXR and X-ray spectral bands is of increasing interest. Materials science, biology and hot-plasma research are examples of relevant, fast-developing areas. Applications include spectroscopy, astrophysics, soft X-ray metrology, Water Window microscopy, radiography and tomography. Water Window imaging in particular has not yet fully realized its potential in biological and medical microscopy applications. A theoretical study and design of Lobster Eye (LE) optics as a collector for water window (WW) microscopy, and a comparison with a similarly sized ellipsoidal mirror condenser, are presented.

  2. Parametric study of beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, A. K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the R-theta plane of the lens. A number of empirical correlations were deduced to aid the interested reader in determining the movement, uncrossing, and change in crossing angle for a particular situation.

  3. A parametric study of the beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, Albert K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the r-theta plane of the lens. A number of empirical correlations were deduced to aid the reader in determining the movement, uncrossing, and change in crossing angle for particular situations.
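
    Since this record and the entry above both hinge on refraction at the window surfaces, here is a one-function sketch of the flat-plate case: Snell's law gives the internal angle, and the classic plane-parallel-plate formula gives the lateral beam displacement that shifts the probe volume. The material index and geometry are hypothetical.

    ```python
    import numpy as np

    def beam_shift_flat_window(theta_deg, thickness, n_window, n_ambient=1.0):
        # Lateral displacement of a beam crossing a flat window, in the same
        # units as `thickness` (plane-parallel-plate formula).
        theta = np.radians(theta_deg)
        theta_r = np.arcsin(n_ambient * np.sin(theta) / n_window)  # Snell's law
        return thickness * np.sin(theta - theta_r) / np.cos(theta_r)

    # A 10 mm fused-silica-like window (n ~ 1.46), beam incident at 5 degrees;
    # each beam of an LDV pair shifts by this amount, moving the crossing point.
    print(f"{beam_shift_flat_window(5.0, 10.0, 1.46):.3f} mm")
    ```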

  4. Information Literacy and Office Tool Competencies: A Benchmark Study

    ERIC Educational Resources Information Center

    Heinrichs, John H.; Lim, Jeen-Su

    2010-01-01

    Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…

  5. Design and comparison of laser windows for high-power lasers

    NASA Astrophysics Data System (ADS)

    Niu, Yanxiong; Liu, Wenwen; Liu, Haixia; Wang, Caili; Niu, Haisha; Man, Da

    2014-11-01

    High-power laser systems are becoming more widely used in industry and military applications. It is necessary to develop high-power laser systems that can operate over long periods of time without appreciable degradation in performance. When a high-energy laser beam transmits through a laser window, permanent damage can be caused to the window by energy absorption in the window material. Therefore, when designing a high-power laser system, a suitable window material must be selected and the laser damage threshold of the window must be known. In this paper, a thermal analysis model of a high-power laser window is established, and the relationship between the laser intensity and the thermal-stress field distribution is studied by deriving formulas using the integral-transform method. The influence of window radius, thickness and laser intensity on the temperature and stress field distributions is analyzed. The performance of K9 glass and fused silica glass is then compared, and the laser-induced damage mechanism is analyzed. Finally, the damage thresholds of the laser windows are calculated. The results show that, compared with K9 glass, fused silica glass has a higher damage threshold due to its better thermodynamic properties. The presented theoretical analysis and simulation results are helpful for the design and selection of high-power laser windows.
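
    The paper's integral-transform solution is beyond a snippet, but the zeroth-order physics is easy to sketch: if conduction and radiation losses are ignored, absorbed laser power heats the window adiabatically at the rate dT/dt ~ alpha*I/(rho*c). The property values below are rough, fused-silica-like placeholders, so this is an upper-bound back-of-the-envelope estimate, not the paper's model.

    ```python
    import numpy as np

    # Crude adiabatic estimate of temperature rise in a laser window:
    # dT = alpha * I * t / (rho * c), ignoring conduction and radiation.
    alpha = 2e-3   # absorption coefficient, 1/cm (hypothetical)
    I = 5e3        # laser intensity, W/cm^2 (hypothetical)
    rho = 2.2      # density, g/cm^3 (fused-silica-like placeholder)
    c = 0.75       # specific heat, J/(g*K) (placeholder)

    for t in np.linspace(0.0, 10.0, 6):   # exposure times in seconds
        dT = alpha * I * t / (rho * c)
        print(f"t = {t:4.1f} s   dT ~= {dT:6.1f} K")
    ```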

  6. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
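
    As a rough illustration of the screening logic described above (the benchmark names follow the report, but the numeric values below are hypothetical):

    ```python
    # Hypothetical screening sketch: flag a chemical as a contaminant of
    # concern when its ambient concentration exceeds benchmarks.
    BENCHMARKS_UG_L = {
        "acute NAWQC": 120.0,
        "chronic NAWQC": 12.0,
        "secondary acute value (SAV)": 80.0,
        "secondary chronic value (SCV)": 9.0,
        "lowest chronic value, fish": 15.0,
    }

    def screen(ambient_ug_l):
        exceeded = [name for name, bm in BENCHMARKS_UG_L.items() if ambient_ug_l > bm]
        # NAWQC are ARARs: exceeding either one forces selection as a contaminant of concern
        must_select = any("NAWQC" in name for name in exceeded)
        return exceeded, must_select

    exceeded, must = screen(14.0)
    print(exceeded, "| must be a contaminant of concern:", must)
    ```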

  7. Intelligent windows using new thermotropic layers with long-term stability

    NASA Astrophysics Data System (ADS)

    Watanabe, Haruo

    1995-08-01

    This paper concerns autonomously responsive light-adjusting windows (intelligent windows), a class of smart windows that adjust light transmission in response to environmental energy. Specifically, it describes a thermotropic window panel that laminates and seals a new type of highly viscous aqueous polymer gel. Conventional thermotropic window panels have never been put to practical use because a uniform, reversible change between the colorless transparent state (water-clear) and the translucent scattering state (paper-white) was not possible: the change involved phase separation and generated non-uniformity. After fundamental studies of hydrophobic bonding, the author solved this problem by developing an aqueous polymer gel with an amphipathic molecule as a third component, in addition to water and a water-soluble polymer with hydrophobic radicals, based on the molecular spacer concept. The author also established the peripheral technologies and succeeded in experimentally fabricating a panel-type 'Affinity's Intelligent Window (AIW)' that has reached the level of practical use.

  8. Prevention of root caries with dentin adhesives.

    PubMed

    Grogono, A L; Mayo, J A

    1994-04-01

    This in vitro investigation determined the feasibility of using dentin adhesives to protect root surfaces against caries. The roots of 22 recently extracted human teeth were all painted with a protective lacquer leaving two unprotected small windows. On each specimen, one window (control) was left untreated and the other window (experimental) was treated using a dentin adhesive (Scotchbond Multi-Purpose). The roots were then immersed in an in vitro acetate/calcium/phosphate demineralization model at pH 4.3. After 70 days, the samples were removed and sectioned through the windows. The undecalcified ground sections were examined under transmitted and polarized light. Lesions characteristic of natural root caries were seen in the untreated control windows. No such lesions were apparent in the experimental windows. The results of this preliminary study suggest that dentin adhesives may provide protection against root caries.

  9. Exclusion of particulate allergens by window air conditioners.

    PubMed

    Solomon, W R; Burge, H A; Boise, J R

    1980-04-01

    Effects of window air-conditioner operation on intramural particle levels were assessed in the bedrooms of 20 homes and in 10 outpatient clinic examining rooms during late summer periods. At each site, pollen and spore collections in the mechanically cooled room and a normally ventilated counterpart were compared using volumetric impactors. Substantially lower particle recoveries (median = 16 particles/m3) were found in air-conditioned rooms than in those with open windows alone (median = 253 particles/m3). Furthermore, substantial exclusion of small (e.g., Ganoderma spores) as well as large (ragweed pollen) aerosol components was achieved by window units. Control studies within normally ventilated rooms and outside their open windows showed a marked but variable inward flux of particles. Window units appear to substantially reduce indoor allergen levels by maintaining the isolation of enclosed spaces from particle-bearing outdoor air.

  10. Optical Evaluation of DMDs with UV-Grade FS, Sapphire, MgF2 Windows and Reflectance of Bare Devices

    NASA Technical Reports Server (NTRS)

    Quijada, Manuel A.; Heap, Sara; Travinsky, Anton; Vorobiev, Dmitry; Ninkov, Zoran; Raisanen, Alan; Roberto, Massimo

    2016-01-01

    Digital Micro-mirror Devices (DMDs) have been identified as an alternative to microshutter arrays for space-based multi-object spectrometers (MOS); specifically, a MOS using a DMD as a reprogrammable slit mask is at the heart of the proposed Galactic Evolution Spectroscopic Explorer (GESE). Unfortunately, the protective borosilicate windows limit the use of DMDs in the UV and IR regimes, where the glass has insufficient throughput. In this work, we present our efforts to replace standard DMD windows with custom windows made from UV-grade fused silica, Low Absorption Optical Sapphire (LAOS), and magnesium fluoride. We present reflectance measurements of the antireflection-coated windows and a reflectance study of the DMD active area (window removed). Furthermore, we investigated the long-term stability of the DMD reflectance and the recoating of devices with fresh Al coatings.

  11. Evaluation of Building Energy Saving Through the Development of Venetian Blinds' Optimal Control Algorithm According to the Orientation and Window-to-Wall Ratio

    NASA Astrophysics Data System (ADS)

    Kwon, Hyuk Ju; Yeon, Sang Hun; Lee, Keum Ho; Lee, Kwang Ho

    2018-02-01

    As studies on building energy saving continue, approaches that use renewable energy sources instead of fossil fuels are needed. In building science, solar energy has received particular attention; to use it effectively, the solar radiation entering the indoor space must be admitted or blocked appropriately. Blinds are a typical solar-control device capable of managing the indoor thermal and luminous environments. However, slat-type blinds are usually operated manually, which has a negative effect on building energy saving, and studies on their automatic control have therefore been carried out over the last couple of decades. This study aims to provide preliminary data for optimal control research by controlling the slat angle of slat-type blinds while comprehensively considering various input variables; window-to-wall ratio and orientation were selected as the inputs. The optimal control algorithm was found to differ with window-to-wall ratio and window orientation. By applying the developed algorithms in simulations and comparing the building energy saving performance under each condition, energy savings of up to 20.7% were obtained in the cooling period and up to 12.3% in the heating period. The energy saving effect grew with increasing window-to-wall ratio for a given orientation, and the effect of window-to-wall ratio was larger in the cooling period than in the heating period.
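
    A minimal geometric sketch of one ingredient of such an algorithm (an assumption of this sketch, not the algorithm developed in the study): for flat slats of width w at vertical pitch s and a solar profile angle phi, the direct beam is just blocked when w*sin(beta + phi) = s*cos(phi), which gives the cut-off slat angle beta.

    ```python
    import math

    def cutoff_slat_angle(pitch, width, profile_angle_deg):
        """Smallest slat tilt (degrees from horizontal) that just blocks the
        direct beam, from the 2D condition w*sin(beta + phi) = s*cos(phi)."""
        phi = math.radians(profile_angle_deg)
        arg = (pitch / width) * math.cos(phi)
        if arg > 1.0:
            raise ValueError("slats narrower than pitch: no cut-off angle exists")
        beta = math.degrees(math.asin(arg)) - profile_angle_deg
        return max(beta, 0.0)      # horizontal slats already suffice if beta < 0

    # 80 mm slats at 72 mm pitch over a range of solar profile angles
    for phi in (10, 30, 50, 70):
        print(f"profile angle {phi:2d} deg -> tilt {cutoff_slat_angle(0.072, 0.080, phi):5.1f} deg")
    ```

    An actual controller would trade such a shading constraint off against daylighting and heating demand, which is what the study's simulations optimize.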

  12. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  13. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  14. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  15. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested. Methods Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau2) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results The VAP-IP benchmark derived here is 22.1% (95% confidence interval (95% CI) 19.2 to 25.5; tau2 0.34) whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods is 35.7 (29.7 to 41.8; tau2 0.63) versus 20.4 (17.2 to 24.0; tau2 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau2 0.59) and 17.1 (14.2 to 20.3; tau2 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
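
    A sketch of the pooling step, assuming standard DerSimonian-Laird random-effects meta-analysis of logit proportions (the counts below are hypothetical, not the review's 45 benchmark groups):

    ```python
    import numpy as np

    def pool_logit_proportions(events, totals):
        """DerSimonian-Laird random-effects pooling of logit event proportions;
        returns the pooled proportion, its 95% CI, and tau2."""
        p = events / totals
        y = np.log(p / (1 - p))                        # logit incidence proportions
        v = 1 / (totals * p) + 1 / (totals * (1 - p))  # large-sample variance of a logit
        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1 / (v + tau2)                        # random-effects weights
        mu = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        expit = lambda x: 1 / (1 + np.exp(-x))         # back-transform to a proportion
        return expit(mu), (expit(mu - 1.96 * se), expit(mu + 1.96 * se)), tau2

    events = np.array([22, 35, 18, 40, 27])
    totals = np.array([100, 120, 90, 150, 110])
    print(pool_logit_proportions(events, totals))
    ```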

  16. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    PubMed

    Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue

    2015-01-01

    As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.
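
    A generic skeleton of the metaheuristic half of such an approach: an illustrative simulated-annealing loop on a toy instance with a lexicographic objective (fleet size first, then an idle-time stand-in for distance). This is a sketch, not the authors' exact two-stage framework or local search.

    ```python
    import math, random

    def anneal(initial, neighbour, cost, t0=1.0, cooling=0.999, iters=20000):
        """Simulated annealing with cost = (num_buses, distance); the pair is
        scalarized so that bus count dominates distance."""
        cur, cur_c = initial, cost(initial)
        best, best_c = cur, cur_c
        t = t0
        for _ in range(iters):
            cand = neighbour(cur)
            cand_c = cost(cand)
            delta = (cand_c[0] - cur_c[0]) * 1e6 + (cand_c[1] - cur_c[1])
            if delta <= 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
                cur, cur_c = cand, cand_c
                if cand_c < best_c:
                    best, best_c = cand, cand_c
            t *= cooling
        return best, best_c

    # Toy instance: trips as (start, end) times; a bus may serve non-overlapping trips.
    random.seed(0)
    trips = [(0, 3), (1, 4), (4, 7), (5, 8), (8, 11), (9, 12)]

    def cost(assign):                       # assign[i] = bus serving trip i
        buses = {}
        for trip, b in zip(trips, assign):
            buses.setdefault(b, []).append(trip)
        for ts in buses.values():
            ts.sort()
            if any(a[1] > b[0] for a, b in zip(ts, ts[1:])):
                return (len(trips) + 1, 0.0)   # infeasible: worse than any fleet
        idle = sum(b[0] - a[1] for ts in buses.values() for a, b in zip(ts, ts[1:]))
        return (len(buses), float(idle))

    def neighbour(assign):
        a = list(assign)
        a[random.randrange(len(a))] = random.randrange(len(trips))
        return a

    print(anneal(list(range(len(trips))), neighbour, cost)[1])   # e.g. (2, 4.0)
    ```

    In a realistic instance the neighbour move would reassign a trip between buses only when school time windows and deadhead travel remain feasible, with local search applied to accepted moves.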

  17. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  18. Novel readout method for molecular diagnostic assays based on optical measurements of magnetic nanobead dynamics.

    PubMed

    Donolato, Marco; Antunes, Paula; Bejhed, Rebecca S; Zardán Gómez de la Torre, Teresa; Østerberg, Frederik W; Strömberg, Mattias; Nilsson, Mats; Strømme, Maria; Svedlindh, Peter; Hansen, Mikkel F; Vavassori, Paolo

    2015-02-03

    We demonstrate detection of DNA coils formed from a Vibrio cholerae DNA target at picomolar concentrations using a novel optomagnetic approach exploiting the dynamic behavior and optical anisotropy of magnetic nanobead (MNB) assemblies. We establish that the complex second harmonic optical transmission spectra of MNB suspensions measured upon application of a weak uniaxial AC magnetic field correlate well with the rotation dynamics of the individual MNBs. Adding a target analyte to the solution leads to the formation of permanent MNB clusters and thus to the suppression of the dynamic MNB behavior. We prove that the optical transmission spectra are highly sensitive to the formation of permanent MNB clusters and, thereby, to the target analyte concentration. As a specific clinically relevant diagnostic case, we detect DNA coils formed via padlock probe recognition and isothermal rolling circle amplification and benchmark the method against commercial equipment. The results demonstrate the fast optomagnetic readout of rolling circle products from bacterial DNA utilizing the dynamic properties of MNBs in a miniaturized and low-cost platform requiring only a transparent window in the chip.

  19. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-07

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  20. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  1. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
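
    A sketch of the local-structure step under the formulation described above (an illustrative re-implementation, not the authors' code): each central macro-pixel is regressed on its eight neighbouring macro-pixels with ridge regression, and the eight coefficient maps form the "structure images".

    ```python
    import numpy as np

    def idls_coefficients(image, patch=3, lam=0.1):
        """Ridge-regression coefficients of the central patch against its
        8 neighbouring patches, per pixel (borders skipped for simplicity)."""
        H, W = image.shape
        r = patch // 2
        offs = [(dy, dx) for dy in (-patch, 0, patch) for dx in (-patch, 0, patch)
                if (dy, dx) != (0, 0)]
        coeffs = np.zeros((H, W, len(offs)))
        lo = patch + r                             # margin so all patches fit
        for y in range(lo, H - lo):
            for x in range(lo, W - lo):
                c = image[y - r:y + r + 1, x - r:x + r + 1].ravel()
                A = np.stack([image[y + dy - r:y + dy + r + 1,
                                    x + dx - r:x + dx + r + 1].ravel()
                              for dy, dx in offs], axis=1)
                # beta = (A'A + lam*I)^-1 A'c  (ridge regression)
                beta = np.linalg.solve(A.T @ A + lam * np.eye(len(offs)), A.T @ c)
                coeffs[y, x] = beta
        return coeffs

    img = np.random.default_rng(0).random((16, 16))
    print(idls_coefficients(img).shape)    # (16, 16, 8): eight structure images
    ```

    In the full method each coefficient map would be down-sampled, concatenated into a super-vector, and passed to Fisher linear discriminant analysis.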

  2. The regrets of procrastination in climate policy

    NASA Astrophysics Data System (ADS)

    Keller, Klaus; Robinson, Alexander; Bradford, David F.; Oppenheimer, Michael

    2007-04-01

    Anthropogenic carbon dioxide (CO2) emissions are projected to impose economic costs due to the associated climate change impacts. Climate change impacts can be reduced by abating CO2 emissions. What would be an economically optimal investment in abating CO2 emissions? Economic models typically suggest that reducing CO2 emissions by roughly ten to twenty per cent relative to business-as-usual would be an economically optimal strategy. The currently implemented CO2 abatement of a few per cent falls short of this benchmark. Hence, the global community may be procrastinating in implementing an economically optimal strategy. Here we use a simple economic model to estimate the regrets of this procrastination—the economic costs due to the suboptimal strategy choice. The regrets of procrastination can range from billions to trillions of US dollars. The regrets increase with increasing procrastination period and with decreasing limits on global mean temperature increase. Extended procrastination may close the window of opportunity to avoid crossing temperature limits interpreted by some as 'dangerous anthropogenic interference with the climate system' in the sense of Article 2 of the United Nations Framework Convention on Climate Change.

  3. Authentication Based on Pole-zero Models of Signature Velocity

    PubMed Central

    Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad

    2013-01-01

    With the increase of communication and financial transactions through the internet, online signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Fast and precise algorithms for signature verification are therefore very attractive. The goal of this paper is the modeling of the velocity signal, whose pattern and properties are stable for a given person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen window, and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database and the SVC2004 and Sabanci University (SUSIG) benchmark databases. Experimental results based on the Persian, SVC2004, and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62%, and 3.91% on skilled forgeries, respectively. PMID:24696797
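
    As a hedged sketch of one of the named classifiers, the snippet below scores a query signature with a Parzen-window (Gaussian-kernel) density over a writer's genuine feature vectors; the feature vectors and acceptance threshold are hypothetical stand-ins for the paper's DCT/pole-zero features.

    ```python
    import numpy as np

    def parzen_score(train_feats, query_feat, h=0.5):
        """Mean Gaussian-kernel density of the query under the genuine set."""
        d = train_feats - query_feat                   # (n, dim) differences
        return np.exp(-np.sum(d * d, axis=1) / (2 * h * h)).mean()

    def verify(genuine_feats, query_feat, threshold=1e-3):
        return parzen_score(genuine_feats, query_feat) >= threshold

    rng = np.random.default_rng(0)
    genuine = rng.normal(0, 1, size=(20, 10))          # 20 reference signatures
    print(verify(genuine, rng.normal(0, 1, size=10)))  # near the genuine cluster
    print(verify(genuine, rng.normal(4, 1, size=10)))  # far away: rejected
    ```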

  4. West Village Student Housing Phase I: Apartment Monitoring and Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    German, A.; Bell, C.; Dakin, B.

    Building America team Alliance for Residential Building Innovation (ARBI) worked with the University of California, Davis (UC Davis) and the developer partner West Village Community Partnership (WVCP) to evaluate performance of 192 student apartments completed in September 2011 as part of Phase I of the multi-purpose West Village project. West Village is the largest planned zero net energy community in the United States. The campus neighborhood is designed to enable faculty, staff, and students to affordably live near campus, take advantage of environmentally friendly transportation options, and participate fully in campus life. The aggressive energy efficiency measures that are incorporated in the design contribute to source energy reductions of 37% over the B10 Benchmark. The energy efficiency measures that are incorporated into these apartments include increased wall and attic insulation, high performance windows, high efficiency heat pumps for heating and cooling, central heat pump water heaters (HPWHs), 100% high efficacy lighting, and ENERGY STAR major appliances. Results discuss how measured energy use compares to modeling estimates over a 10-month monitoring period and include a cost effectiveness evaluation.

  5. West Village Student Housing Phase I: Apartment Monitoring and Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    German, A.; Bell, C.; Dakin, B.

    Building America team Alliance for Residential Building Innovation (ARBI) worked with the University of California, Davis and the developer partner West Village Community Partnership (WVCP) to evaluate performance on 192 student apartments completed in September 2011 as part of Phase I of the multi-purpose West Village project. West Village is the largest planned zero net energy community in the United States. The campus neighborhood is designed to enable faculty, staff, and students to affordably live near campus, take advantage of environmentally friendly transportation options, and participate fully in campus life. The aggressive energy efficiency measures that are incorporated in the design contribute to source energy reductions of 37% over the B10 Benchmark. These measures include increased wall and attic insulation, high performance windows, high efficiency heat pumps for heating and cooling, central heat pump water heaters (HPWHs), 100% high efficacy lighting, and ENERGY STAR major appliances. The report discusses how measured energy use compares to modeling estimates over a 10-month monitoring period and includes a cost effectiveness evaluation.

  6. Radiative Cooling: Principles, Progress, and Potentials

    PubMed Central

    Hossain, Md. Muntasir

    2016-01-01

    The recent progress on radiative cooling reveals its potential for applications in highly efficient passive cooling. This approach utilizes the maximized emission of infrared thermal radiation through the atmospheric window for releasing heat and minimized absorption of incoming atmospheric radiation. These simultaneous processes can lead to a device temperature substantially below the ambient temperature. Although the application of radiative cooling for nighttime cooling was demonstrated a few decades ago, significant cooling under direct sunlight has been achieved only recently, indicating its potential as a practical passive cooler during the day. In this article, the basic principles of radiative cooling and its performance characteristics for nonradiative contributions, solar radiation, and atmospheric conditions are discussed. The recent advancements over the traditional approaches and their material and structural characteristics are outlined. The key characteristics of the thermal radiators and solar reflectors of the current state-of-the-art radiative coolers are evaluated, and their benchmark peak cooling abilities are noted. The scope for further improvement of radiative cooling efficiency through optimized device characteristics is also estimated theoretically. PMID:27812478
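
    A minimal gray-body sketch of the balance described above; the window fraction, sky emissivity, solar absorptance, and non-radiative coefficient are assumed round numbers, not the article's benchmark values.

    ```python
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def net_cooling(T_dev, T_amb, f_window=0.30, eps_sky=0.7,
                    solar_abs=0.03, solar=1000.0, h_nonrad=6.0):
        """Net cooling power (W/m^2): emission through the atmospheric window
        minus atmospheric, solar, and non-radiative heat gains."""
        p_emit = f_window * SIGMA * T_dev**4             # emitted through the window
        p_atm  = f_window * eps_sky * SIGMA * T_amb**4   # downwelling atmospheric load
        p_sun  = solar_abs * solar                       # absorbed sunlight
        p_gain = h_nonrad * (T_amb - T_dev)              # conduction/convection
        return p_emit - p_atm - p_sun - p_gain

    print(f"{net_cooling(300.0, 300.0):.1f} W/m^2 at ambient temperature")
    ```

    Setting the net power to zero and solving for T_dev gives the stagnation temperature, which the nonradiative coefficient strongly limits.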

  7. Partial stapled hemorrhoidopexy: a minimally invasive technique for hemorrhoids.

    PubMed

    Lin, Hong-Cheng; He, Qiu-Lan; Ren, Dong-Lin; Peng, Hui; Xie, Shang-Kui; Su, Dan; Wang, Xiao-Xue

    2012-09-01

    This study was designed to assess the safety, efficacy, and postoperative outcomes of partial stapled hemorrhoidopexy (PSH). A prospective study was conducted between February and March 2010. PSH was performed with single-window anoscopes for single isolated hemorrhoids, bi-window anoscopes for two isolated hemorrhoids, and tri-window anoscopes for three isolated hemorrhoids or circumferential hemorrhoids. The data pertaining to demographics, preoperative characteristics and postoperative outcomes were collected and analyzed. Forty-four eligible patients underwent PSH. Single-window anoscopes were used in 2 patients, and bi- and tri-window anoscopes in 6 and 36 patients. The blood loss in patients with single-window, bi-window, and tri-window anoscopes was 6.0 ml (range 5.0-7.0 ml), 5.0 ml (range 5.0-6.5 ml), and 5.0 ml (4.5-14.5 ml) (P = 0.332). The mean postoperative visual analog scale score for pain was 3 (range, 1-4), 2 (range 1-4), 3 (range 2-6), 1 (range 0-3), 1 (range 0-2) and 2 (range 2-4) at 12 h, days 1, 2, 3, and 7, and at first defecation. The rate of urgency was 9.1%. No patients developed anal incontinence or stenosis. The 1-year recurrence rate of prolapsing hemorrhoids was 2.3%. Partial stapled hemorrhoidopexy appears to be a safe and effective technique for grade III-IV hemorrhoids. Encouragingly, PSH is associated with mild postoperative pain, few urgency episodes, and no stenosis or anal incontinence.

  8. climwin: An R Toolbox for Climate Window Analysis.

    PubMed

    Bailey, Liam D; van de Pol, Martijn

    2016-01-01

    When studying the impacts of climate change, there is a tendency to select climate data from a small set of arbitrary time periods or climate windows (e.g., spring temperature). However, these arbitrary windows may not encompass the strongest periods of climatic sensitivity and may lead to erroneous biological interpretations. Therefore, there is a need to consider a wider range of climate windows to better predict the impacts of future climate change. We introduce the R package climwin that provides a number of methods to test the effect of different climate windows on a chosen response variable and compare these windows to identify potential climate signals. climwin extracts the relevant data for each possible climate window and uses this data to fit a statistical model, the structure of which is chosen by the user. Models are then compared using an information criteria approach. This allows users to determine how well each window explains variation in the response variable and compare model support between windows. climwin also contains methods to detect type I and II errors, which are often a problem with this type of exploratory analysis. This article presents the statistical framework and technical details behind the climwin package and demonstrates the applicability of the method with a number of worked examples.
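
    A Python analogue of the basic idea (this is not the R package's API): fit the same linear model for every candidate window and rank the windows by AIC.

    ```python
    import numpy as np

    def best_window(daily_temp, response, ref_day, max_lag=60):
        """daily_temp: (years, days) array; response: one value per year.
        Returns (AIC, window_open, window_close) in days before ref_day."""
        n = len(response)
        results = []
        for opened in range(1, max_lag + 1):
            for closed in range(opened):
                x = daily_temp[:, ref_day - opened:ref_day - closed].mean(axis=1)
                X = np.column_stack([np.ones(n), x])
                beta, *_ = np.linalg.lstsq(X, response, rcond=None)
                rss = np.sum((response - X @ beta) ** 2)
                aic = n * np.log(rss / n) + 2 * 2          # 2 fitted parameters
                results.append((aic, opened, closed))
        return min(results)                                # lowest AIC wins

    # Simulated data: the true signal is mean temperature 30-60 days before ref_day
    rng = np.random.default_rng(1)
    temps = rng.normal(10, 3, size=(25, 200))
    resp = temps[:, 120:150].mean(axis=1) + rng.normal(0, 0.5, 25)
    print(best_window(temps, resp, ref_day=180))           # ~ (aic, 60, 30)
    ```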

  9. Cross-Evaluation of Degree Programmes in Higher Education

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2010-01-01

    Purpose: This study seeks to develop and describe the benchmarking approach of enhancement-led evaluation in higher education and to present a cross-evaluation process for degree programmes. Design/methodology/approach: The benchmarking approach produces useful information for the development of degree programmes based on self-evaluation,…

  10. Establishing Language Benchmarks for Children with Typically Developing Language and Children with Language Impairment

    ERIC Educational Resources Information Center

    Schmitt, Mary Beth; Logan, Jessica A. R.; Tambyraja, Sherine R.; Farquharson, Kelly; Justice, Laura M.

    2017-01-01

    Purpose: Practitioners, researchers, and policymakers (i.e., stakeholders) have vested interests in children's language growth yet currently do not have empirically driven methods for measuring such outcomes. The present study established language benchmarks for children with typically developing language (TDL) and children with language…

  11. Benchmarking Academic Libraries: An Australian Case Study.

    ERIC Educational Resources Information Center

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  12. Benchmarking 2009: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  13. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
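
    A sketch of the nonparametric step under stated assumptions: scikit-learn's isotonic regression supplies the monotone dose-response fit, which is then inverted at a 10% extra-risk benchmark response; the quantal data are hypothetical and the bootstrap confidence limits are omitted.

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    doses    = np.array([0.0, 12.5, 25.0, 50.0, 100.0, 200.0])
    affected = np.array([2, 3, 6, 9, 14, 18])
    n        = np.array([20, 20, 20, 20, 20, 20])

    iso = IsotonicRegression(increasing=True)
    risk = iso.fit_transform(doses, affected / n, sample_weight=n)  # monotone risks

    p0 = risk[0]
    target = p0 + 0.10 * (1 - p0)        # 10% extra risk over background
    bmd = np.interp(target, risk, doses) # invert the monotone curve
    print(f"BMD10 ~ {bmd:.1f} dose units")
    ```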

  14. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  15. Image-guided adaptive gating of lung cancer radiotherapy: a computer simulation study

    NASA Astrophysics Data System (ADS)

    Aristophanous, Michalis; Rottmann, Joerg; Park, Sang-June; Nishioka, Seiko; Shirato, Hiroki; Berbeco, Ross I.

    2010-08-01

    The purpose of this study is to investigate the effect that image-guided adaptation of the gating window during treatment could have on the residual tumor motion, by simulating different gated radiotherapy techniques. There are three separate components of this simulation: (1) the 'Hokkaido Data', which are previously measured 3D data of lung tumor motion tracks and the corresponding 1D respiratory signals obtained during the entire ungated radiotherapy treatments of eight patients, (2) the respiratory gating protocol at our institution and the imaging performed under that protocol and (3) the actual simulation in which the Hokkaido Data are used to select tumor position information that could have been collected based on the imaging performed under our gating protocol. We simulated treatments with a fixed gating window and a gating window that is updated during treatment. The patient data were divided into different fractions, each with continuous acquisitions longer than 2 min. In accordance to the imaging performed under our gating protocol, we assume that we have tumor position information for the first 15 s of treatment, obtained from kV fluoroscopy, and for the rest of the fractions the tumor position is only available during the beam-on time from MV imaging. The gating window was set according to the information obtained from the first 15 s such that the residual motion was less than 3 mm. For the fixed gating window technique the gate remained the same for the entire treatment, while for the adaptive technique the range of the tumor motion during beam-on time was measured and used to adapt the gating window to keep the residual motion below 3 mm. The algorithm used to adapt the gating window is described. The residual tumor motion inside the gating window was reduced on average by 24% for the patients with regular breathing patterns and the difference was statistically significant (p-value = 0.01). The magnitude of the residual tumor motion depended on the regularity of the breathing pattern suggesting that image-guided adaptive gating should be combined with breath coaching. The adaptive gating window technique was able to track the exhale position of the breathing cycle quite successfully. Out of a total of 53 fractions the duty cycle was greater than 20% for 42 fractions for the fixed gating window technique and for 39 fractions for the adaptive gating window technique. The results of this study suggest that real-time updating of the gating window can result in reliably low residual tumor motion and therefore can facilitate safe margin reduction.
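
    A hedged sketch of the adaptive idea only (illustrative logic, not the authors' algorithm): re-centre the gate on a drift-tracked exhale baseline estimated from recent imaging, keeping the gate width within the 3 mm residual-motion budget.

    ```python
    import numpy as np

    def adapt_gate(exhale_positions, residual_budget=3.0):
        """Centre a gate of width residual_budget (mm) on the recent
        median exhale position."""
        baseline = np.median(exhale_positions[-10:])
        return (baseline - residual_budget / 2, baseline + residual_budget / 2)

    def beam_on(tumour_pos, gate):
        return gate[0] <= tumour_pos <= gate[1]

    # Hypothetical exhale trace (mm) with slow baseline drift
    rng = np.random.default_rng(2)
    exhale = 0.05 * np.arange(100) + rng.normal(0, 0.4, 100)

    gate = adapt_gate(exhale[:15])          # set from the first 15 s (kV imaging)
    for t in range(15, 100, 10):            # update from beam-on (MV) imaging
        gate = adapt_gate(exhale[:t])
    print("final gate (mm):", tuple(round(g, 2) for g in gate))
    ```

    A fixed gate set from the first 15 s would drift out from under this trace, which is the failure mode the adaptive update corrects.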

  16. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  17. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows

    PubMed Central

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-01

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios—of open, tilted, and closed windows—were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor–indoor sound level differences were of 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor–indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows. PMID:29346318

  18. Windows of achievement for development milestones of Sri Lankan infants and toddlers: estimation through statistical modelling.

    PubMed

    Thalagala, N

    2015-11-01

    The normative age ranges during which cohorts of children achieve milestones are called windows of achievement. The patterns of these windows of achievement are known to be both genetically and environmentally dependent. This study aimed to determine the windows of achievement for motor, social emotional, language and cognitive development milestones for infants and toddlers in Sri Lanka. A set of 293 milestones identified through a literature review were subjected to content validation using parent and expert reviews, which resulted in the selection of a revised set of 277 milestones. Thereafter, a sample of 1036 children from 2 months to 30 months was examined to see whether or not they had attained the selected milestones. Percentile ages for attaining each milestone were determined using a rearranged closed form equation related to the logistic regression. The parameters required for the calculations were derived through logistic regression of milestone achievement status against the ages of the children. These percentile ages were used to define the respective windows of achievement. A set of 178 robust indicators that represent motor, socio emotional, language and cognitive development skills, and their windows of achievement relevant to 2 to 24 months of age, were determined. Windows of achievement for six gross motor milestones determined in the study were shown to closely overlap a similar set of windows of achievement published by the World Health Organization, indicating the validity of some findings. A methodology combining content validation based on qualitative techniques and age validation based on regression modelling was found to be effective for determining age percentiles for realizing milestones and determining the respective windows of achievement. © 2015 John Wiley & Sons Ltd.
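
    A sketch of the percentile-age calculation, assuming the "rearranged closed form equation" is the inversion of a fitted logistic model, logit(p) = b0 + b1*age; the attainment data below are simulated, not the study's sample.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    ages = rng.uniform(2, 30, 1000)                    # ages in months
    p_true = 1 / (1 + np.exp(-(ages - 12.0) / 1.5))    # hypothetical milestone
    attained = rng.random(1000) < p_true

    model = LogisticRegression(C=1e6).fit(ages.reshape(-1, 1), attained)
    b0, b1 = model.intercept_[0], model.coef_[0, 0]

    def age_at_percentile(p):
        """Invert logit(p) = b0 + b1 * age for the age at attainment percentile p."""
        return (np.log(p / (1 - p)) - b0) / b1

    window = (age_at_percentile(0.03), age_at_percentile(0.97))
    print("window of achievement: %.1f to %.1f months" % window)
    ```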

  19. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    ...be carried out. Based on results of the study, an implementation of all, or part, of the system described in this benchmark technical description...validate interface and timing constraints. The ISA level of modeling defines the limit of detail expected in the VHDL virtual prototype. It does not...develop a set of candidate architectures and perform an architecture trade-off study. Candidate processor implementations must then be examined for...

  20. A Benchmark Study of Large Contract Supplier Monitoring Within DOD and Private Industry

    DTIC Science & Technology

    1994-03-01

    ...Long Term Supplier Relationships...Global Sourcing...Refocusing on Customer Quality...monitoring and recognition, reduced number of suppliers, global sourcing, and long term contractor relationships. These initiatives were then compared to DCMC...on customer quality.

  1. Middle Level Teachers' Perceptions of Interim Reading Assessments: An Exploratory Study of Data-Based Decision Making

    ERIC Educational Resources Information Center

    Reed, Deborah K.

    2015-01-01

    This study explored the data-based decision making of 12 teachers in grades 6-8 who were asked about their perceptions and use of three required interim measures of reading performance: oral reading fluency (ORF), retell, and a benchmark comprised of released state test items. Focus group participants reported they did not believe the benchmark or…

  2. Relationship between the TCAP and the Pearson Benchmark Assessment in Elementary Students' Reading and Math Performance in a Northeastern Tennessee School District

    ERIC Educational Resources Information Center

    Dugger-Roberts, Cherith A.

    2014-01-01

    The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…

  3. Benchmarks of programming languages for special purposes in the space station

    NASA Technical Reports Server (NTRS)

    Knoebel, Arthur

    1986-01-01

    Although Ada is likely to be chosen as the principal programming language for the Space Station, certain needs, such as expert systems and robotics, may be better developed in special languages. The languages, LISP and Prolog, are studied and some benchmarks derived. The mathematical foundations for these languages are reviewed. Likely areas of the space station are sought out where automation and robotics might be applicable. Benchmarks are designed which are functional, mathematical, relational, and expert in nature. The coding will depend on the particular versions of the languages which become available for testing.

  4. Halal Supply Chain Management Streamlined Practices: Issues and Challenges

    NASA Astrophysics Data System (ADS)

    Hijrah Abd Kadir, Muhammad; Zuraidah Raja Mohd Rasi, Raja; Omar, Siti Sarah; Manap, Zariq Imran Abdul

    2016-11-01

    The rapidly developing worldwide halal business sector has opened a remarkable window of opportunity for Malaysia to become a renowned global halal hub. Malaysia has proactively taken a lead in halal activities and is presently considered the benchmark for halal frameworks worldwide. Malaysia also set up the Halal Industry Development Corporation (HDC), which leads a wide range of halal activities, as the demand for halal food has increased significantly; ensuring its authenticity and integrity is crucial for Muslims. In parallel with these developments, many studies have been conducted because many issues still occur in the food industry. The issues of consumer awareness and understanding of halal principles, mixing of halal and non-halal products, halal certification and logo compliance with Shariah law, and lack of regulation and enforcement need serious attention from all parties along the supply chain. The main challenges arise in halal food segregation and halal traceability of products. The units of analysis in this study are different halal stakeholder groups: JAKIM, the Halal Industry Development Corporation (HDC), raw material manufacturers, retailers, and government agencies. This paper discusses the issues and challenges in the halal supply chain faced by practitioners and the other relevant parties involved in the industry, especially food product manufacturers, and aims to provide basic information about these issues and challenges as a contribution to Halal Supply Chain Management (HSCM) and to future studies.

  5. Bird-Window Collisions at a West-Coast Urban Park Museum: Analyses of Bird Biology and Window Attributes from Golden Gate Park, San Francisco.

    PubMed

    Kahle, Logan Q; Flannery, Maureen E; Dumbacher, John P

    2016-01-01

    Bird-window collisions are a major and poorly-understood generator of bird mortality. In North America, studies of this topic tend to be focused east of the Mississippi River, resulting in a paucity of data from the Western flyways. Additionally, few available data can critically evaluate factors such as time of day, sex and age bias, and effect of window pane size on collisions. We collected and analyzed 5 years of window strike data from a 3-story building in a large urban park in San Francisco, California. To evaluate our window collision data in context, we collected weekly data on local bird abundance in the adjacent parkland. Our study asks two overarching questions: first-what aspects of a bird's biology might make them more likely to fatally strike windows; and second, what characteristics of a building's design contribute to bird-window collisions. We used a dataset of 308 fatal bird strikes to examine the relationships of strikes relative to age, sex, time of day, time of year, and a variety of other factors, including mitigation efforts. We found that actively migrating birds may not be major contributors to collisions as has been found elsewhere. We found that males and young birds were both significantly overrepresented relative to their abundance in the habitat surrounding the building. We also analyzed the effect of external window shades as mitigation, finding that an overall reduction in large panes, whether covered or in some way broken up with mullions, effectively reduced window collisions. We conclude that effective mitigation or design will be required in all seasons, but that breeding seasons and migratory seasons are most critical, especially for low-rise buildings and other sites away from urban migrant traps. Finally, strikes occur throughout the day, but mitigation may be most effective in the morning and midday.

  6. Bird-Window Collisions at a West-Coast Urban Park Museum: Analyses of Bird Biology and Window Attributes from Golden Gate Park, San Francisco

    PubMed Central

    Kahle, Logan Q.; Flannery, Maureen E.; Dumbacher, John P.

    2016-01-01

    Bird-window collisions are a major and poorly-understood generator of bird mortality. In North America, studies of this topic tend to be focused east of the Mississippi River, resulting in a paucity of data from the Western flyways. Additionally, few available data can critically evaluate factors such as time of day, sex and age bias, and effect of window pane size on collisions. We collected and analyzed 5 years of window strike data from a 3-story building in a large urban park in San Francisco, California. To evaluate our window collision data in context, we collected weekly data on local bird abundance in the adjacent parkland. Our study asks two overarching questions: first–what aspects of a bird’s biology might make them more likely to fatally strike windows; and second, what characteristics of a building’s design contribute to bird-window collisions. We used a dataset of 308 fatal bird strikes to examine the relationships of strikes relative to age, sex, time of day, time of year, and a variety of other factors, including mitigation efforts. We found that actively migrating birds may not be major contributors to collisions as has been found elsewhere. We found that males and young birds were both significantly overrepresented relative to their abundance in the habitat surrounding the building. We also analyzed the effect of external window shades as mitigation, finding that an overall reduction in large panes, whether covered or in some way broken up with mullions, effectively reduced window collisions. We conclude that effective mitigation or design will be required in all seasons, but that breeding seasons and migratory seasons are most critical, especially for low-rise buildings and other sites away from urban migrant traps. Finally, strikes occur throughout the day, but mitigation may be most effective in the morning and midday. PMID:26731417

  7. Application of Thinned-Skull Cranial Window to Mouse Cerebral Blood Flow Imaging Using Optical Microangiography

    PubMed Central

    Wang, Ruikang K.

    2014-01-01

    In vivo imaging of mouse brain vasculature typically requires applying skull window opening techniques: open-skull cranial window or thinned-skull cranial window. We report non-invasive 3D in vivo cerebral blood flow imaging of C57/BL mouse by the use of ultra-high sensitive optical microangiography (UHS-OMAG) and Doppler optical microangiography (DOMAG) techniques to evaluate two cranial window types based on their procedures and ability to visualize surface pial vessel dynamics. Application of the thinned-skull technique is found to be effective in achieving high quality images for pial vessels for short-term imaging, and has advantages over the open-skull technique in available imaging area, surgical efficiency, and cerebral environment preservation. In summary, thinned-skull cranial window serves as a promising tool in studying hemodynamics in pial microvasculature using OMAG or other OCT blood flow imaging modalities. PMID:25426632

  8. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    ..."In Search of a Benchmarking Theory for the Public Sector."...Figure 8. World Bank Benchmarking Work on Quality of Governance. One of the most...the Ministries of Defense in the countries in which it works). Another interesting innovation is that for comparison purposes, McKinsey categorized...

  9. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  10. Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions

    PubMed Central

    Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh

    2013-01-01

    Background Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied and the resultant financial impact to beneficiaries unknown. Objective We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan specific data. For each low-income subsidy-recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of beneficiary's medications on plan formulary) were calculated for each benchmark plan. Results Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions Beneficiaries had a 66% chance of being assigned to a sub-optimal plan; thereby, they faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963

  11. Transparency of the 2 μm window of Titan's atmosphere

    NASA Astrophysics Data System (ADS)

    Rannou, P.; Seignovert, B.; Le Mouélic, S.; Maltagliati, L.; Rey, M.; Sotin, C.

    2018-02-01

    Titan's atmosphere is optically thick and hides the surface and the lower layers from view at almost all wavelengths. However, because gaseous absorptions are spectrally selective, some narrow spectral intervals are relatively transparent and allow the surface to be probed. To use these intervals (called windows), a good knowledge of atmospheric absorption is necessary. Once gas spectroscopic line lists are well established, the absorption inside the windows depends on the way the far wings of the methane absorption lines are cut off. The intensity in all the windows can be explained with the same cut-off parameters, except for the window at 2 μm. This discrepancy is generally treated with a workaround that consists in using a different cut-off description for this specific window. Because this window is relatively transparent, the surface may have specific spectral signatures that could be detected there; a good knowledge of atmospheric opacities is therefore essential, and our aim is to better understand what causes the difference between the 2 μm window and the other windows. In this work, we used scattered light at the limb and transmissions in occultation observed around the 2 μm window with VIMS (Visual Infrared Mapping Spectrometer) onboard Cassini. The data show an absorption feature that contributes to the shape of this window. Our atmospheric model fits the VIMS data at 2 μm well with the same cut-off as for the other windows, provided an additional absorption is introduced in the middle of the window, around ≃ 2.065 μm. This resolves the discrepancy with the cut-off previously used at 2 μm, and we show that a gas with a fairly constant mixing ratio, possibly ethane, may be the cause of this absorption. Finally, we studied the impact of this absorption on the retrieval of the surface reflectivity and found it to be significant.

  12. Low-E Storm Windows Gain Acceptance as a Home Weatherization Measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbride, Theresa L.; Cort, Katherine A.

    This article for Home Energy Magazine describes work by the U.S. Department of Energy to develop low-emissivity storm windows as an energy-efficiency retrofit option for existing homes. The article describes the invisible low-emissivity silver metal coatings on the glass, which reflect heat back into the home in winter or back outside in summer, and the benefits of low-e storm windows, including insulation, air sealing, noise blocking, and protection of antique windows. The article also describes Pacific Northwest National Laboratory's efforts on behalf of DOE to overcome market barriers to adoption of the technology, including performance validation studies in the PNNL Lab Homes, cost-effectiveness analysis, and production of reports, brochures, how-to guides on low-e storm window installation for the Building America Solution Center, and a video posted on YouTube. PNNL's efforts were reviewed by the Pacific Northwest Regional Technical Forum (RTF), which serves as the advisory board to the Pacific Northwest Electric Power Planning Council and Bonneville Power Administration. In late July 2015, the RTF approved the low-e storm window measure's savings and specifications, a critical step in integrating low-e storm windows into energy-efficiency planning and utility weatherization and incentive programs. PNNL estimates that more than 90 million homes in the United States with single-pane or low-performing double-pane windows would benefit from the technology. Low-e storm windows are suitable not only for private residences but also for small commercial buildings, historic properties, and facilities that house residents, such as nursing homes, dormitories, and in-patient facilities. To further assist in the market transformation of low-e storm windows and other high-efficiency window attachments, DOE helped found the Attachments Energy Rating Council (AERC) in 2015. AERC is an independent, public-interest, non-profit organization whose mission is to rate, label, and certify the performance of window attachments.

  13. Benchmarking study of the MCNP code against cold critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    1991-01-01

    The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.

  14. Optical Gaps in Pristine and Heavily Doped Silicon Nanocrystals: DFT versus Quantum Monte Carlo Benchmarks.

    PubMed

    Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I

    2017-12-12

    We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals sampled from the best currently available inventory, such as hybrids and range-separated hybrids, are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps correlates with the deviation from Koopmans' theorem, which can serve as a quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.

  15. QUANTIFICATION OF NUCLEOLAR CHANNEL SYSTEMS: UNIFORM PRESENCE THROUGHOUT THE UPPER ENDOMETRIAL CAVITY

    PubMed Central

    Szmyga, Michael J.; Rybak, Eli A.; Nejat, Edward J.; Banks, Erika H.; Whitney, Kathleen D.; Polotsky, Alex J.; Heller, Debra S.; Meier, U. Thomas

    2014-01-01

    Objective: To determine the prevalence of nucleolar channel systems (NCSs) by uterine region, applying continuous quantification. Design: Prospective clinical study. Setting: Tertiary care academic medical center. Patients: 42 naturally cycling women who underwent hysterectomy for benign indications. Intervention: NCS presence was quantified by a novel method in six uterine regions (fundus, left cornu, right cornu, anterior body, posterior body, and lower uterine segment [LUS]) using indirect immunofluorescence. Main Outcome Measures: Percent of endometrial epithelial cells (EECs) with NCSs per uterine region. Results: NCS quantification was observer-independent (intraclass correlation coefficient [ICC] = 0.96) and its intra-sample variability was low (coefficient of variability [CV] = 0.06). 11/42 hysterectomy specimens were midluteal, 10 of which were analyzable, with 9 containing over 5% EECs with NCSs in at least one region. The percent of EECs with NCSs varied significantly between the lower uterine segment (6.1%; IQR = 3.0-9.9) and the upper five regions (16.9%; IQR = 12.7-23.4), with fewer NCSs in the basal layer of the endometrium (17% ± 6%) versus the middle (46% ± 9%) and luminal (38% ± 9%) layers across all six regions. Conclusions: NCS quantification during the midluteal phase demonstrates uniform presence throughout the endometrial cavity, excluding the LUS, with a preference for the functional, luminal layers. Our quantitative NCS evaluation provides a benchmark for future studies and further supports NCS presence as a potential marker for the window of implantation. PMID:23137760

  16. Early continuous white noise exposure alters l-alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunit glutamate receptor 2 and gamma-aminobutyric acid type a receptor subunit beta3 protein expression in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Zhang, Jiping; Cai, Rui; Sun, Xinde

    2010-02-15

    Auditory experience during the postnatal critical period is essential for the normal maturation of auditory function. Previous studies have shown that rearing infant rat pups under conditions of continuous moderate-level noise delayed the emergence of adult-like topographic representational order and the refinement of response selectivity in the primary auditory cortex (A1) beyond normal developmental benchmarks and indefinitely blocked the closure of a brief, critical-period window. To gain insight into the molecular mechanisms of these physiological changes after noise rearing, we studied expression of the AMPA receptor subunit GluR2 and GABA(A) receptor subunit beta3 in the auditory cortex after noise rearing. Our results show that continuous moderate-level noise rearing during the early stages of development decreases the expression levels of GluR2 and GABA(A)beta3. Furthermore, noise rearing also induced a significant decrease in the level of GABA(A) receptors relative to AMPA receptors. However, in adult rats, noise rearing did not have significant effects on GluR2 and GABA(A)beta3 expression or on the ratio between the two subunits. These changes could have a role in the cellular mechanisms involved in the delayed maturation of auditory receptive field structure and topographic organization of A1 after noise rearing. Copyright 2009 Wiley-Liss, Inc.

  17. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  18. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper presents the performance improvements achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and the annual savings they achieved, can be shown, induced in particular by benchmarking at the process level. Investigation of this question yields some general findings on how to include performance improvement in a benchmarking project and how to communicate its results. We therefore elaborate on the concept of benchmarking at both the utility and the process level, a distinction that remains necessary for integrating performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  19. Study of the effects of condensation on the performance of Pioneer Venus probe windows

    NASA Technical Reports Server (NTRS)

    Testerman, M. K.

    1974-01-01

    The transmission loss of Pioneer Venus probe radiation windows was investigated for the case in which their exposed surfaces become contaminated with droplets of water, hydrochloric acid, sulfuric acid, and mercury, all of which may be found in the Venusian atmosphere. Transmission loss was studied as a function of the mass concentration of liquid droplets deposited on one surface of the test window materials, for transmitted radiation in the wavelength range of 0.3 to 30 microns. The parameters that affect the transmittance of radiation through a window are: (1) particle size, (2) surface concentration of particles, (3) wavelength of the radiation, (4) angle of acceptance of the radiation by the detector, and (5) the refractive index of the aerosol.

  20. Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees

    ERIC Educational Resources Information Center

    Chase, Megan M.

    2011-01-01

    Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed. (Contains…

  1. Global Benchmarking of Marketing Doctoral Program Faculty and Institutions by Subarea

    ERIC Educational Resources Information Center

    Elbeck, Matt; Vander Schee, Brian A.

    2014-01-01

    This study benchmarks marketing doctoral programs worldwide in five popular subareas by faculty and institutional scholarly impact. A multi-item approach identifies a collection of top-tier scholarly journals for each subarea, while citation data over the decade 2003 to 2012 identify high scholarly impact marketing faculty by subarea used to…

  2. Policy Analysis of the English Graduation Benchmark in Taiwan

    ERIC Educational Resources Information Center

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…

  3. Clinically Significant Change to Establish Benchmarks in Residential Drug and Alcohol Treatment Services

    ERIC Educational Resources Information Center

    Billingham, Daniel D.; Kelly, Peter J.; Deane, Frank P.; Crowe, Trevor P.; Buckingham, Mark S.; Craig, Fiona L.

    2012-01-01

    There is increasing emphasis on the use of routine outcome assessment measures to inform quality assurance initiatives. The calculation of reliable and clinically significant change indices is one strategy that organizations could use to develop both internally and externally focused benchmarking processes. The current study aimed to develop reliable…

  4. Benefits of e-Learning Benchmarks: Australian Case Studies

    ERIC Educational Resources Information Center

    Choy, Sarojni

    2007-01-01

    In 2004 the Australian Flexible Learning Framework developed a suite of quantitative and qualitative indicators on the uptake, use and impact of e-learning in the Vocational Education and Training (VET) sector. These indicators were used to design items for a survey to gather quantitative data for benchmarking. A series of four surveys gathered…

  5. Benchmarking Reference Desk Service in Academic Health Science Libraries: A Preliminary Survey.

    ERIC Educational Resources Information Center

    Robbins, Kathryn; Daniels, Kathleen

    2001-01-01

    This preliminary study was designed to benchmark patron perceptions of reference desk services at academic health science libraries, using a standard questionnaire. Responses were compared to determine the library that provided the highest-quality service overall and along five service dimensions. All libraries were rated very favorably, but none…

  6. Academic Achievement and Extracurricular School Activities of At-Risk High School Students

    ERIC Educational Resources Information Center

    Marchetti, Ryan; Wilson, Randal H.; Dunham, Mardis

    2016-01-01

    This study compared the employment, extracurricular participation, and family structure status of students from low socioeconomic families that achieved state-approved benchmarks on ACT reading and mathematics tests to those that did not achieve the benchmarks. Free and reduced lunch eligibility was used to determine SES. Participants included 211…

  7. Taking the Lead in Science Education: Forging Next-Generation Science Standards. International Science Benchmarking Report. Appendix

    ERIC Educational Resources Information Center

    Achieve, Inc., 2010

    2010-01-01

    This appendix accompanies the report "Taking the Lead in Science Education: Forging Next-Generation Science Standards. International Science Benchmarking Report," a study conducted by Achieve to compare the science standards of 10 countries. This appendix includes the following: (1) PISA and TIMSS Assessment Rankings; (2) Courses and…

  8. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    ERIC Educational Resources Information Center

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  9. Taking Aims: New CASE Study Benchmarks Advancement Investments and Returns

    ERIC Educational Resources Information Center

    Goldsmith, Rae

    2012-01-01

    Advancement professionals have always been thirsty for information that will help them understand how their programs compare with those of their peers. But in recent years the demand for benchmarking data has exploded as budgets have become leaner, leaders have become more business minded, and terms like "performance metrics and return on…

  10. Can Human Capital Metrics Effectively Benchmark Higher Education with For-Profit Companies?

    ERIC Educational Resources Information Center

    Hagedorn, Kathy; Forlaw, Blair

    2007-01-01

    Last fall, Saint Louis University participated in St. Louis, Missouri's, first Human Capital Performance Study alongside several of the region's largest for-profit employers. The university also participated this year in the benchmarking of employee engagement factors conducted by the St. Louis Business Journal in its effort to quantify and select…

  11. Avoiding Pitfalls in the Use of the Benchmark Dose Approach to Chemical Risk Assessments; Some Illustrative Case Studies (Presentation)

    EPA Science Inventory

    The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...

  12. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
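
    As a sketch of the two statistics advocated above, computed from the empirical CDF of absolute errors (the function name and synthetic data are illustrative, not from the paper):

        import numpy as np

        def ecdf_stats(errors, threshold, confidence=0.95):
            """Fraction of absolute errors below `threshold`, and the error
            amplitude not exceeded at the chosen `confidence` level."""
            abs_err = np.abs(np.asarray(errors, dtype=float))
            return float(np.mean(abs_err < threshold)), float(np.quantile(abs_err, confidence))

        # Synthetic, skewed errors (neither normal nor zero-centered):
        rng = np.random.default_rng(0)
        errors = rng.gamma(shape=2.0, scale=0.5, size=500) - 0.3
        p, q = ecdf_stats(errors, threshold=1.0)
        print(f"P(|error| < 1.0) = {p:.2f}; 95%-confidence error bound = {q:.2f}")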

  13. Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.

    PubMed

    Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George

    2018-05-01

    This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports were extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit and minimum benchmark doses (BMDs) and benchmark dose lower limits (BMDLs) were modeled for all NTP-pathologist-identified significant nonneoplastic lesions, final mean body weight, and mean organ weight for 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two-year) noncancer health effect levels from the results of the short-term (three-month) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
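
    Orthogonal (errors-in-both-variables) regression of chronic on short-term effect levels can be sketched with scipy.odr; the data points and the log-scale linear model below are illustrative assumptions, not the study's values:

        import numpy as np
        from scipy import odr

        # Hypothetical log10 BMDLs (mg/kg-day) from 3-month and 2-year studies:
        x = np.log10([12.0, 45.0, 3.1, 160.0, 22.0])   # short-term
        y = np.log10([4.0, 18.0, 1.2, 60.0, 9.0])      # chronic

        linear = odr.Model(lambda beta, t: beta[0] + beta[1] * t)
        fit = odr.ODR(odr.RealData(x, y), linear, beta0=[0.0, 1.0]).run()
        intercept, slope = fit.beta

        # Predict a chronic BMDL from a new short-term result of 30 mg/kg-day:
        print(10 ** (intercept + slope * np.log10(30.0)))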

  14. Night-time naturally ventilated offices: Statistical simulations of window-use patterns from field monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Geun Young; Steemers, Koen

    2010-07-15

    This paper investigates occupant window-use behaviour in night-time naturally ventilated offices on the basis of a pilot field study, conducted during the summers of 2006 and 2007 in Cambridge, UK, and then demonstrates the effects of employing night-time ventilation on indoor thermal conditions using predictive models of occupant window-use. A longitudinal field study shows that occupants make good use of night-time natural ventilation strategies when provided with openings that allow secure ventilation, and that there is a noticeable time-of-day effect in window-use patterns (i.e. increased probability of action on arrival and departure). We develop logistic models of window-use for night-time naturally ventilated offices, which are subsequently applied in a behaviour algorithm based on Markov chains and Monte Carlo methods. Simulations using the behaviour algorithm demonstrate good agreement with the observational data of window-use, and reveal how building design and occupant behaviour collectively affect the thermal performance of offices. They illustrate that the provision of secure ventilation leads to more frequent use of the window and thus contributes significantly to the achievement of a comfortable indoor environment during the daytime occupied period. For example, the maximum temperature for a night-time ventilated office is found to be 3 °C below the predicted value for a daytime-only ventilated office. (author)
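
    A minimal sketch of this kind of behaviour algorithm, with invented logistic coefficients rather than the paper's fitted models: the window state evolves as a Markov chain whose transition probabilities come from logistic models, sampled by Monte Carlo.

        import math
        import random

        def p_open(t_in, arrival):
            """Assumed logistic model: probability a closed window is opened."""
            z = -8.0 + 0.3 * t_in + 1.5 * arrival      # coefficients illustrative
            return 1.0 / (1.0 + math.exp(-z))

        def p_close(t_in, departure):
            """Assumed logistic model: probability an open window is closed."""
            z = 4.0 - 0.25 * t_in + 1.5 * departure
            return 1.0 / (1.0 + math.exp(-z))

        random.seed(1)
        window_open = False
        for hour, t_in in enumerate([26.0, 25.5, 24.5, 23.5, 22.5]):
            arrival, departure = hour == 0, hour == 4   # time-of-day effects
            p = p_close(t_in, departure) if window_open else p_open(t_in, arrival)
            if random.random() < p:
                window_open = not window_open
            print(hour, t_in, "open" if window_open else "closed")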

  15. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used across the National Health Service (NHS) through various benchmarking programs, but clinical photography services have no program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was therefore undertaken with 13 services in NHS Trusts. It yielded valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  16. ZPR-6 Assembly 7 high-240Pu core: a cylindrical assembly with mixed (Pu,U)-oxide fuel and a central high-240Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  17. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free-fall from the sea surface with transponders attached. A transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and float freely to the sea surface for recovery. The duration of sensor attachment to the benchmark will range from a few days to a few years, depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon and used the ROV Jason to successfully demonstrate the removal and replacement of packages on the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of EarthScope. More long-lived seafloor geodetic measurements are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using an ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  18. Local electronic and optical behavior of ELO a-plane GaN

    NASA Astrophysics Data System (ADS)

    Baski, A. A.; Moore, J. C.; Ozgur, U.; Kasliwal, V.; Ni, X.; Morkoc, H.

    2007-03-01

    Conductive atomic force microscopy (CAFM) and near-field optical microscopy (NSOM) were used to study a-plane GaN films grown via epitaxial lateral overgrowth (ELO). The ELO films were prepared by metal organic chemical vapor deposition on a patterned SiO2 layer with 4-μm wide windows, which was deposited on a GaN template grown on r-plane sapphire. The window regions of the coalesced ELO films appear as depressions with a high density of surface pits. At reverse bias below 12 V, very low uniform conduction (2 pA) is seen in the window regions. Above 20 V, a lower-quality sample shows localized sites inside the window regions with significant leakage, indicating a correlation between the presence of surface pits and leakage sites. Room temperature NSOM studies also suggest a greater density of surface terminated dislocations in the window regions, while wing regions explicitly show enhanced optical quality of the overgrown GaN. The combination of CAFM and NSOM data therefore indicates a correlation between the presence of surface pits, localized reverse-bias current leakage, and low PL intensity in the window regions.

  19. The adenosine triphosphate test is a rapid and reliable audit tool to assess manual cleaning adequacy of flexible endoscope channels.

    PubMed

    Alfa, Michelle J; Fatima, Iram; Olson, Nancy

    2013-03-01

    The study objective was to verify that the adenosine triphosphate (ATP) benchmark of <200 relative light units (RLUs) was achievable in a busy endoscopy clinic that followed the manufacturer's manual cleaning instructions. All channels from patient-used colonoscopes (20) and duodenoscopes (20) in a tertiary care hospital endoscopy clinic were sampled after manual cleaning and tested for residual ATP. The ATP test benchmark for adequate manual cleaning was set at <200 RLUs. The benchmark for protein was <6.4 μg/cm², and, for bioburden, <4-log10 colony-forming units/cm². Our data demonstrated that 96% (115/120) of channels from the 20 colonoscopes and 20 duodenoscopes evaluated met the ATP benchmark of <200 RLUs. The 5 channels that exceeded 200 RLUs were all elevator guide-wire channels. All 120 of the manually cleaned channels tested had protein and bioburden levels compliant with the accepted manual cleaning benchmarks for suction-biopsy, air-water, and auxiliary water channels. Our data confirmed that, by following the endoscope manufacturer's manual cleaning recommendations, 96% of channels in gastrointestinal endoscopes would have <200 RLUs with the ATP test kit evaluated and would meet the accepted cleanliness benchmarks for protein and bioburden. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  20. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted from a limited set of benchmark radiative and convective solutions. Heating, pressure, and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition, are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. To establish a best prediction against which results obtained from a very limited benchmark set could be judged, predictions based on a series of benchmark cases along a trajectory were used. Solutions relying only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on two or fewer benchmark cases at or near the trajectory peak-heating condition yielded results within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design, and also offers a significant capability to perform rapid trade studies, such as the effect of different trajectories, atmospheres, or trim angle of attack on convective and radiative heating rates and loads, pressure, and shear-stress levels.
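
    A sketch of the scaling idea, assuming a Sutton-Graves-type stagnation-point correlation supplies the non-dimensionalizing parameter; the constant, nose radius, benchmark values, and trajectory arrays are all illustrative, not the paper's data:

        import numpy as np

        K_SG = 1.7415e-4   # Sutton-Graves constant for air, SI units (assumed)
        R_N = 1.0          # nose radius, m (illustrative)

        def q_stag(rho, v):
            """Stagnation-point convective heating, W/m^2."""
            return K_SG * np.sqrt(rho / R_N) * v**3

        # Benchmark CFD solution near peak heating: heating at one body point,
        # non-dimensionalized by the stagnation value at the same condition.
        q_point_benchmark = 3.5e5                  # W/m^2 at the body point
        rho_b, v_b = 3.0e-4, 7000.0                # benchmark density, velocity
        level = q_point_benchmark / q_stag(rho_b, v_b)

        # Apply the frozen non-dimensional level along the whole entry profile:
        rho_traj = np.array([1.0e-5, 1.0e-4, 3.0e-4, 6.0e-4])   # kg/m^3
        v_traj = np.array([7500.0, 7300.0, 7000.0, 6000.0])     # m/s
        print(level * q_stag(rho_traj, v_traj))                 # transient W/m^2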

  1. Cross-modal integration of polyphonic characters in Chinese audio-visual sentences: a MVPA study based on functional connectivity.

    PubMed

    Zhang, Zhengyi; Zhang, Gaoyan; Zhang, Yuanyuan; Liu, Hong; Xu, Junhai; Liu, Baolin

    2017-12-01

    This study aimed to investigate the functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the audible pronunciations of the polyphonic characters in the corresponding sentence contexts varied across four conditions. To measure the functional connectivity, correlation, coherence, and the phase synchronization index (PSI) were used, and multivariate pattern analysis was then performed to detect consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, P200, N400, and the late positive shift (LPS), to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating the polyphonic characters with abnormal pronunciations from those with appropriate ones in audio-visual sentences, significant classification results were obtained based on the coherence in the time window of the P200 component, the correlation in the time window of the N400 component, and the coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows also differed, with the recruitment of frontal sites in the time window of the P200 component, the frontal-central-parietal regions in the time window of the N400 component, and the central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms differ across the stages of audio-visual integration of polyphonic characters.
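
    Of the three connectivity measures named above, the phase synchronization index is the least standard; a minimal sketch via the analytic signal (test signals invented):

        import numpy as np
        from scipy.signal import hilbert

        def psi(x, y):
            """Phase synchronization index: length of the mean phase-difference
            vector (0 = no phase locking, 1 = perfect locking)."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return np.abs(np.mean(np.exp(1j * dphi)))

        t = np.linspace(0.0, 1.0, 500)
        a = np.sin(2 * np.pi * 10 * t)
        b = np.sin(2 * np.pi * 10 * t + 0.8)   # same frequency, constant lag
        print(psi(a, b))                       # close to 1 for locked signals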

  2. Pediatric falls from windows and balconies: incidents and risk factors as reported by newspapers in the United Arab Emirates.

    PubMed

    Grivna, Michal; Al-Marzouqi, Hanan M; Al-Ali, Maryam R; Al-Saadi, Nada N; Abu-Zidan, Fikri M

    2017-01-01

    Falls of children from heights (balconies and windows) usually result in severe injuries and death. Details on child falls from heights in the United Arab Emirates (UAE) are not easily accessible. Our aim was to assess the incidents and the personal and environmental risk factors for pediatric falls from windows and balconies using newspaper clippings. We used a retrospective study design to electronically search all major UAE national Arabic and English newspapers for reports of unintentional child falls from windows and balconies during 2005-2016. A structured data collection form was developed to collect information. Data were entered into an Excel sheet and descriptive analysis was performed. Newspaper clippings documented 96 fall incidents. After cleaning the data and excluding duplicate cases and intentional injuries, 81 cases were included in the final analysis. Fifty-three percent (n = 42) were boys. The mean (range) age was 4.9 years (1-15). Thirty-eight (47%) children fell from windows and 36 (44%) from balconies. Twenty-two (27%) children climbed on furniture placed on a balcony or close to a window. Twenty-five (31%) children were not alone in the apartment when they fell. Twenty-nine children fell from fewer than 5 floors (37%), 33 from 5 to 10 floors (42%), and 16 from more than 10 floors (21%). Fifteen children (19%) were hospitalized and survived the fall incident, while 66 died (81%). Newspapers proved to be useful for studying pediatric falls from heights. It is necessary to improve window safety by installing window guards and raising awareness.

  3. Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.

    PubMed

    Belloir, C; Stanford, C; Soares, A

    2015-01-01

    Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical, and mechanical consumptions, all of which can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch), and tertiary treatment processes, and Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 for Site 2. Mechanical energy represented the second-biggest consumption for Site 1 (9%, 0.212 kWh/m3), and chemical input was significant for Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank, which would reduce the biochemical oxygen demand, total suspended solids, and NH4 loads to the oxidation ditch by 55%, 75%, and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation, and plant layout to allow adequate benchmarking.

  4. Benchmarking working conditions for health and safety in the frontline healthcare industry: Perspectives from Australia and Malaysia.

    PubMed

    McLinton, Sarven S; Loh, May Young; Dollard, Maureen F; Tuckey, Michelle M R; Idris, Mohd Awang; Morton, Sharon

    2018-04-06

    To present benchmarks for working conditions in healthcare industries as an initial effort toward international surveillance. The healthcare industry is fundamental to sustaining the health of Australians, yet it is under immense pressure. Budgets are limited, demands are increasing, as are workplace injuries, and all of these factors compromise patient care. Urgent attention is needed to reduce strains on workers and costs in health care; however, little work has been done to benchmark psychosocial factors in healthcare working conditions in the Asia-Pacific. Intercultural comparisons are important to provide an evidence base for public policy. A cross-sectional design was used (as in other studies of prevalence), including a mixed-methods approach with qualitative interviews to better contextualize the results. Data on psychosocial factors and other work variables were collected from healthcare workers in three hospitals in Australia (N = 1,258) and Malaysia (N = 1,125). Benchmarks for 2015 were calculated for each variable, and comparisons were conducted via independent-samples t tests. Healthcare samples were also compared with benchmarks for non-healthcare general working populations from their respective countries: Australia (N = 973) and Malaysia (N = 225). Our study benchmarks healthcare working conditions in Australia and Malaysia against the general working population, identifying trends that indicate the industry needs intervention strategies and job redesign initiatives that better support psychological health and safety. We move toward a better understanding of the precursors of psychosocial safety climate in a broader context, including similarities and differences between Australia and Malaysia in national culture, government occupational health and safety policies, and top-level management practices. © 2018 John Wiley & Sons Ltd.

  5. Expectations of clinical teachers and faculty regarding development of the CanMEDS-Family Medicine competencies: Laval developmental benchmarks scale for family medicine residency training.

    PubMed

    Lacasse, Miriam; Théorêt, Johanne; Tessier, Sylvie; Arsenault, Louise

    2014-01-01

    The CanMEDS-Family Medicine (CanMEDS-FM) framework defines the expected terminal enabling competencies (ECs) for family medicine (FM) residency training in Canada. However, benchmarks throughout the 2-year program are not yet defined. This study aimed to identify expected time frames for achievement of the CanMEDS-FM competencies during FM residency training and to create a developmental benchmarks scale for family medicine residency training. This 2011-2012 study followed a Delphi methodology. Selected faculty and clinical teachers identified, via questionnaire, the expected time of EC achievement from the beginning of residency to one year in practice (0, 6, 12, […] 36 months). The 15th-85th percentile intervals became the expected competency achievement intervals. Content validity of the obtained benchmarks was assessed through a second Delphi round. The 1st and 2nd rounds were completed by 33 and 27 respondents, respectively. A developmental benchmarks scale was designed after the 1st round to illustrate expectations regarding achievement of each EC. The 2nd round (content validation) led to minor adjustments (1.9 ± 2.7 months) of the intervals for 44 of the 92 competencies, the others remaining unchanged. The Laval Developmental Benchmarks Scale for Family Medicine clarifies expectations regarding achievement of competencies throughout FM training. In a competency-based education system, this now allows identification and management of outlying residents, both those excelling and those needing remediation. Further research should focus on assessing the scale's reliability after pilot implementation in family medicine clinical teaching units at Laval University, and on corroborating the established timeline at other sites.
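
    The percentile-interval step is simple to reproduce; a sketch with invented questionnaire responses for a single enabling competency:

        import numpy as np

        # Expected achievement times (months) reported by respondents for one EC:
        responses = np.array([6, 6, 12, 12, 12, 18, 18, 24, 24, 30])
        lo, hi = np.percentile(responses, [15, 85])
        print(f"expected achievement interval: {lo:.0f}-{hi:.0f} months")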

  6. The demographic impact and development benefits of meeting demand for family planning with modern contraceptive methods.

    PubMed

    Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine

    2018-01-01

    Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.

  7. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks, that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  8. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  9. The national hydrologic bench-mark network

    USGS Publications Warehouse

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  10. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.

  11. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  12. A method for visualizing high-density porous polyethylene (medpor, porex) with computed tomographic scanning.

    PubMed

    Vendemia, Nicholas; Chao, Jerry; Ivanidze, Jana; Sanelli, Pina; Spinelli, Henry M

    2011-01-01

    Medpor (Porex Surgical, Inc, Newnan, GA) is composed of porous polyethylene and is commonly used in craniofacial reconstruction. When complications such as seroma or abscess formation arise, diagnostic modalities are limited because Medpor is radiolucent on conventional radiologic studies. This poses a problem in situations where imaging is necessary to distinguish the implant from surrounding tissues. The objective was to present a clinically useful method for imaging Medpor with conventional computed tomographic (CT) scanning. Eleven patients (12 total implants) who had undergone reconstructive surgery with Medpor were included in the study. A retrospective review of CT scans obtained between 1 and 16 months postoperatively was performed using 3 distinct CT window settings. Measurements of implant dimensions and Hounsfield units were recorded and qualitatively assessed. Of the 3 distinct window settings studied, namely, "bone" (W1100/L450), "soft tissue" (W500/L50), and "implant" (W800/L200), the implant window proved the most ideal, allowing the investigators to visualize and evaluate Medpor in all cases. Qualitative analysis revealed that Medpor implants could be distinguished from surrounding tissue in both the implant and soft tissue windows, with a density falling between that of fat and fluid. In 1 case, Medpor could not be visualized in the soft tissue window, although it could be visualized in the implant window. Quantitative analysis demonstrated a mean (SD) density of -38.7 (7.4) Hounsfield units. Medpor may be optimally visualized on conventional CT scans using the implant window settings W800/L200, which can aid in imaging Medpor and diagnosing implant-related complications.
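
    The window settings quoted above (width W, level L) define how Hounsfield units are mapped to display intensities; a minimal sketch of that mapping, using the study's W800/L200 "implant" window (pixel values invented):

        import numpy as np

        def apply_ct_window(hu, width, level):
            """Clip and rescale Hounsfield units to [0, 1] for display."""
            lo, hi = level - width / 2.0, level + width / 2.0
            return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

        # Air, a Medpor-like density, fluid, soft tissue, bone:
        hu = np.array([-1000.0, -38.7, 0.0, 60.0, 450.0])
        print(apply_ct_window(hu, width=800, level=200))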

  13. Measurement of stapes vibration in Human temporal bones by round window stimulation using a 3-coil transducer.

    PubMed

    Shin, Dong Ho; Kim, Dong Wook; Lim, Hyung Gyu; Jung, Eui Sung; Seong, Ki Woong; Lee, Jyung Hyun; Kim, Myoung Nam; Cho, Jin Ho

    2014-01-01

    Round window placement of a 3-coil transducer offers a new approach for coupling an implantable hearing aid to the inner ear. The transducer exhibits high performance at low frequencies. One remarkable feature of the 3-coil transducer is that it minimizes leakage flux; thus the transducer, which consists of two permanent magnets and three coils, can enhance vibrational displacement. In human temporal bones, stapes vibration in response to round window stimulation by the 3-coil transducer was observed with a laser Doppler vibrometer. Coupling between the 3-coil transducer and the round window was achieved by a wire-rod, and stapes velocity was produced when the round window was stimulated. Performance was evaluated by measuring stapes velocity; to verify the performance of the 3-coil transducer, stapes velocities for round window and tympanic membrane stimulation were compared. Stapes velocity under round window stimulation using the 3-coil transducer was approximately 14 dB higher than that achieved by tympanic membrane stimulation. The study shows that the 3-coil transducer is suitable for implantable hearing aids.

  14. Sliding-window analysis tracks fluctuations in amygdala functional connectivity associated with physiological arousal and vigilance during fear conditioning.

    PubMed

    Baczkowski, Blazej M; Johnstone, Tom; Walter, Henrik; Erk, Susanne; Veer, Ilya M

    2017-06-01

    We evaluated whether sliding-window analysis can reveal functionally relevant brain network dynamics during a well-established fear conditioning paradigm. To this end, we tested if fMRI fluctuations in amygdala functional connectivity (FC) can be related to task-induced changes in physiological arousal and vigilance, as reflected in the skin conductance level (SCL). Thirty-two healthy individuals participated in the study. For the sliding-window analysis we used windows that were shifted by one volume at a time. Amygdala FC was calculated for each of these windows. Simultaneously acquired SCL time series were averaged over time frames that corresponded to the sliding-window FC analysis, which were subsequently regressed against the whole-brain seed-based amygdala sliding-window FC using the GLM. Surrogate time series were generated to test whether connectivity dynamics could have occurred by chance. In addition, results were contrasted against static amygdala FC and sliding-window FC of the primary visual cortex, which was chosen as a control seed, while a physio-physiological interaction (PPI) was performed as cross-validation. During periods of increased SCL, the left amygdala became more strongly coupled with the bilateral insula and anterior cingulate cortex, core areas of the salience network. The sliding-window analysis yielded a connectivity pattern that was unlikely to have occurred by chance, was spatially distinct from static amygdala FC and from sliding-window FC of the primary visual cortex, but was highly comparable to that of the PPI analysis. We conclude that sliding-window analysis can reveal functionally relevant fluctuations in connectivity in the context of an externally cued task. Copyright © 2017 Elsevier Inc. All rights reserved.
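
    A minimal sketch of the sliding-window correlation step (window width and signals invented; the coherence and PSI variants would use the same windowing):

        import numpy as np

        def sliding_window_fc(seed, target, width):
            """Window-wise Pearson correlation, shifted one volume at a time."""
            return np.array([
                np.corrcoef(seed[i:i + width], target[i:i + width])[0, 1]
                for i in range(len(seed) - width + 1)
            ])

        rng = np.random.default_rng(0)
        seed = rng.normal(size=300)                        # e.g., amygdala signal
        target = 0.5 * seed + 0.8 * rng.normal(size=300)   # coupled region
        fc_series = sliding_window_fc(seed, target, width=30)
        # fc_series would then be regressed against window-averaged SCL values.
        print(fc_series[:5])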

  15. Benchmarking the Fundamental Electronic Properties of small TiO2 Nanoclusters by GW and Coupled Cluster Theory Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berardo, Enrico; Kaplan, Ferdinand; Bhaskaran-Nair, Kiran

    We study the vertical ionisation potential, electron affinity, fundamental gap and exciton binding energy values of small bare and hydroxylated TiO2 nanoclusters to understand how the excited state properties change as a function of size and hydroxylation. In addition, we have employed a range of many-body methods, including G0W0, qsGW, EA/IP-EOM-CCSD and DFT (B3LYP, PBE), to compare the performance and predictions of the different classes of methods. We demonstrate that for bare (i.e., non-hydroxylated) clusters all many-body methods predict the same trend with cluster size. The highest occupied and lowest unoccupied DFT orbitals follow the same trends as the electron affinity and ionisation potentials predicted by the many-body methods but are generally far too shallow and too deep, respectively, in absolute terms. In contrast, the ΔDFT method is found to yield values in the correct energy window. However, its predictions depend on the functional used and do not necessarily follow trends based on the many-body methods. The effect of hydroxylation of the clusters is to open up both the optical and fundamental gap. In conclusion, a simple microscopic explanation for the observed trends with cluster size and upon hydroxylation is proposed in terms of the Madelung onsite potential.

  16. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of terrain features. Classical filtering algorithms rely on careful tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to reduce sensitivity to parameter settings, in this study we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control a data pyramid scheme in which the processing window sizes are reduced progressively, and it gradually interpolates thin plate spline surfaces toward the ground, using regularization to handle noise. Through the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
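    As a narrow illustration of the step-factor pyramid alone, a sketch of a progressively reduced window schedule; the initial size, minimum size, and halving step are assumptions, and the bending-energy threshold adaptation and spline fitting are omitted:

    def window_schedule(initial=64.0, minimum=1.0, step=0.5):
        """Progressively reduced processing window sizes for a data pyramid."""
        sizes = []
        w = initial
        while w >= minimum:
            sizes.append(w)
            w *= step
        return sizes

    print(window_schedule())  # [64.0, 32.0, 16.0, 8.0, 4.0, 2.0, 1.0]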

  17. Flow velocity vector fields by ultrasound particle imaging velocimetry: in vitro comparison with optical flow velocimetry.

    PubMed

    Westerdale, John; Belohlavek, Marek; McMahon, Eileen M; Jiamsripong, Panupong; Heys, Jeffrey J; Milano, Michele

    2011-02-01

    We performed an in vitro study to assess the precision and accuracy of particle imaging velocimetry (PIV) data acquired using a clinically available portable ultrasound system via comparison with stereo optical PIV. The performance of ultrasound PIV was compared with optical PIV on a benchmark problem involving vortical flow with a substantial out-of-plane velocity component. Optical PIV is capable of stereo image acquisition, thus measuring out-of-plane velocity components. This allowed us to quantify the accuracy of ultrasound PIV, which is limited to in-plane acquisition. The system performance was assessed by considering the instantaneous velocity fields without extracting velocity profiles by spatial averaging. Within the 2-dimensional correlation window, using 7 time-averaged frames, the vector fields were found to have correlations of 0.867 in the direction along the ultrasound beam and 0.738 in the perpendicular direction. Out-of-plane motion of greater than 20% of the in-plane vector magnitude was found to increase the SD by 11% for the vectors parallel to the ultrasound beam direction and 8.6% for the vectors perpendicular to the beam. The results show a close correlation and agreement of individual velocity vectors generated by ultrasound PIV compared with optical PIV. Most of the measurement distortions were caused by out-of-plane velocity components.
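    The reported agreement figures are component-wise correlations between matched vector fields. A hedged sketch of that comparison step, with synthetic fields standing in for the ultrasound and optical measurements:

    import numpy as np

    def component_correlation(u_a, u_b):
        """Pearson correlation between matched velocity components of two fields."""
        return np.corrcoef(u_a.ravel(), u_b.ravel())[0, 1]

    rng = np.random.default_rng(0)
    optical = rng.normal(size=(32, 32))                      # stand-in reference field
    ultrasound = optical + 0.3 * rng.normal(size=(32, 32))   # noisier copy
    print(component_correlation(ultrasound, optical))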

  18. Benchmarking the Fundamental Electronic Properties of small TiO2 Nanoclusters by GW and Coupled Cluster Theory Calculations

    DOE PAGES

    Berardo, Enrico; Kaplan, Ferdinand; Bhaskaran-Nair, Kiran; ...

    2017-06-19

    We study the vertical ionisation potential, electron affinity, fundamental gap and exciton binding energy values of small bare and hydroxylated TiO2 nanoclusters to understand how the excited state properties change as a function of size and hydroxylation. In addition, we have employed a range of many-body methods, including G0W0, qsGW, EA/IP-EOM-CCSD and DFT (B3LYP, PBE), to compare the performance and predictions of the different classes of methods. We demonstrate that for bare (i.e., non-hydroxylated) clusters all many-body methods predict the same trend with cluster size. The highest occupied and lowest unoccupied DFT orbitals follow the same trends as the electron affinity and ionisation potentials predicted by the many-body methods but are generally far too shallow and too deep, respectively, in absolute terms. In contrast, the ΔDFT method is found to yield values in the correct energy window. However, its predictions depend on the functional used and do not necessarily follow trends based on the many-body methods. The effect of hydroxylation of the clusters is to open up both the optical and fundamental gap. In conclusion, a simple microscopic explanation for the observed trends with cluster size and upon hydroxylation is proposed in terms of the Madelung onsite potential.

  19. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computational frameworks for importance sampling, researchers often struggle to translate new sampling schemes into code or to benchmark them against different schemes in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computational frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; license: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature and discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" that can improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
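    The efficiency point in the conclusions rests on the standard effective-sample-size (ESS) diagnostic for importance weights. A minimal sketch with synthetic weights (the framework itself is Java; this only illustrates the quantity, and ESS per unit running time is the efficiency measure the abstract argues for):

    import numpy as np

    def effective_sample_size(weights):
        """Kish ESS: (sum w)^2 / sum(w^2) for importance weights w."""
        w = np.asarray(weights, dtype=float)
        return w.sum() ** 2 / (w ** 2).sum()

    w = np.random.default_rng(1).exponential(size=1000)  # stand-in weights
    print(effective_sample_size(w))  # <= 1000; lower means less efficient sampling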

  20. Absorption of solar radiation by alkali vapors. [for efficient high temperature energy converters

    NASA Technical Reports Server (NTRS)

    Mattick, A. T.

    1978-01-01

    A theoretical study of the direct absorption of solar radiation by the working fluid of high temperature, high efficiency energy converters has been carried out. Alkali vapors and potassium vapor in particular were found to be very effective solar absorbers and suitable thermodynamically for practical high temperature cycles. Energy loss via reradiation from a solar boiler was shown to reduce the overall efficiency of radiation-heated energy converters, although a simple model of radiation transfer in a potassium vapor solar boiler revealed that self-trapping of the reradiation may reduce this loss considerably. A study was also made of the requirements for a radiation boiler window. It was found that for sapphire, one of the best solar transmitting materials, the severe environment in conjunction with high radiation densities will require some form of window protection. An aerodynamic shield is particularly advantageous in this capacity, separating the window from the absorbing vapor to prevent condensation and window corrosion and to reduce the radiation density at the window.

  1. In-car countermeasures open window and music revisited on the real road: popular but hardly effective against driver sleepiness.

    PubMed

    Schwarz, Johanna F A; Ingre, Michael; Fors, Carina; Anund, Anna; Kecklund, Göran; Taillard, Jacques; Philip, Pierre; Åkerstedt, Torbjörn

    2012-10-01

    This study investigated the effects of two very commonly used countermeasures against driver sleepiness, opening the window and listening to music, on subjective and physiological sleepiness measures during real road driving. In total, 24 individuals participated in the study. Sixteen participants received intermittent 10-min intervals of (i) an open window (opened 2 cm) and (ii) listening to music, during both day and night driving on an open motorway. Both subjective sleepiness and physiological sleepiness (blink duration) were estimated to be significantly reduced when subjects listened to music, but the effect was only minor compared with the pronounced effects of night driving and driving duration. The open window had no attenuating effect on either sleepiness measure. No significant long-term effects beyond the actual countermeasure application intervals occurred, as shown by comparison with the control group (n = 8). Thus, despite their popularity, opening the window and listening to music cannot be recommended as sole countermeasures against driver sleepiness. © 2012 European Sleep Research Society.

  2. Rashba-Zeeman-effect-induced spin filtering energy windows in a quantum wire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Xianbo, E-mail: xxb-11@hotmail.com; Nie, Wenjie; Chen, Zhaoxia

    2014-06-14

    We perform a numerical study of spin-resolved transport in a quantum wire (QW) under the modulation of both Rashba spin-orbit coupling (SOC) and a perpendicular magnetic field, using the developed Usuki transfer-matrix method in combination with the Landauer-Büttiker formalism. Wide spin filtering energy windows can be achieved in this system for unpolarized spin injection. In addition, both the width of the energy window and the magnitude of the spin conductance within these energy windows can be tuned by varying the Rashba SOC strength, which can be understood by analyzing the energy dispersions and the spin-polarized density distributions inside the QW, respectively. Furthermore, the study also demonstrates that these Rashba-SOC-controlled spin filtering energy windows show strong robustness against disorder. These findings may not only help in further understanding the spin-dependent transport properties of a QW in the presence of external fields but also provide theoretical guidance for the design of spin filter devices.

  3. Developing Benchmarks for Solar Radio Bursts

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final (phase 2) benchmarks.

  4. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable code across different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field-programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow in favor of a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce hardware development time, because users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. Benchmarking an OpenCL-based framework is an effective way to analyze system performance by studying the execution of benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that could be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to gain a better understanding of the resource usage and performance of the kernel implementations on Arria 10 FPGA devices compared to Stratix V FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.

  5. Groundwater-quality data in the Santa Barbara study unit, 2011: results from the California GAMA Program

    USGS Publications Warehouse

    Davis, Tracy A.; Kulongoski, Justin T.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated by the U.S. Geological Survey (USGS) from January to February 2011, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The Santa Barbara study unit was the thirty-fourth study unit to be sampled as part of the GAMA-PBP. The GAMA Santa Barbara study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as those parts of the aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the Santa Barbara study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the Santa Barbara study unit, located in Santa Barbara and Ventura Counties, groundwater samples were collected from 24 wells. Eighteen of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and six wells were selected to aid in the evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds); constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]); naturally occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic, chromium, and iron species); and radioactive constituents (radon-222 and gross alpha and gross beta radioactivity). Naturally occurring isotopes (stable isotopes of hydrogen and oxygen in water, stable isotopes of inorganic carbon and boron dissolved in water, isotope ratios of dissolved strontium, tritium activities, and carbon-14 abundances) and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 281 constituents and water-quality indicators were measured. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 12 percent of the wells in the Santa Barbara study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 82 percent of the compounds.
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 18 grid wells in the Santa Barbara study unit were detected at concentrations less than drinking-water benchmarks. Of the 220 organic and special-interest constituents sampled for at the 18 grid wells, 13 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and non-regulatory health-based benchmarks. In total, VOCs were detected in 61 percent of the 18 grid wells sampled, pesticides and pesticide degradates were detected in 11 percent, and perchlorate was detected in 67 percent. Polar pesticides and their degradates, pharmaceutical compounds, and NDMA were not detected in any of the grid wells sampled in the Santa Barbara study unit. Eighteen grid wells were sampled for trace elements, major and minor ions, nutrients, and radioactive constituents; most detected concentrations were less than health-based benchmarks. Exceptions are one detection of boron greater than the CDPH notification level (NL-CA) of 1,000 micrograms per liter (μg/L) and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L). Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in three grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in seven grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in four grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in eight grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 17 grid wells, and concentrations in six of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.
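    The screening step described above reduces to comparing each measured concentration against its benchmark. A hedged sketch using the SMCL-CA values quoted in the text; the table layout and flagging logic are illustrative assumptions:

    results = {              # constituent: (measured, benchmark, units)
        "iron":      (350.0, 300.0, "ug/L"),   # SMCL-CA 300 ug/L
        "manganese": ( 40.0,  50.0, "ug/L"),   # SMCL-CA 50 ug/L
        "TDS":       (650.0, 500.0, "mg/L"),   # SMCL-CA recommended 500 mg/L
    }
    for name, (conc, bench, unit) in results.items():
        flag = "exceeds" if conc > bench else "below"
        print(f"{name}: {conc} {unit} {flag} benchmark ({bench} {unit})")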

  6. Determining the frequency of open windows in motor vehicles: a pilot study using a video camera in Houston, Texas during high temperature conditions.

    PubMed

    Long, Tom; Johnson, Ted; Ollison, Will

    2002-05-01

    Researchers have developed a variety of computer-based models to estimate population exposure to air pollution. These models typically estimate exposures by simulating the movement of specific population groups through defined microenvironments. Exposures in the motor vehicle microenvironment are significantly affected by air exchange rate, which in turn is affected by vehicle speed, window position, vent status, and air conditioning use. A pilot study was conducted in Houston, Texas, during September 2000 for a specific set of weather, vehicle speed, and road type conditions to determine whether useful information on the position of windows, sunroofs, and convertible tops could be obtained through the use of video cameras. Monitoring was conducted at three sites (two arterial roads and one interstate highway) on the perimeter of Harris County located in or near areas not subject to mandated Inspection and Maintenance programs. Each site permitted an elevated view of vehicles as they proceeded through a turn, thereby exposing all windows to the stationary video camera. Five videotaping sessions were conducted over a two-day period in which the Heat Index (HI), a function of temperature and humidity, varied from 80 to 101 degrees F and vehicle speed varied from 30 to 74 mph. The resulting videotapes were processed to create a master database listing vehicle-specific data for site location, date, time, vehicle type (e.g., minivan), color, window configuration (e.g., four windows and sunroof), number of windows in each of three position categories (fully open, partially open, and closed), HI, and speed. Of the 758 vehicles included in the database, 140 (18.5 percent) were labeled as "open," indicating a window, sunroof, or convertible top was fully or partially open. The results of a series of stepwise linear regression analyses indicated that the probability of a vehicle in the master database being "open" was weakly affected by time of day, vehicle type, vehicle color, vehicle speed, and HI. In particular, open windows occurred more frequently when vehicle speed was less than 50 mph during periods when HI exceeded 99.9 degrees F and the vehicle was a minivan or passenger van. Overall, the pilot study demonstrated that data on factors affecting vehicle window position could be acquired through a relatively simple experimental protocol using a single video camera. Limitations of the study requiring further research include the inability to determine the status of the vehicle air conditioning system; the lack of a wide range of weather, vehicle speed, and road type conditions; and the need to exclude some vehicles from statistical analyses due to ambiguous window positions.
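    The regression step can be illustrated with a logistic model for the binary "open" outcome; the study used stepwise linear regression, so this is a related but not identical approach, and all data and effect sizes below are synthetic assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    speed = rng.uniform(30, 74, 500)            # mph, range from the study
    hi = rng.uniform(80, 101, 500)              # Heat Index, degrees F
    logit = 3.0 - 0.08 * speed + 0.02 * hi      # assumed effect sizes
    open_flag = rng.random(500) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(np.column_stack([speed, hi]), open_flag)
    print(model.coef_)  # negative speed coefficient: fewer open windows at speed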

  7. Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken

    2005-01-01

    The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.

  8. Torsion-rotation intensities in methanol

    NASA Astrophysics Data System (ADS)

    Pearson, John

    Methanol exists in numerous kinds of astronomical objects featuring a wide range of local conditions. The light nature of the molecule, coupled with the internal rotation of the methyl group with respect to the hydroxyl group, results in a rich, strong spectrum that spans the entire far-infrared region. As a result, any modest-size observational window will contain a number of strong methanol transitions. This has made methanol the gas of choice for testing THz receivers and for extracting local physical conditions from observations covering small frequency windows; the latter use has caused methanol to be dubbed the Swiss army knife of astrophysics. Methanol has been increasingly used in this capacity and will be used even more in subsequent investigations into the Herschel archive and with SOFIA and ALMA. Interpreting physical conditions on the basis of a few methanol lines requires that the molecular data (line positions, intensities, and collision rates) be complete, consistent and accurate to a much higher level than previously required for astrophysics. The need for highly reliable data is even more critical for modeling the two classes of widespread maser action and the many examples of optical pumping through the torsional bands. Observation of the torsional bands in the infrared will be a unique opportunity to directly connect JWST observations with those of Herschel, SOFIA, and ALMA. The theory for the intensities of torsion-rotation transitions in a molecule featuring a single internally rotating methyl group is well developed after 70 years of research. However, other than a recent, very preliminary and not completely satisfactory investigation of a few CH3OH torsional bands, this theory has never been experimentally tested for any C3v internal rotor. More alarming is a set of recent intensity-calibrated microwave measurements that showed deviations relative to calculations of up to 50% in some ground state rotational transitions commonly used by astronomers to extract local conditions. We propose a comprehensive study of the intensities of methanol involving both the pure rotation bands and the torsional bands to serve as a benchmark for the theory used to calculate the infrared activity of all single methyl internal rotation molecules.

  9. Investigating the Impact of Maternal Residential Mobility on Identifying Critical Windows of Susceptibility to Ambient Air Pollution During Pregnancy.

    PubMed

    Warren, Joshua L; Son, Ji-Young; Pereira, Gavin; Leaderer, Brian P; Bell, Michelle L

    2018-05-01

    Identifying periods of increased vulnerability to air pollution during pregnancy with respect to the development of adverse birth outcomes can improve understanding of possible mechanisms of disease development and provide guidelines for protection of the child. Exposure to air pollution during pregnancy is typically based on the mother's residence at delivery, potentially resulting in exposure misclassification and biasing the estimation of critical windows of pregnancy. In this study, we determined the impact of maternal residential mobility during pregnancy on defining weekly exposure to particulate matter less than or equal to 10 μm in aerodynamic diameter (PM10) and estimating windows of susceptibility to term low birth weight. We utilized data sets from 4 Connecticut birth cohorts (1988-2008) that included information on all residential addresses between conception and delivery for each woman. We designed a simulation study to investigate the impact of increasing levels of mobility on identification of critical windows. Increased PM10 exposure during pregnancy weeks 16-18 was associated with an increased probability of term low birth weight. Ignoring residential mobility when defining weekly exposure had only a minor impact on the identification of critical windows for PM10 and term low birth weight in the data application and simulation study. Identification of critical pregnancy windows was robust to exposure misclassification caused by ignoring residential mobility in these Connecticut birth cohorts.

  10. Smart window using a thermally and optically switchable liquid crystal cell

    NASA Astrophysics Data System (ADS)

    Oh, Seung-Won; Kim, Sang-Hyeok; Baek, Jong-Min; Yoon, Tae-Hoon

    2018-02-01

    Light shutter technologies that can control optical transparency have been studied extensively for developing curtain-free smart windows. We introduce thermally and optically switchable light shutters using liquid crystals (LCs) doped with push-pull azobenzene, which is known to speed up thermal relaxation. The LC light shutter can be switched between translucent and transparent states, or between transparent and opaque states, by a phase transition induced by changing the temperature or by photo-isomerization of the doped azobenzene. The LC light shutter can be used for privacy windows with an initial translucent state or for energy-saving windows with an initial transparent state.

  11. Benchmarking Outcomes in the Critically Injured Burn Patient

    PubMed Central

    Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.

    2014-01-01

    Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those age ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant % TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222

  12. Gestational age specific neonatal survival in the State of Qatar (2003-2008) - a comparative study with international benchmarks.

    PubMed

    Rahman, Sajjad; Salameh, Khalil; Al-Rifai, Hilal; Masoud, Ahmed; Lutfi, Samawal; Salama, Husam; Abdoh, Ghassan; Omar, Fahmi; Bener, Abdulbari

    2011-09-01

    To analyze and compare the current gestational age specific neonatal survival rates between Qatar and international benchmarks. An analytical comparative study. Women's Hospital, Hamad Medical Corporation, Doha, Qatar, from 2003-2008. Six years' (2003-2008) gestational age specific neonatal mortality data were stratified for each completed week of gestation at birth, from 24 weeks until term. Data from World Health Statistics by WHO (2010), the Vermont Oxford Network (VON, 2007) and National Statistics United Kingdom (2006) were used as international benchmarks for comparative analysis. A total of 82,002 babies were born during the study period. Qatar's neonatal mortality rate (NMR) dropped from 6/1000 in 2003 to 4.3/1000 in 2008 (p < 0.05). The overall and gestational age specific neonatal mortality rates of Qatar were comparable with international benchmarks. The survival of < 27 weeks and term babies was better in Qatar (p=0.01 and p < 0.001, respectively) compared with VON. The survival of > 32 weeks babies was better in the UK (p=0.01) compared with Qatar. The relative risk (RR) of death decreased with increasing gestational age (p < 0.0001). Preterm birth (45%), followed by lethal chromosomal and congenital anomalies (26.5%), were the two leading causes of neonatal death in Qatar. The current total and gestational age specific neonatal survival rates in the State of Qatar are comparable with international benchmarks. In Qatar, persistently high rates of low birth weight and lethal chromosomal and congenital anomalies contribute significantly to neonatal mortality.

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  15. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  16. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  17. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  18. Using benchmarking techniques and the 2011 maternity practices infant nutrition and care (mPINC) survey to improve performance among peer groups across the United States.

    PubMed

    Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M

    2014-02-01

    A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to show the largest gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that fewer than 50% of hospitals had ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
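    The benchmarking logic itself is simple: within each region-by-size peer group, the best state score on an indicator becomes the benchmark, and other states are measured by their gap from it. A minimal sketch with invented scores (state and peer-group labels are illustrative placeholders):

    import pandas as pd

    df = pd.DataFrame({
        "state":     ["A", "B", "C", "D"],
        "peer":      ["South-large", "South-large", "West-small", "West-small"],
        "indicator": [78.0, 91.0, 66.0, 84.0],
    })
    df["benchmark"] = df.groupby("peer")["indicator"].transform("max")
    df["gap"] = df["benchmark"] - df["indicator"]
    print(df)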

  19. Groundwater quality data in 15 GAMA study units: results from the 2006–10 Initial sampling and the 2009–13 resampling of wells, California GAMA Priority Basin Project

    USGS Publications Warehouse

    Kent, Robert

    2015-08-31

    Most constituents that were detected in groundwater samples from the trend wells were found at concentrations less than drinking-water benchmarks. Two volatile organic compounds (VOCs)—tetrachloroethene and trichloroethene—were detected in samples from one or more wells at concentrations greater than their health-based benchmarks, and three VOCs—chloroform, tetrachloroethene, and trichloroethene—were detected in at least 10 percent of the trend-well samples from the initial sampling period and the later trend sampling period. No pesticides were detected at concentrations near or greater than their health-based benchmarks. Three pesticide constituents—atrazine, deethylatrazine, and simazine—were detected in more than 10 percent of the trend-well samples in both sampling periods. Perchlorate, a constituent of special interest, was detected at a concentration greater than its health-based benchmark in samples from one trend well in the initial sampling and trend sampling periods, and in an additional trend well sample only in the trend sampling period. Most detections of nutrients, major and minor ions, and trace elements in samples from trend wells were less than health-based benchmarks in both sampling periods. Exceptions included nitrate, fluoride, arsenic, boron, molybdenum, strontium, and uranium; these were all detected at concentrations greater than their health-based benchmarks in at least one well sample in both sampling periods. Lead and vanadium were detected above their health-based benchmarks in one sample each collected in the initial sampling period only. The isotopic ratios of oxygen and hydrogen in water and the activities of tritium and carbon-14 generally changed little between sampling periods.

  20. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
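    As a rough illustration of a hierarchical tree-structured allocation (not the BiP model itself), avoidable costs can be pushed down a tree of departments and tests in proportion to activity weights; the structure, weights, and cost figure below are assumptions:

    def allocate(node, cost):
        """Recursively distribute `cost` over a {name: weight-or-subtree} dict."""
        if isinstance(node, (int, float)):   # leaf: relative activity weight
            return cost
        total = sum(_weight(child) for child in node.values())
        return {name: allocate(child, cost * _weight(child) / total)
                for name, child in node.items()}

    def _weight(node):
        return node if isinstance(node, (int, float)) else \
               sum(_weight(c) for c in node.values())

    lab = {"chemistry": {"glucose": 3, "lipids": 1}, "haematology": {"FBC": 4}}
    print(allocate(lab, 80000.0))  # avoidable cost pools per test type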

  1. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
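    The mixture indicator described above is a sum of benchmark quotients. A minimal sketch, with hypothetical TEB values and concentrations (units must match; the real TEBs are compound-specific):

    def summed_benchmark_quotient(concentrations, benchmarks):
        """Sum of C_i / B_i over all detected pesticides in a sample."""
        return sum(c / benchmarks[name] for name, c in concentrations.items())

    teb = {"bifenthrin": 0.8, "chlorpyrifos": 1.5}     # hypothetical TEBs
    sample = {"bifenthrin": 0.4, "chlorpyrifos": 0.9}  # detected concentrations
    print(summed_benchmark_quotient(sample, teb))      # 1.1; >1 flags potential toxicity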

  2. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  3. Benchmarking study of corporate research management and planning practices

    NASA Astrophysics Data System (ADS)

    McIrvine, Edward C.

    1992-05-01

    During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate 'research yield' and 'research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high-quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.

  4. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  5. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  6. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  7. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  8. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  9. 42 CFR 440.390 - Assurance of transportation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...

  10. An Exploration of the Gap between Highest and Lowest Ability Readers across 20 Countries

    ERIC Educational Resources Information Center

    Alivernini, Fabio

    2013-01-01

    The aim of the present study, based on data from 20 countries, is to identify the pattern of variables (at country, school and student levels), which are typical of students performing below the low international benchmark compared to students performing at the advanced performance benchmark, in the Progress in International Reading Literacy Study…

  11. Developing Student Character through Disciplinary Curricula: An Analysis of UK QAA Subject Benchmark Statements

    ERIC Educational Resources Information Center

    Quinlan, Kathleen M.

    2016-01-01

    What aspects of student character are expected to be developed through disciplinary curricula? This paper examines the UK written curriculum through an analysis of the Quality Assurance Agency's subject benchmark statements for the most popular subjects studied in the UK. It explores the language, principles and intended outcomes that suggest…

  12. Improving HEI Productivity and Performance through Project Management: Implications from a Benchmarking Case Study

    ERIC Educational Resources Information Center

    Bryde, David; Leighton, Diana

    2009-01-01

    As higher education institutions (HEIs) look to be more commercial in their outlook they are likely to become more dependent on the successful implementation of projects. This article reports a benchmarking survey of PM maturity in a HEI, with the purpose of assessing its capability to implement projects. Data were collected via questionnaires…

  13. Benchmarking Work Practices and Outcomes in Australian Universities Using an Employee Survey

    ERIC Educational Resources Information Center

    Langford, Peter H.

    2010-01-01

    The purpose of the current study was to benchmark a broad range of work practices and outcomes in Australian universities against other industries. Past research suggests occupational stress experienced by academic staff is worse than experienced by employees in other industries. However, no other practices or outcomes can be compared confidently.…

  14. A Qualitative Study of Prospective Elementary Teachers' Grasp of Agricultural and Science Educational Benchmarks for Agricultural Technology.

    ERIC Educational Resources Information Center

    Trexler, Cary J.; Meischen, Deanna

    2002-01-01

    Interviews with eight preservice elementary teachers regarding benchmarks related to agricultural technology for food and fiber showed that those from rural areas had more complex understanding of the trade-offs in technology use; urban residents were more concerned with ethical dilemmas. Pesticide pollution was most understood, genetic…

  15. Jamaica Higher Education: Utilizing the Benchmarks of Joint Board Teaching Practice at Church Teachers' College

    ERIC Educational Resources Information Center

    Rose, Hyacinth P.

    2010-01-01

    This article reports a descriptive case study portraying a teaching-practice program designed to highlight the preparation of student-teachers for teaching practice, using the Joint Board of Teacher Education (JBTE) benchmarks, in a teachers' college in Jamaica. At Church Teachers' College (CTC) 22 informants of mixed gender were selected for the…

  16. Workskills and National Competitiveness: A Benchmarking Framework. Report No. 1: Benchmarking Australian Qualification Profiles.

    ERIC Educational Resources Information Center

    Cullen, R. B.

    A recent study of work skill competitiveness and overall national competitiveness worldwide revealed that 17 countries are more competitive than Australia. Some countries have a relative resource advantage and will be able to extend access to education and training more effectively than Australia will, and some countries have targeted education…

  17. How Sound Is NSSE? Investigating the Psychometric Properties of NSSE at a Public, Research-Extensive Institution

    ERIC Educational Resources Information Center

    Campbell, Corbin M.; Cabrera, Alberto F.

    2011-01-01

    The National Survey of Student Engagement (NSSE) Benchmarks has emerged as a competing paradigm for assessing institutional effectiveness vis-a-vis the U.S. News & World Report. However, Porter (2009) has critiqued it for failing to meet validity and reliability standards. This study investigated whether the NSSE five benchmarks had construct…

  18. Effectiveness of Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Investigation

    ERIC Educational Resources Information Center

    Weersing, V. Robin; Iyengar, Satish; Kolko, David J.; Birmaher, Boris; Brent, David A.

    2006-01-01

    In this study, we examined the effectiveness of cognitive-behavioral therapy (CBT) for adolescent depression. Outcomes of 80 youth treated with CBT in an outpatient depression specialty clinic, the Services for Teens at Risk Center (STAR), were compared to a "gold standard" CBT research benchmark. On average, youths treated with CBT in STAR…

  19. The Isprs Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  20. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  1. Benchmarking in Academic Pharmacy Departments

    PubMed Central

    Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251

  2. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  3. Counter tube window and X-ray fluorescence analyzer study

    NASA Technical Reports Server (NTRS)

    Hertel, R.; Holm, M.

    1973-01-01

    A study was performed to determine the best counter tube window and X-ray fluorescence analyzer design for quantitative analysis of Venusian dust and condensates. The principal objective of the project was to develop the best counter tube window geometry for the sensing element of the instrument. This included formulation of a mathematical model of the window and optimization of its parameters. The proposed detector and instrument have several important features. The instrument will perform a near real-time analysis of dust in the Venusian atmosphere, and is capable of measuring dust layers less than 1 micron thick. In addition, a wide dynamic measurement range will be provided to compensate for extreme variations in count rates. An integral pulse-height analyzer and memory accumulate data and read out spectra for detailed computer analysis on the ground.

  4. Personal exposure to fine particulate air pollution while commuting: An examination of six transport modes on an urban arterial roadway.

    PubMed

    Chaney, Robert A; Sloan, Chantel D; Cooper, Victoria C; Robinson, Daniel R; Hendrickson, Nathan R; McCord, Tyler A; Johnston, James D

    2017-01-01

    Traffic-related air pollution in urban areas contributes significantly to commuters' daily PM2.5 exposures, but varies widely depending on mode of commuting. To date, studies show conflicting results for PM2.5 exposures based on mode of commuting, and few studies compare multiple modes of transportation simultaneously along a common route, making inter-modal comparisons difficult. In this study, we examined breathing zone PM2.5 exposures for six different modes of commuting (bicycle, walking, driving with windows open and closed, bus, and light-rail train) simultaneously on a single 2.7 km (1.68 mile) arterial urban route in Salt Lake City, Utah (USA) during peak "rush hour" times. Using previously published minute ventilation rates, we estimated the inhaled dose and exposure rate for each mode of commuting. Mean PM2.5 concentrations ranged from 5.20 μg/m3 for driving with windows closed to 15.21 μg/m3 for driving with windows open. The estimated inhaled doses over the 2.7 km route were 6.83 μg for walking, 2.78 μg for cycling, 1.28 μg for light-rail train, 1.24 μg for driving with windows open, 1.23 μg for bus, and 0.32 μg for driving with windows closed. Similarly, the exposure rates were highest for cycling (18.0 μg/hr) and walking (16.8 μg/hr), and lowest for driving with windows closed (3.7 μg/hr). Our findings support previous studies showing that active commuters receive a greater PM2.5 dose and have higher rates of exposure than commuters using automobiles or public transportation. Our findings also support previous studies showing that driving with windows closed is protective against traffic-related PM2.5 exposure.
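    The inhaled-dose figures in this record follow from a simple product: dose = concentration x minute ventilation x travel time. The sketch below (Python) reproduces the windows-closed driving numbers; the ventilation rate and trip time are illustrative assumptions chosen to match the abstract, not values published by the study.

        # Dose arithmetic for one commute; assumed inputs are marked as such.
        def inhaled_dose_ug(conc_ug_m3, vent_m3_hr, duration_hr):
            """Inhaled PM2.5 dose (ug) over a single trip."""
            return conc_ug_m3 * vent_m3_hr * duration_hr

        def exposure_rate_ug_hr(conc_ug_m3, vent_m3_hr):
            """Dose accumulated per hour of commuting (ug/hr)."""
            return conc_ug_m3 * vent_m3_hr

        conc = 5.20        # ug/m3, windows-closed mean from the abstract
        vent = 0.71        # m3/hr (~12 L/min), assumed seated ventilation rate
        trip = 5.2 / 60.0  # hr, assumed driving time over the 2.7 km route

        print(exposure_rate_ug_hr(conc, vent))    # ~3.7 ug/hr, cf. abstract
        print(inhaled_dose_ug(conc, vent, trip))  # ~0.32 ug, cf. abstract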

  5. Personal exposure to fine particulate air pollution while commuting: An examination of six transport modes on an urban arterial roadway

    PubMed Central

    Sloan, Chantel D.; Cooper, Victoria C.; Robinson, Daniel R.; Hendrickson, Nathan R.; McCord, Tyler A.; Johnston, James D.

    2017-01-01

    Traffic-related air pollution in urban areas contributes significantly to commuters’ daily PM2.5 exposures, but varies widely depending on mode of commuting. To date, studies show conflicting results for PM2.5 exposures based on mode of commuting, and few studies compare multiple modes of transportation simultaneously along a common route, making inter-modal comparisons difficult. In this study, we examined breathing zone PM2.5 exposures for six different modes of commuting (bicycle, walking, driving with windows open and closed, bus, and light-rail train) simultaneously on a single 2.7 km (1.68 mile) arterial urban route in Salt Lake City, Utah (USA) during peak “rush hour” times. Using previously published minute ventilation rates, we estimated the inhaled dose and exposure rate for each mode of commuting. Mean PM2.5 concentrations ranged from 5.20 μg/m3 for driving with windows closed to 15.21 μg/m3 for driving with windows open. The estimated inhaled doses over the 2.7 km route were 6.83 μg for walking, 2.78 μg for cycling, 1.28 μg for light-rail train, 1.24 μg for driving with windows open, 1.23 μg for bus, and 0.32 μg for driving with windows closed. Similarly, the exposure rates were highest for cycling (18.0 μg/hr) and walking (16.8 μg/hr), and lowest for driving with windows closed (3.7 μg/hr). Our findings support previous studies showing that active commuters receive a greater PM2.5 dose and have higher rates of exposure than commuters using automobiles or public transportation. Our findings also support previous studies showing that driving with windows closed is protective against traffic-related PM2.5 exposure. PMID:29121096

  6. Medical school benchmarking - from tools to programmes.

    PubMed

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  7. Leveraging Real-World Evidence in Disease-Management Decision-Making with a Total Cost of Care Estimator.

    PubMed

    Nguyen, Thanh-Nghia; Trocio, Jeffrey; Kowal, Stacey; Ferrufino, Cheryl P; Munakata, Julie; South, Dell

    2016-12-01

    Health management is becoming increasingly complex, given a range of care options and the need to balance costs and quality. The ability to measure and understand drivers of costs is critical for healthcare organizations to effectively manage their patient populations. Healthcare decision makers can leverage real-world evidence to explore the value of disease-management interventions in shifting total cost trends. To develop a real-world, evidence-based estimator that examines the impact of disease-management interventions on the total cost of care (TCoC) for a patient population with nonvalvular atrial fibrillation (NVAF). Data were collected from a patient-level real-world evidence data set that uses the IMS PharMetrics Health Plan Claims Database. Pharmacy and medical claims for patients meeting the inclusion or exclusion criteria were combined in longitudinal cohorts with a 180-day preindex and 360-day follow-up period. Descriptive statistics, such as mean and median patient costs and event rates, were derived from a real-world evidence analysis and were used to populate the base-case estimates within the TCoC estimator, an exploratory economic model that was designed to estimate the potential impact of several disease-management activities on the TCoC for a patient population with NVAF. Using Microsoft Excel, the estimator is designed to compare current direct costs of medical care to projected costs by varying assumptions on the impact of disease-management activities and applying the associated changes in cost trends to the affected populations. Disease-management levers are derived from literature-based concepts affecting costs along the NVAF disease continuum. The use of the estimator supports analyses across 4 US geographic regions, age, cost types, and care settings during 1 year. All patients included in the study were continuously enrolled in their health plan (within the IMS PharMetrics Health Plan Claims Database) between July 1, 2010, and June 30, 2012. Patients were included in the final analytic file and were indexed based on (1) the service date of the first claim within the selection window (December 28, 2010-July 11, 2011) with a diagnosis of NVAF, or (2) the service date of the second claim for an NVAF medication of interest during the same selection window. The model estimates the current trends in national benchmark data for a hypothetical health plan with 1 million covered lives. The annual total direct healthcare costs (allowable and patient out-of-pocket costs) of managing patients with NVAF in this hypothetical plan are estimated at $184,981,245 ($25,754 per patient, for 7183 patients). A potential 25% improvement from the base-case disease burden and disease management could translate into TCoC savings from reducing the excess costs related to hypertension (-5.3%) and supporting the use of an appropriate antithrombotic treatment that prevents ischemic stroke (-0.7%) and reduces bleeding events (-0.1%). The use of the TCoC estimator supports population health management by providing real-world evidence benchmark data on NVAF disease burden and by quantifying the potential value of disease-management activities in shifting cost trends.
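    The headline figures in this record reduce to plain arithmetic. Treating the quoted lever percentages as shares of the total cost of care (an assumption; the abstract does not spell out the base), they can be reproduced as follows:

        # Per-patient cost and lever savings from the abstract's figures.
        total_cost = 184_981_245          # annual direct cost, USD
        n_patients = 7_183

        print(round(total_cost / n_patients))  # ~25,753; abstract rounds to $25,754

        # 25% improvement scenario: percentage change in TCoC per lever.
        levers = {"hypertension excess costs": -5.3,
                  "ischemic stroke prevention": -0.7,
                  "bleeding event reduction": -0.1}
        for name, pct in levers.items():
            print(name, round(total_cost * pct / 100))  # negative = savings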

  8. Leveraging Real-World Evidence in Disease-Management Decision-Making with a Total Cost of Care Estimator

    PubMed Central

    Nguyen, Thanh-Nghia; Trocio, Jeffrey; Kowal, Stacey; Ferrufino, Cheryl P.; Munakata, Julie; South, Dell

    2016-01-01

    Background Health management is becoming increasingly complex, given a range of care options and the need to balance costs and quality. The ability to measure and understand drivers of costs is critical for healthcare organizations to effectively manage their patient populations. Healthcare decision makers can leverage real-world evidence to explore the value of disease-management interventions in shifting total cost trends. Objective To develop a real-world, evidence-based estimator that examines the impact of disease-management interventions on the total cost of care (TCoC) for a patient population with nonvalvular atrial fibrillation (NVAF). Methods Data were collected from a patient-level real-world evidence data set that uses the IMS PharMetrics Health Plan Claims Database. Pharmacy and medical claims for patients meeting the inclusion or exclusion criteria were combined in longitudinal cohorts with a 180-day preindex and 360-day follow-up period. Descriptive statistics, such as mean and median patient costs and event rates, were derived from a real-world evidence analysis and were used to populate the base-case estimates within the TCoC estimator, an exploratory economic model that was designed to estimate the potential impact of several disease-management activities on the TCoC for a patient population with NVAF. Using Microsoft Excel, the estimator is designed to compare current direct costs of medical care to projected costs by varying assumptions on the impact of disease-management activities and applying the associated changes in cost trends to the affected populations. Disease-management levers are derived from literature-based concepts affecting costs along the NVAF disease continuum. The use of the estimator supports analyses across 4 US geographic regions, age, cost types, and care settings during 1 year. Results All patients included in the study were continuously enrolled in their health plan (within the IMS PharMetrics Health Plan Claims Database) between July 1, 2010, and June 30, 2012. Patients were included in the final analytic file and were indexed based on (1) the service date of the first claim within the selection window (December 28, 2010-July 11, 2011) with a diagnosis of NVAF, or (2) the service date of the second claim for an NVAF medication of interest during the same selection window. The model estimates the current trends in national benchmark data for a hypothetical health plan with 1 million covered lives. The annual total direct healthcare costs (allowable and patient out-of-pocket costs) of managing patients with NVAF in this hypothetical plan are estimated at $184,981,245 ($25,754 per patient, for 7183 patients). A potential 25% improvement from the base-case disease burden and disease management could translate into TCoC savings from reducing the excess costs related to hypertension (−5.3%) and supporting the use of an appropriate antithrombotic treatment that prevents ischemic stroke (−0.7%) and reduces bleeding events (−0.1%). Conclusions The use of the TCoC estimator supports population health management by providing real-world evidence benchmark data on NVAF disease burden and by quantifying the potential value of disease-management activities in shifting cost trends. PMID:28465775

  9. Doppler Imaging in Aortic Stenosis: The Importance of the Nonapical Imaging Windows to Determine Severity in a Contemporary Cohort.

    PubMed

    Thaden, Jeremy J; Nkomo, Vuyisile T; Lee, Kwang Je; Oh, Jae K

    2015-07-01

    Although the highest aortic valve velocity was thought to be obtained from imaging windows other than the apex in about 20% of patients with severe aortic stenosis (AS), its occurrence appears to be increasing as the age of patients has increased with the application of transcatheter aortic valve replacement. The aim of this study was to determine the frequency with which the highest peak jet velocity (Vmax) is found at each imaging window, the degree to which neglecting nonapical imaging windows underestimates AS severity, and factors influencing the location of the optimal imaging window in contemporary patients. Echocardiograms obtained in 100 consecutive patients with severe AS from January 3 to May 23, 2012, in which all imaging windows were interrogated, were retrospectively analyzed. AS severity (aortic valve area and mean gradient) was calculated on the basis of the apical imaging window alone and the imaging window with the highest peak jet velocity. The left ventricular-aortic root angle measured in the parasternal long-axis view as well as clinical variables were correlated with the location of highest peak jet velocity. Vmax was most frequently obtained in the right parasternal window (50%), followed by the apex (39%). Subjects with acute angulation more commonly had Vmax at the right parasternal window (65% vs 43%, P = .05) and less commonly had Vmax at the apical window (19% vs 48%, P = .005), but Vmax was still located outside the apical imaging window in 52% of patients with obtuse aortic root angles. If nonapical windows were neglected, 8% of patients (eight of 100) were misclassified from high-gradient severe AS to low-gradient severe AS, and another 15% (15 of 100) with severe AS (aortic valve area < 1.0 cm²) were misclassified as having moderate AS (aortic valve area > 1.0 cm²). In this contemporary cohort, Vmax was located outside the apical imaging window in 61% of patients, and neglecting the nonapical imaging windows resulted in the misclassification of AS severity in 23% of patients. Aortic root angulation as measured by two-dimensional echocardiography influences the location of Vmax modestly. Despite increasing time constraints on many echocardiography laboratories, these data confirm that routine Doppler interrogation from multiple imaging windows is critical to accurately determine the severity of AS in contemporary clinical practice. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
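    The operational point of this record is that severity must be graded from the highest velocity across all windows, not the apical window alone. A minimal sketch of that logic is below; the 4 m/s cut-off is the common guideline threshold for severe AS by peak velocity and is an assumption here, since the paper grades by valve area and mean gradient.

        # Pick Vmax across windows and show how apical-only grading fails.
        def vmax_across_windows(velocities):
            """velocities: dict of window name -> peak jet velocity (m/s)."""
            window = max(velocities, key=velocities.get)
            return window, velocities[window]

        def severe_by_velocity(v_m_s, threshold=4.0):
            return v_m_s >= threshold

        # Hypothetical patient whose best signal is right parasternal:
        v = {"apical": 3.6, "right parasternal": 4.3, "suprasternal": 3.8}
        window, vmax = vmax_across_windows(v)
        print(window, vmax)                      # right parasternal 4.3
        print(severe_by_velocity(v["apical"]))   # False -> underestimated
        print(severe_by_velocity(vmax))          # True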

  10. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  11. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  12. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  13. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  14. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  15. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

    We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontal and vertical oriented sampling windows of different size (320x160 µm, 160x80 µm and 80x40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone density estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results illustrated the high importance of specifying the size and orientation of the sampling window used to derive cone metric estimates to facilitate comparison of different studies. PMID:24009995

  16. The effects of window shape and reticle presence on performance in a vertical alignment task

    NASA Technical Reports Server (NTRS)

    Rosenberg, Erika L.; Haines, Richard F.; Jordan, Kevin

    1989-01-01

    This study was conducted to evaluate the effect of selected interior work-station orientational cuing upon the ability to align a target image with local vertical in the frontal plane. Angular error from gravitational vertical in an alignment task was measured for 20 observers viewing through two window shapes (square, round), two initial orientations of a computer-generated space shuttle image, and the presence or absence of a stabilized optical alignment reticle. In terms of overall accuracy, it was found that observer error was significantly smaller for the square window and reticle-present conditions than for the round window and reticle-absent conditions. Response bias data reflected an overall tendency to undershoot and greater variability of response in the round window/no reticle condition. These results suggest that environmental cuing information, such as that provided by square window frames and alignment reticles, may aid in subjective orientation and increase accuracy of response in a Space Station proximity operations alignment task.

  17. Infrared sensor and window system issues

    NASA Astrophysics Data System (ADS)

    Hargraves, Charles H., Jr.; Martin, James M.

    1992-12-01

    EO/IR windows are a significant challenge for the weapon system sensor designer who must design for high EO performance, low radar cross section (RCS), supersonic flight, durability, producibility and affordable initial and life cycle costs. This is particularly true in the 8 to 12 micron IR band, in which window materials and coating choices are limited by system design requirements. The requirements also drive the optimization of numerous mechanical, optical, materials, and electrical parameters. This paper addresses the EO/IR window as a system design challenge. The interrelationship of the optical, mechanical, and system design processes is examined. This paper presents a summary of the test results, trade studies and analyses that were performed for multi-segment, flight-worthy optical windows with superior optical performance at subsonic and supersonic aircraft velocities and reduced radar cross section. The impact of the window assembly on EO system modulation transfer function (MTF) and sensitivity, and the use of conductive coatings for shielding/signature control, are also discussed.

  18. Temperature rise and Heat build up inside a parked Car

    NASA Astrophysics Data System (ADS)

    Coady, Rose; Maheswaranathan, Ponn

    2001-11-01

    We have studied the heat build-up inside a parked car under the hot summer Sun. Inside and outside temperatures were monitored every ten seconds from 9 AM to about 4 PM for a 2000 Toyota Camry parked in a Winthrop University parking lot without any shades or trees. Two PASCO temperature sensors, one inside the car and the other outside the car, were used along with a PASCO-750 interface to collect the data. Data were collected under the following conditions while keeping track of the outside weather: fully closed windows, slightly open windows, half-way open windows, fully open windows, and with window shades inside and outside. Inside temperatures reached as high as 150 degrees Fahrenheit on a sunny day with an outside high temperature of about 100 degrees Fahrenheit. These results will be presented along with results from car cover and window tint manufacturers and suggestions to keep your car cool the next time you park it under the Sun.

  19. Assessing Thermal Comfort Due to a Ventilated Double Window

    NASA Astrophysics Data System (ADS)

    Carlos, Jorge S.; Corvacho, Helena

    2017-10-01

    Building design and its components are the result of a complex process, which should provide pleasant conditions for a building's inhabitants. Acceptable indoor comfort is therefore influenced by the architectural design. ISO and ASHRAE standards define thermal comfort as the condition of mind that expresses satisfaction with the thermal environment. The energy demand for heating, besides depending on the building's physical properties, also depends on human behaviour, such as opening or closing windows. Generally, windows are the weakest façade element with respect to thermal performance: their lower thermal resistance allows higher thermal conduction. When a window is very hot or cold and the occupant is very close to it, thermal discomfort may result. The functionality of a ventilated double window introduces new physical considerations relative to a traditional window. Consequently, it is necessary to study the local effect on human comfort as a function of the boundary conditions. Wind, solar availability, air temperature, and therefore heating and indoor air quality conditions will affect the relationship between this passive system and the indoor environment. In the present paper, the influence of thermal performance and ventilation on human comfort resulting from the construction and geometry solutions is shown, helping to choose the best solution. The presented approach shows that, in order to save energy, it is possible to reduce the air changes of a room to the minimum without compromising air quality, while simultaneously enhancing local thermal performance and comfort. The results of a study on the effect of two parallel windows with a ventilated channel in the same fenestration on comfort conditions, for several different room dimensions, are also presented. As the room dimension ratio changes, so does the window-to-floor ratio; therefore, under the same climatic conditions and the same construction solution, different results are obtained.

  20. Minimal Window Duration for Accurate HRV Recording in Athletes.

    PubMed

    Bourdillon, Nicolas; Schmitt, Laurent; Yazdani, Sasan; Vesin, Jean-Marc; Millet, Grégoire P

    2017-01-01

    Heart rate variability (HRV) is non-invasive and commonly used for monitoring responses to training loads, fitness, or overreaching in athletes. Yet, the recording duration for a series of RR-intervals varies from 1 to 15 min in the literature. The aim of the present work was to assess the minimum record duration to obtain reliable HRV results. RR-intervals from 159 orthostatic tests (7 min supine, SU, followed by 6 min standing, ST) were analyzed. Reference windows were 4 min in SU (min 3-7) and 4 min in ST (min 9-13). Those windows were subsequently divided and the analyses were repeated on eight different fractioned windows: the first min (0-1), the second min (1-2), the third min (2-3), the fourth min (3-4), the first 2 min (0-2), the last 2 min (2-4), the first 3 min (0-3), and the last 3 min (1-4). Correlation and Bland & Altman statistical analyses were systematically performed. The analysis window could be shortened to 0-2 instead of 0-4 for RMSSD only, whereas the 4-min window was necessary for LF and total power. Since there is a need for 1 min of baseline to obtain a steady signal prior to the analysis window, we conclude that studies relying on RMSSD may shorten the windows to 3 min (= 1+2) in SU or seated position only and to 6 min (= 1+2 min SU plus 1+2 min ST) if there is an orthostatic test. Studies relying on time- and frequency-domain parameters need a minimum of 5 min (= 1+4) in SU or seated position only but require 10 min (= 1+4 min SU plus 1+4 min ST) for the orthostatic test.
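    Since the paper's recommendation turns on RMSSD tolerating shorter windows, a compact sketch of the statistic and of the suggested 1 min baseline + 2 min analysis segmentation may help; the synthetic RR series below is illustrative only.

        import numpy as np

        def rmssd(rr_ms):
            """Root mean square of successive RR-interval differences (ms)."""
            rr = np.asarray(rr_ms, dtype=float)
            return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

        def analysis_window(rr_ms, baseline_s=60.0, window_s=120.0):
            """Discard the baseline minute, keep the next window_s seconds."""
            t = np.cumsum(rr_ms) / 1000.0   # elapsed time at each beat (s)
            keep = (t > baseline_s) & (t <= baseline_s + window_s)
            return np.asarray(rr_ms)[keep]

        rng = np.random.default_rng(0)
        rr = 900 + rng.normal(0, 30, size=250)  # ~3.75 min supine recording
        print(rmssd(analysis_window(rr)))       # RMSSD over minutes 1-3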

  1. Transmittance of semitransparent windows with absorbing cap-shaped droplets condensed on their backside

    NASA Astrophysics Data System (ADS)

    Zhu, Keyong; Pilon, Laurent

    2017-11-01

    This study aims to investigate systematically light transfer through semitransparent windows with absorbing cap-shaped droplets condensed on their backside, as encountered in greenhouses, solar desalination plants, photobioreactors and covered raceway ponds. The Monte Carlo ray-tracing method was used to predict the normal-hemispherical transmittance, reflectance, and normal absorptance accounting for reflection and refraction at the air/droplet, droplet/window, and window/air interfaces and absorption in both the droplets and the window. The droplets were monodisperse or polydisperse and arranged either in an ordered hexagonal pattern or randomly distributed on the backside with droplet contact angle θc ranging between 0 and 180°. The normal-hemispherical transmittance was found to be independent of the spatial distribution of droplets. However, it decreased with increasing droplet diameter and polydispersity. The normal-hemispherical transmittance featured four distinct optical regimes for a semitransparent window supporting nonabsorbing droplets. These optical regimes were defined based on the contact angle and the critical angle for internal reflection at the droplet/air interface. However, for strongly absorbing droplets, the normal-hemispherical transmittance (i) decreased monotonically with increasing contact angle for θc < 90° and (ii) remained constant and independent of droplet absorption index kd, droplet mean diameter dm, and contact angle θc for θc ≥ 90°. Analytical expressions for the normal-hemispherical transmittance were provided in the asymptotic cases when (1) the window was absorbing but the droplets were nonabsorbing with any contact angle θc, and (2) the droplets were strongly absorbing with contact angle θc > 90°. Finally, the spectral normal-hemispherical transmittance of a 3 mm-thick glass window supporting condensed water droplets for wavelengths between 0.4 and 5 μm was predicted and discussed in light of the earlier parametric study and asymptotic behavior.
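    In the droplet-free limit, the window transmittance this record parameterizes has a closed form: the normal-incidence transmittance of an absorbing slab with multiple internal reflections. The sketch below is a back-of-the-envelope check under assumed round-number optical constants, not the paper's Monte Carlo ray-tracing code.

        import math

        def slab_transmittance(n, k, thickness_m, wavelength_m):
            """Normal-incidence transmittance of an absorbing slab in air,
            including multiple internal reflections."""
            R = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)  # Fresnel
            alpha = 4.0 * math.pi * k / wavelength_m  # absorption coeff (1/m)
            tau = math.exp(-alpha * thickness_m)      # internal transmittance
            return (1.0 - R)**2 * tau / (1.0 - (R * tau)**2)

        # 3 mm glass at 0.5 um with assumed n = 1.5, k = 1e-7:
        print(slab_transmittance(1.5, 1e-7, 3e-3, 0.5e-6))  # ~0.92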

  2. Surgical anatomy of the round window-Implications for cochlear implantation.

    PubMed

    Luers, J C; Hüttenbrink, K B; Beutner, D

    2018-04-01

    The round window is an important portal for the application of active hearing aids and cochlear implants. Anatomical and topographical knowledge of the round window region is a prerequisite for successful insertion of a cochlear implant electrode. To sum up current knowledge about the round window anatomy and to give advice to the cochlear implant surgeon for optimal placement of an electrode. Systematic Medline search. Search term "round window[Title]" with no date restriction. Only publications in the English language were included. All abstracts were screened for relevance, that is, a focus on the surgical anatomy of the round window. The search results were supplemented with hand searching of selected reviews and reference lists from included studies. Subjective assessment. There is substantial variability in the size and shape of the round window. The round window is regarded as the most reliable surgical landmark to safely locate the scala tympani. Factors affecting the optimal trajectory line for atraumatic electrode insertion are the anatomy of the round window, the anatomy of the intracochlear hook region and the variable orientation and size of the cochlea's basal turn. The very close relation to the sensitive inner ear structures necessitates thorough anatomic knowledge and a careful insertion technique, especially when implanting patients with residual hearing. In order to avoid electrode migration between the scalae and to protect the modiolus and the basilar membrane, it is recommended to aim for an electrode insertion vector from postero-superior to antero-inferior. © 2017 John Wiley & Sons Ltd.

  3. Window Area and Development Drive Spatial Variation in Bird-Window Collisions in an Urban Landscape

    PubMed Central

    Hager, Stephen B.; Cosentino, Bradley J.; McKay, Kelly J.; Monson, Cathleen; Zuurdeeg, Walt; Blevins, Brian

    2013-01-01

    Collisions with windows are an important human-related threat to birds in urban landscapes. However, the proximate drivers of collisions are not well understood, and no study has examined spatial variation in mortality in an urban setting. We hypothesized that the number of fatalities at buildings varies with window area and habitat features that influence avian community structure. In 2010 we documented bird-window collisions (BWCs) and characterized avian community structure at 20 buildings in an urban landscape in northwestern Illinois, USA. For each building and season, we conducted 21 daily surveys for carcasses and nine point count surveys to estimate relative abundance, richness, and diversity. Our sampling design was informed by experimentally estimated carcass persistence times and detection probabilities. We used linear and generalized linear mixed models to evaluate how habitat features influenced community structure and how mortality was affected by window area and factors that correlated with community structure. The most-supported model was consistent for all community indices and included effects of season, development, and distance to vegetated lots. BWCs were related positively to window area and negatively to development. We documented mortalities for 16/72 (22%) species (34 total carcasses) recorded at buildings, and BWCs were greater for juveniles than adults. Based on the most-supported model of BWCs, the median number of annual predicted fatalities at study buildings was 3 (range = 0–52). These results suggest that patchily distributed environmental resources and levels of window area in buildings create spatial variation in BWCs within and among urban areas. Current mortality estimates place little emphasis on spatial variation, which precludes a fundamental understanding of the issue. To focus conservation efforts, we illustrate how knowledge of the structural and environmental factors that influence bird-window collisions can be used to predict fatalities in the broader landscape. PMID:23326420
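    As a rough sketch of the mortality model this record describes, carcass counts can be regressed on window area and development with a Poisson error structure. The authors used linear and generalized linear mixed models; the plain GLM below drops their random effects for brevity, and all column names and values are hypothetical.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "fatalities":  [0, 2, 5, 1, 9, 3],              # carcasses per building
            "window_area": [20, 45, 80, 30, 120, 60],       # m2 of glass
            "development": [0.9, 0.7, 0.4, 0.8, 0.2, 0.5],  # built-up fraction
        })

        fit = smf.glm("fatalities ~ window_area + development",
                      data=df, family=sm.families.Poisson()).fit()
        print(fit.params)   # expect + sign on window_area, - on development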

  4. Indocyanine green fluorescence in second near-infrared (NIR-II) window

    PubMed Central

    Bhavane, Rohan; Ghaghada, Ketan B.; Vasudevan, Sanjeev A.; Kaay, Alexander; Annapragada, Ananth

    2017-01-01

    Indocyanine green (ICG), an FDA-approved near infrared (NIR) fluorescent agent, is used in the clinic for a variety of applications including lymphangiography, intra-operative lymph node identification, tumor imaging, superficial vascular imaging, and marking ischemic tissues. These applications operate in the so-called “NIR-I” window (700–900 nm). Recently, imaging in the “NIR-II” window (1000–1700 nm) has attracted attention since, at longer wavelengths, photon absorption and scattering effects by tissue components are reduced, making it possible to image deeper into the underlying tissue. Agents for NIR-II imaging are, however, still in pre-clinical development. In this study, we investigated ICG as a NIR-II dye. The absorbance and NIR-II fluorescence emission of ICG were measured in different media (PBS, plasma and ethanol) for a range of ICG concentrations. In vitro and in vivo testing were performed using a custom-built spectral NIR assembly to facilitate simultaneous imaging in the NIR-I and NIR-II windows. In vitro studies using ICG were performed with capillary tubes (as a simulation of blood vessels) embedded in Intralipid solution and tissue phantoms to evaluate the depth of tissue penetration in the NIR-I and NIR-II windows. In vivo imaging using ICG was performed in nude mice to evaluate vascular visualization in the hind limb in the NIR-I and II windows. Contrast-to-noise ratios (CNR) were calculated for comparison of image quality in the NIR-I and NIR-II windows. ICG exhibited significant fluorescence emission in the NIR-II window and this emission (similar to the absorption profile) is substantially affected by the environment of the ICG molecules. In vivo imaging further confirmed the utility of ICG as a fluorescent dye in the NIR-II domain, with the CNR values being ~2 times those in the NIR-I window. The availability of an FDA-approved imaging agent could accelerate the clinical translation of NIR-II imaging technology. PMID:29121078

  5. Window area and development drive spatial variation in bird-window collisions in an urban landscape.

    PubMed

    Hager, Stephen B; Cosentino, Bradley J; McKay, Kelly J; Monson, Cathleen; Zuurdeeg, Walt; Blevins, Brian

    2013-01-01

    Collisions with windows are an important human-related threat to birds in urban landscapes. However, the proximate drivers of collisions are not well understood, and no study has examined spatial variation in mortality in an urban setting. We hypothesized that the number of fatalities at buildings varies with window area and habitat features that influence avian community structure. In 2010 we documented bird-window collisions (BWCs) and characterized avian community structure at 20 buildings in an urban landscape in northwestern Illinois, USA. For each building and season, we conducted 21 daily surveys for carcasses and nine point count surveys to estimate relative abundance, richness, and diversity. Our sampling design was informed by experimentally estimated carcass persistence times and detection probabilities. We used linear and generalized linear mixed models to evaluate how habitat features influenced community structure and how mortality was affected by window area and factors that correlated with community structure. The most-supported model was consistent for all community indices and included effects of season, development, and distance to vegetated lots. BWCs were related positively to window area and negatively to development. We documented mortalities for 16/72 (22%) species (34 total carcasses) recorded at buildings, and BWCs were greater for juveniles than adults. Based on the most-supported model of BWCs, the median number of annual predicted fatalities at study buildings was 3 (range = 0-52). These results suggest that patchily distributed environmental resources and levels of window area in buildings create spatial variation in BWCs within and among urban areas. Current mortality estimates place little emphasis on spatial variation, which precludes a fundamental understanding of the issue. To focus conservation efforts, we illustrate how knowledge of the structural and environmental factors that influence bird-window collisions can be used to predict fatalities in the broader landscape.

  6. Effects of the window openings on the micro-environmental condition in a school bus

    NASA Astrophysics Data System (ADS)

    Li, Fei; Lee, Eon S.; Zhou, Bin; Liu, Junjie; Zhu, Yifang

    2017-10-01

    The school bus is an important micro-environment for children's health because the level of in-cabin air pollution can increase due to the bus's own exhaust in addition to on-road traffic emissions. However, it has been challenging to understand the in-cabin air quality that is associated with complex airflow patterns inside and outside a school bus. This study conducted Computational Fluid Dynamics (CFD) modeling analyses to determine the effects of window openings on self-pollution in a school bus. Infiltration through the window gaps is modeled by applying variable numbers of active computational cells as a function of the effective area ratio of the opening. Experimental data on ventilation rates from the literature were used to validate the model. Ultrafine particles (UFPs) and black carbon (BC) concentrations were monitored in "real world" field campaigns using school buses. This modeling study examined the airflow pattern inside the school bus under four different types of side-window openings at 20, 40, and 60 mph (i.e., a total of 12 cases). We found that opening the driver's window could allow the infiltration of exhaust through window/door gaps in the back of the school bus, whereas opening windows in the middle of the school bus could mitigate this phenomenon. We also found that an increased driving speed (from 20 mph to 60 mph) could result in a higher ventilation rate (up to 3.4 times) and a lower mean age of air (down to 0.29 times) inside the bus.

  7. Frequency of open windows in motor vehicles under varying temperature conditions: a videotape survey in Central North Carolina during 2001.

    PubMed

    Long, Tom; Johnson, Ted; Ollison, Will

    2004-07-01

    Air pollution exposures in the motor vehicle cabin are significantly affected by air exchange rate, a function of vehicle speed, window position, vent status, fan speed, and air conditioning use. A pilot study conducted in Houston, Texas, during September 2000 demonstrated that useful information concerning the position of windows, sunroofs, and convertible tops as a function of temperature and vehicle speed could be obtained through the use of video recorders. To obtain similar data representing a wide range of temperature and traffic conditions, a follow-up study was conducted in and around Chapel Hill, North Carolina at five sites representing a central business district, an arterial road, a low-income commercial district, an interstate highway, and a rural road. Each site permitted an elevated view of vehicles as they proceeded through a turn, thereby exposing all windows to the stationary camcorder. A total of 32 videotaping sessions were conducted between February and October 2001, in which temperature varied from 41 degrees F to 93 degrees F and average vehicle speed varied from 21 to 77 mph. The resulting video tapes were processed to create a vehicle-specific database that included site location, date, time, vehicle type, vehicle color, vehicle age, window configuration, number of windows in each of three position categories (fully open, partially open, and closed), meteorological factors, and vehicle speed. Of the 4715 vehicles included in the database, 1905 (40.4%) were labeled as "open," indicating a window, sunroof, or convertible top was fully or partially open. Stepwise linear regression analyses indicated that "open" window status was affected by wind speed, relative humidity, vehicle speed, cloud cover, apparent temperature, day of week, time of day, vehicle type, vehicle age, vehicle color, number of windows, sunroofs, location, and air quality season. Open windows tended to occur less frequently when relative humidity was high, apparent temperature (a parameter incorporating wind chill and heat index) was below 50 degrees F, or the vehicle was relatively new. Although the effects of the identified parameters were relatively weak, they are statistically significant and should be considered by researchers attempting to model vehicle air exchange rates.
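    The abstract's "open window" findings lend themselves to a binary-outcome model. The sketch below fits a logistic regression on simulated data; the authors report stepwise linear regression, so the logistic form, the predictor subset, and all coefficients here are this editor's assumptions for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 200
        temp = rng.uniform(40, 95, n)    # apparent temperature, deg F
        hum = rng.uniform(20, 95, n)     # relative humidity, %
        speed = rng.uniform(20, 75, n)   # vehicle speed, mph
        # Made-up effect sizes: warm -> more open, humid/fast -> less open.
        lp = -1.0 + 0.06*(temp - 65) - 0.03*(hum - 55) - 0.02*(speed - 45)
        open_win = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))

        df = pd.DataFrame(dict(open_win=open_win, temp=temp,
                               hum=hum, speed=speed))
        fit = smf.logit("open_win ~ temp + hum + speed", data=df).fit(disp=0)
        print(fit.params)   # signs should mirror the simulated effects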

  8. Thermal damage study of beryllium windows used as vacuum barriers in synchrotron radiation beamlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holdener, F.R.; Johnson, G.L.; Karpenko, V.P.

    An experimental study to investigate thermal-induced damage to SSRL-designed beryllium foil windows was performed at LLNL's Laser Welding Research Facility. The primary goal of this study was to determine the threshold at which thermal-stress-induced damage occurs in these commonly used vacuum barriers. An Nd:Yag pulsed laser with cylindrical optics and a carefully designed test cell provided a test environment that closely resembles the actual beamline conditions at SSRL. Tests performed on two beryllium window geometries, with different vertical aperture dimensions but equal foil thicknesses of 0.254 mm, resulted in two focused total-power thresholds at which incipient damage was determined. For a beam spot size similar to that of the Beamline-X Wiggler Line, onset of surface damage for a 5-mm by 25-mm aperture window was observed at 170 W after 174,000 laser pulses (1.2-ms pulse at 100 pps). A second window with double the vertical aperture dimension (10 mm by 25 mm) was observed to have surface cracking after 180,000 laser pulses with 85 W impinging its front surface. It failed after approximately 1,000,000 pulses. Another window of the same type (10 mm by 25 mm) received 2,160,000 laser pulses at 74.4 W, and subsequent metallographic sectioning revealed no signs of through-thickness damage. Comparison of windows with equal foil thicknesses and aperture dimensions has effectively identified the heat flux limit for incipient failure. The data show that halving the aperture's vertical dimension allows doubling the total incident power for equivalent onsets of thermal-induced damage.

  9. Impact of Windows and Daylight Exposure on Overall Health and Sleep Quality of Office Workers: A Case-Control Pilot Study

    PubMed Central

    Boubekri, Mohamed; Cheung, Ivy N.; Reid, Kathryn J.; Wang, Chia-Hui; Zee, Phyllis C.

    2014-01-01

    Study Objective: This research examined the impact of daylight exposure on the health of office workers from the perspective of subjective well-being and sleep quality as well as actigraphy measures of light exposure, activity, and sleep-wake patterns. Methods: Participants (N = 49) included 27 workers working in windowless environments and 22 comparable workers in workplaces with significantly more daylight. Windowless environment is defined as one without any windows or one where workstations were far away from windows and without any exposure to daylight. Well-being of the office workers was measured by Short Form-36 (SF-36), while sleep quality was measured by Pittsburgh Sleep Quality Index (PSQI). In addition, a subset of participants (N = 21; 10 workers in windowless environments and 11 workers in workplaces with windows) had actigraphy recordings to measure light exposure, activity, and sleep-wake patterns. Results: Workers in windowless environments reported poorer scores than their counterparts on two SF-36 dimensions—role limitation due to physical problems and vitality—as well as poorer overall sleep quality from the global PSQI score and the sleep disturbances component of the PSQI. Compared to the group without windows, workers with windows at the workplace had more light exposure during the workweek, a trend toward more physical activity, and longer sleep duration as measured by actigraphy. Conclusions: We suggest that architectural design of office environments should place more emphasis on sufficient daylight exposure of the workers in order to promote office workers' health and well-being. Citation: Boubekri M, Cheung IN, Reid KJ, Wang CH, Zee PC. Impact of windows and daylight exposure on overall health and sleep quality of office workers: a case-control pilot study. J Clin Sleep Med 2014;10(6):603-611. PMID:24932139

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandor, Debra; Chung, Donald; Keyser, David

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  11. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  12. Egg storage duration and hatch window affect gene expression of nutrient transporters and intestine morphological parameters of early hatched broiler chicks.

    PubMed

    Yalcin, S; Gursel, I; Bilgen, G; Izzetoglu, G T; Horuluoglu, B H; Gucluer, G

    2016-05-01

    In recent years, researchers have placed emphasis on the differences in physiological parameters between early and late hatched chicks within a hatch window. However, despite the importance of intestinal development in newly hatched chicks, changes in gene expression of nutrient transporters in the jejunum of early hatched chicks within a hatch window have not yet been studied. This study was conducted to determine the effects of egg storage duration before incubation and hatch window on intestinal development and expression of PepT1 (H+-dependent peptide transporter) and SGLT1 (sodium-glucose co-transporter) genes in the jejunum of early hatched broiler chicks within a 30 h hatch window. A total of 1218 eggs obtained from 38-week-old Ross 308 broiler breeder flocks were stored for 3 (ES3) or 14 days (ES14) and incubated under the same conditions. Eggs were checked between 475 and 480 h of incubation and 40 chicks from each egg storage duration were weighed; chick length and rectal temperature were measured. The chicks were sampled to evaluate morphological parameters and PepT1 and SGLT1 expression. The remaining chicks that hatched between 475 and 480 h were placed back in the incubator and the same measurements were conducted on those chicks at the end of the hatch window at 510 h of incubation. Chick length, chick dry matter content, rectal temperature and weight of small intestine segments increased, whereas chick weight decreased during the hatch window. The increases in jejunum length and villus width and area during the hatch window were higher for ES3 than ES14 chicks. PepT1 expression was higher for ES3 chicks compared with ES14. There was a 10.2- and 17.6-fold increase in PepT1 and SGLT1 expression of ES3 chicks at the end of the hatch window, whereas it was only 2.3- and 3.3-fold, respectively, for ES14 chicks. These results suggested that egg storage duration affected the development of early hatched chicks during the 30 h hatch window. It can be concluded that ES14 chicks would be less efficiently adapted to the absorption of carbohydrates and protein than those from ES3 at the end of the hatch window.

  13. Advanced Imaging Approaches to Characterize Stromal and Metabolic Changes in In Vivo Mammary Tumor Models

    DTIC Science & Technology

    2013-03-01

    Preliminary fluorescence lifetime images were also collected intravitally through a mammary imaging window implanted in a female, PyVT-positive, Col1a1 heterozygote mouse (Figure 7). The authors intend to use this characterization to understand shifts in fluorescence lifetime collected by intravital imaging using a mammary imaging window.

  14. Varying behavior of different window sizes on the classification of static and dynamic physical activities from a single accelerometer.

    PubMed

    Fida, Benish; Bernabucci, Ivan; Bibbo, Daniele; Conforto, Silvia; Schmid, Maurizio

    2015-07-01

    The accuracy of systems that recognize activities of daily living in real time depends heavily on the signal segmentation step. Windowing approaches are typically used to segment the data, and the window size is usually chosen on the basis of previous studies. However, the literature says little about how window size affects recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, taking transitions between activities into account. The study was conducted on nine participants who wore a tri-axial accelerometer on the waist and performed short-duration activities (sitting, standing, and transitions between activities) and long-duration activities (walking, stair descending, and stair ascending). Five different classifiers were tested; among the window sizes examined, 1.5 s represented the best trade-off in recognition across activities, with an accuracy well above 90%. Differences in recognition accuracy across activities highlight the utility of developing adaptive segmentation criteria based on activity duration.
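
    The segmentation the abstract refers to is a sliding-window pass over the raw accelerometer stream. The minimal Python sketch below (not the authors' code) shows 1.5 s windows with an assumed 50 Hz sampling rate, 50% overlap, and a toy mean/standard-deviation feature set.

    ```python
    import numpy as np

    def segment_windows(signal, fs, win_s=1.5, overlap=0.5):
        """Split an (n_samples, 3) tri-axial accelerometer signal into
        fixed-length windows. win_s is the window length in seconds;
        overlap is the fraction shared by consecutive windows."""
        win = int(win_s * fs)
        step = max(1, int(win * (1.0 - overlap)))
        return [signal[start:start + win]
                for start in range(0, len(signal) - win + 1, step)]

    def basic_features(window):
        """Per-axis mean and standard deviation: a minimal feature set
        often fed to classifiers in activity recognition."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    # Example: 60 s of synthetic data at an assumed 50 Hz sampling rate
    # (the abstract does not state the rate used in the study).
    fs = 50
    data = np.random.randn(60 * fs, 3)
    windows = segment_windows(data, fs)            # 1.5 s windows
    X = np.stack([basic_features(w) for w in windows])
    print(X.shape)  # (n_windows, 6), ready for a classifier
    ```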

  15. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages because of the radioactivity in the reactor structural materials. A good estimate of the neutron activation products distributed in these materials has an obvious impact on decommissioning planning and on low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To extend the application of TRIPOLI-4 to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. Calculating the neutron activation of the shielding and structural materials of nuclear facilities first requires knowledge of the 3D neutron flux map and energy spectra. To perform this type of deep-penetration neutron calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimate. In this study, the variance reduction options of the TRIPOLI-4 code were exercised on the NAIADE 1 light water shielding benchmark, whose documentation is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work to ease code validation. Fission neutron transport was computed in light water for penetration depths up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measured and calculated results were benchmarked against each other, and the variance reduction options and their performance were discussed and compared.
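
    TRIPOLI-4's variance reduction machinery is specific to that code, but the core idea behind deep-penetration biasing can be shown on a toy problem. The Python sketch below (an illustration, not TRIPOLI-4 code) compares an analog Monte Carlo estimate of transmission through a slab 10 mean free paths thick with a path-length-stretched, exponential-transform-style estimate; the stretching parameter is an assumed heuristic choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    tau = 10.0          # slab optical thickness in mean free paths
    n = 100_000

    # Analog: sample free paths from Exp(1); transmission if path > tau.
    s = rng.exponential(1.0, n)
    analog = (s > tau).astype(float)

    # Path-length stretching: sample from a biased density a*exp(-a*s)
    # with a < 1, and carry the weight w = true_pdf / biased_pdf so the
    # estimator stays unbiased.
    a = 1.0 / tau                      # assumed heuristic stretching parameter
    s_b = rng.exponential(1.0 / a, n)  # Exp with rate a has mean 1/a
    w = np.exp(-s_b) / (a * np.exp(-a * s_b))
    stretched = w * (s_b > tau)

    exact = np.exp(-tau)
    for name, est in [("analog", analog), ("stretched", stretched)]:
        print(f"{name:9s} mean={est.mean():.3e} "
              f"rel.err={est.std(ddof=1) / np.sqrt(n) / exact:.2%}")
    print(f"exact     {exact:.3e}")
    ```

    The biased estimator has the same mean but a far smaller relative error at this depth, which is exactly why such techniques are indispensable for deep-penetration calculations.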

  16. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might look like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and on previous and ongoing data analyses in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes between Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having this additional information on outcomes. We found that if admission severity of illness at non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.
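
    The threshold effect the authors describe can be reproduced with a deliberately crude simulation (this is an illustration of the mechanism, not the authors' model): when admission requires crossing a severity threshold, shifting that threshold alone changes crude mortality, even with identical underlying care.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crude_mortality(threshold, n=200_000):
        """Crude mortality among admitted patients when admission requires
        severity > threshold. The severity distribution and logistic risk
        model are invented for illustration."""
        severity = rng.normal(0.0, 1.0, n)
        p_death = 1.0 / (1.0 + np.exp(-(severity - 2.5)))  # toy risk model
        return p_death[severity > threshold].mean()

    # Hospital A's admission threshold is fixed; hospital B's drifts.
    # B's crude mortality rises with its threshold, so A looks relatively
    # better or worse with no change in the quality of care at either site.
    for thr_b in (0.0, 0.5, 1.0):
        print(f"A (thr=0.50): {crude_mortality(0.5):.3f}   "
              f"B (thr={thr_b:.2f}): {crude_mortality(thr_b):.3f}")
    ```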

  17. Techniques and Results for Determining Window Placement and Configuration for the Small Pressurized Rover (SPR)

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby; Litaker, Harry; Howard, Robert

    2009-01-01

    A natural component of driving any type of vehicle, be it Earth-based or space-based, is visibility. In its simplest form, visibility is a measure of the distance at which an object can be seen. For the National Aeronautics and Space Administration's (NASA) Space Shuttle and the International Space Station (ISS), human factors design guidelines for windows exist. However, for planetary exploration vehicles, especially land-based vehicles, relatively little has been written on the importance of windows. The goal of the current study was to devise a proper methodology and to obtain preliminary human-in-the-loop data on window placement and configuration for the small pressurized rover (SPR). Nine participants evaluated multiple areas along the vehicle's front "nose" while actively maneuvering through several lunar driving simulations. Subjective data were collected on seven different aspects covering areas of necessity, frequency of views, and placement/configuration of windows, using questionnaires and composite drawings. Results indicated a desire for a large horizontal field-of-view window spanning the front of the vehicle for most driving situations, with slightly reduced window areas for the lower front, lower corners, and side views.

  18. Cross-industry benchmarking: is it applicable to the operating room?

    PubMed

    Marco, A P; Hart, S

    2001-01-01

    The use of benchmarking has been growing in nonmedical industries. This concept is being increasingly applied to medicine as the industry strives to improve quality and improve financial performance. Benchmarks can be either internal (set by the institution) or external (use other's performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.

  19. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  20. Celebrations. Windows on Social Studies: Multicultural Adventures through Literature.

    ERIC Educational Resources Information Center

    Westley, Joan; Melton, Holly

    This resource book is one in a series containing lesson plans for grades 1-3 designed to support children's literature books that share familiar social studies themes. "Celebrations" presents eight different children's books related to the theme. For each book, social studies concepts are presented, followed by four activities called "windows." Some…
