Sample records for technical performance metrics

  1. Metric Supplement to Technical Drawing.

    ERIC Educational Resources Information Center

    Henschel, Mark

    This manual is intended for use in training persons whose vocations involve technical drawing to use the metric system of measurement. It could be used in a short course designed for that purpose or for individual study. The manual begins with a brief discussion of the rationale for conversion to the metric system. It then provides a…

  2. Metric Education; A Position Paper for Vocational, Technical and Adult Education.

    ERIC Educational Resources Information Center

    Cooper, Gloria S.; And Others

    Part of an Office of Education three-year project on metric education, the position paper is intended to alert and prepare teachers, curriculum developers, and administrators in vocational, technical, and adult education to the change over to the metric system. The five chapters cover issues in metric education, what the metric system is all…

  3. Performance Metrics for Liquid Chromatography-Tandem Mass Spectrometry Systems in Proteomics Analyses*

    PubMed Central

    Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.

    2010-01-01

    A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
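The "<10% variation" figure above is a coefficient-of-variation check that is easy to reproduce. A minimal sketch with hypothetical replicate values (the metric and numbers below are illustrative, not taken from the paper):

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: relative spread of one metric across replicate runs."""
    mean = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / mean

# Hypothetical values of one chromatographic metric (e.g., median peak
# width in seconds) across four replicate LC-MS/MS runs.
peak_widths = [14.2, 14.8, 13.9, 14.5]
cv = coefficient_of_variation(peak_widths)
assert cv < 10.0  # a stable system typically shows <10% variation
```

Tracking such CVs per metric over time flags drifting system components before they visibly degrade identifications.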

  4. Metric Conversion in the Construction Industries--Technical Issues and Status.

    ERIC Educational Resources Information Center

    Milton, Hans J.; Berry, Sandra A.

    This Special Publication was prepared at the request of the Metric Symposium Planning Committee of the National Institute of Building Sciences (NIBS). It is intended to provide information on technical issues and status of metric conversion in the United States construction industries. It was made available to attendees at the NIBS Symposium on…

  5. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper; they may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. The development team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of the performance measurements. The development of a system of metrics is no different from the development of any other type of system: it includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  6. Measure for Measure: A Guide to Metrication for Workshop Crafts and Technical Studies.

    ERIC Educational Resources Information Center

    Schools Council, London (England).

    This booklet is designed to help teachers of the industrial arts in Great Britain during the changeover to metric units which is due to be substantially completed during the period 1970-1975. General suggestions are given for adapting equipment in metalwork and engineering and woodwork and technical drawing by adding some metric equipment…

  7. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 expert and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Years 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). The Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with intraclass correlation coefficients of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular…

  8. Assessing technical performance in differential gene expression experiments with external spike-in RNA control ratio mixtures.

    PubMed

    Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc

    2014-09-25

    There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.

  9. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested, and guidelines provided, to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, such as prognostic horizon and alpha-lambda performance, and quantify the corresponding performance while incorporating the uncertainty information.
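The alpha-lambda idea mentioned above can be sketched for point estimates: at each evaluation time, the predicted remaining useful life (RUL) must fall within an alpha band around the true RUL. The paper's full version also handles probability distributions; the RUL values below are hypothetical:

```python
def alpha_lambda_pass(rul_true, rul_pred, alpha=0.2):
    """Alpha-lambda check at one evaluation time: does the predicted
    remaining useful life fall within +/- alpha of the true RUL?"""
    lo = (1 - alpha) * rul_true
    hi = (1 + alpha) * rul_true
    return lo <= rul_pred <= hi

# Hypothetical point predictions at increasing fractions of life.
# Early predictions are typically worse; they converge as failure nears.
true_rul = [100, 75, 50, 25]
pred_rul = [130, 85, 54, 24]
passes = [alpha_lambda_pass(t, p) for t, p in zip(true_rul, pred_rul)]
assert passes == [False, True, True, True]
```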

  10. Texture metric that predicts target detection performance

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.

    2015-12-01

    Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance for the nine military vehicles in complex natural scenes found in the Search_2 dataset is presented. Comparison is also made with four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
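The GLCE statistics are specific to the paper, but the underlying co-occurrence "energy" quantity is the standard gray-level co-occurrence matrix (GLCM) angular second moment. A minimal sketch for one pixel offset, using illustrative 8x8 patches rather than the Search_2 imagery:

```python
import numpy as np

def glcm_energy(img, dx=1, dy=0, levels=4):
    """Energy (angular second moment) of the gray-level co-occurrence
    matrix for one pixel offset; lower energy suggests more local clutter."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()          # normalize counts to probabilities
    return float((p ** 2).sum())   # energy = sum of squared probabilities

# A uniform patch has maximal energy; a noisy patch has lower energy.
uniform = np.zeros((8, 8), dtype=int)
noisy = np.random.default_rng(0).integers(0, 4, size=(8, 8))
assert glcm_energy(uniform) == 1.0
assert glcm_energy(noisy) < glcm_energy(uniform)
```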

  11. Objective assessment based on motion-related metrics and technical performance in laparoscopic suturing.

    PubMed

    Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J

    2017-02-01

    The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and to determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method have been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing.
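The path-length metric above is straightforward to compute from tracked instrument-tip positions. A sketch with hypothetical coordinates (the study used an optical pose tracker; the points below are made up):

```python
import math

def path_length(positions):
    """Total distance travelled by an instrument tip, given a sequence
    of (x, y, z) positions sampled by a pose tracker."""
    return sum(math.dist(p, q) for p, q in zip(positions, positions[1:]))

# Hypothetical tracked positions in millimetres for a short motion.
track = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]
assert path_length(track) == 17.0  # 5 mm + 12 mm
```

Shorter path length at equal task quality indicates more economical instrument movement, which is what separated experienced from novice surgeons here.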

  12. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis methods, and metrics used to assess a QIB for clinical use. It is therefore difficult, or not possible, to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831

  13. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
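The paper's task-specific summary equations are not given in the abstract. A generic way to combine validated individual metrics into a single score is z-score averaging against a reference cohort, sketched here with hypothetical norms and metric names:

```python
import statistics

def summary_score(metric_values, norms):
    """Combine individual task metrics into one summary score by
    averaging z-scores against reference (e.g., novice) norms.
    `norms` maps metric name -> (mean, sd); lower raw values are
    assumed better here (time, path length), so z is negated."""
    zs = []
    for name, value in metric_values.items():
        mean, sd = norms[name]
        zs.append(-(value - mean) / sd)
    return statistics.mean(zs)

# Hypothetical novice-cohort norms for one simulator task.
norms = {"time_s": (120.0, 30.0), "path_mm": (4000.0, 1000.0)}
expert = {"time_s": 60.0, "path_mm": 2500.0}
novice = {"time_s": 125.0, "path_mm": 4100.0}
assert summary_score(expert, norms) > summary_score(novice, norms)
```

Only metrics that individually showed construct validity should feed such a summary, which is exactly the filtering step the study performed.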

  14. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis methods, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult, or not possible, to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined.
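Two of the metrology quantities named above have standard closed forms: bias is the mean difference from truth, and repeatability is commonly summarized by the repeatability coefficient RC = 1.96·√2·wSD (about 2.77 times the within-subject SD). A sketch with hypothetical readings:

```python
import math
import statistics

def bias(measured, truth):
    """Mean difference between measurements and known true values."""
    return statistics.mean(m - t for m, t in zip(measured, truth))

def repeatability_coefficient(within_subject_sd):
    """RC = 1.96 * sqrt(2) * wSD (~2.77 * wSD): the difference below
    which two repeated measurements agree with 95% probability."""
    return 1.96 * math.sqrt(2) * within_subject_sd

truth    = [10.0, 20.0, 30.0]
measured = [10.5, 20.4, 30.6]   # hypothetical biomarker readings
assert abs(bias(measured, truth) - 0.5) < 1e-9
assert abs(repeatability_coefficient(1.0) - 2.772) < 0.01
```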

  15. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
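The claim that bias and mean square error follow from the linear error model's parameters can be checked numerically. Assuming the model y = a + b·x + e with Gaussian noise (the parameter values below are illustrative), the empirical metrics match the analytic predictions:

```python
import random
import statistics

# Additive linear error model: y = a + b*x + e, e ~ N(0, sigma^2).
rng = random.Random(42)
a, b, sigma = 1.0, 0.9, 0.5                # assumed model parameters
x = [rng.gauss(10.0, 2.0) for _ in range(50_000)]   # reference ("truth")
y = [a + b * xi + rng.gauss(0.0, sigma) for xi in x]

diffs = [yi - xi for xi, yi in zip(x, y)]
bias = statistics.mean(diffs)                  # empirical bias
mse = statistics.mean(d * d for d in diffs)    # empirical mean square error

# The same metrics predicted analytically from (a, b, sigma) alone:
mu, var = 10.0, 4.0                            # moments of the reference x
pred_bias = a + (b - 1) * mu                   # = 0.0
pred_mse = pred_bias ** 2 + (b - 1) ** 2 * var + sigma ** 2   # = 0.29
assert abs(bias - pred_bias) < 0.02
assert abs(mse - pred_mse) < 0.02
```

This is the paper's point in miniature: the three parameters (a, b, sigma) carry all the information the derived metrics summarize.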

  16. DEVELOPMENT OF METRICS FOR PROTOCOLS AND OTHER TECHNICAL PRODUCTS.

    PubMed

    Veiga, Daniela Francescato; Ferreira, Lydia Masako

    2015-01-01

    To develop a proposal for metrics for protocols and other technical products to be applied in assessing the Postgraduate Programs of Medicine III - Capes. The 2013 area documents of all 48 Capes areas were read. From an analysis of the criteria used by the areas in the 2013 Triennial Assessment, a proposal for metrics for protocols and other technical products was developed to be applied in assessing the Postgraduate Programs of Medicine III. This proposal was based on the criteria of the Biological Sciences I and Interdisciplinary areas. Only seven areas described a scoring system for technical products, and the products considered and the scoring varied widely. Because a wide range of different technical products could be considered relevant, and they would not be scored if not previously specified, a proposal for metrics was developed for Medicine III in which five specific criteria are analyzed: Demand, Relevance/Impact, Scope, Complexity, and Adherence to the Program. Based on these criteria, each product can receive 10 to 100 points. This proposal can be applied to the Intellectual Production item of the evaluation form, in the subsection "Technical production, patents and other relevant production". A program will be scored as Very Good when it reaches a mean of ≥150 points per permanent professor per quadrennium; Good, a mean between 100 and 149 points; Regular, a mean between 60 and 99 points; Weak, a mean between 30 and 59 points; and Insufficient, up to 29 points per permanent professor per quadrennium.
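The proposed thresholds map directly to a rating function; a sketch encoding the point bands stated above:

```python
def program_rating(mean_points_per_professor):
    """Rating from mean technical-production points per permanent
    professor per quadrennium, using the thresholds proposed for
    Medicine III."""
    p = mean_points_per_professor
    if p >= 150:
        return "Very Good"
    if p >= 100:
        return "Good"
    if p >= 60:
        return "Regular"
    if p >= 30:
        return "Weak"
    return "Insufficient"

assert program_rating(155) == "Very Good"
assert program_rating(120) == "Good"
assert program_rating(29) == "Insufficient"
```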

  17. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is due in part to the varied end-user requirements for different applications, time scales, available information, and domain dynamics, among other factors. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.

  18. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, the Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
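The model-less root-mean-square distance metric is straightforward to sketch for two equal-length motion sequences; the trajectories below are hypothetical, not Kinect captures:

```python
import math

def rms_distance(seq_a, seq_b):
    """Root-mean-square distance between two equal-length motion
    sequences (a model-less metric on raw tracked positions)."""
    n = len(seq_a)
    return math.sqrt(
        sum(math.dist(p, q) ** 2 for p, q in zip(seq_a, seq_b)) / n
    )

# Hypothetical single-joint trajectories: reference vs. patient attempt.
reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
attempt   = [(0.0, 0.3), (1.0, 0.0), (2.0, 0.4)]
d = rms_distance(reference, attempt)
assert 0.0 < d < 0.3  # small deviation from the prescribed motion
```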

  19. 75 FR 7581 - RTO/ISO Performance Metrics; Notice Requesting Comments on RTO/ISO Performance Metrics

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... of staff from all the jurisdictional ISOs/RTOs to develop a set of performance metrics that the ISOs/RTOs will use to report annually to the Commission. Commission staff and representatives from the ISOs...

  20. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  21. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location, and understand performance trends among various users.
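The non-uniform density follows from uniform motion along an inclined circular orbit: the sub-satellite point lingers near the turnaround latitudes. A sketch of the dwell-time fraction in a northern latitude belt, assuming the 55° GPS inclination and ignoring Earth rotation and visibility geometry:

```python
import math

INCLINATION = math.radians(55.0)  # GPS orbital inclination

def dwell_fraction(lat1_deg, lat2_deg):
    """Fraction of one circular orbit spent between two northern
    latitudes, assuming uniform motion in argument of latitude u,
    where sin(lat) = sin(i) * sin(u)."""
    def u(lat_deg):
        s = math.sin(math.radians(lat_deg)) / math.sin(INCLINATION)
        return math.asin(min(1.0, s))
    return (u(lat2_deg) - u(lat1_deg)) / math.pi

# Density peaks near the +/-55 deg turnaround latitudes: a 2-degree
# belt near 54N (Tierra del Fuego's latitude, mirrored) collects over
# four times the dwell time of an equal belt near Hawaii's ~20N.
near_turnaround = dwell_fraction(53.0, 55.0)
near_hawaii = dwell_fraction(19.0, 21.0)
assert near_turnaround > 4 * near_hawaii
```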

  22. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

    Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members and organizations to execute essential tasks." The lack of consensus on operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives of this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the research needed to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts.
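
    As a quick sanity check, the per-domain counts reported above do sum to the 150 metrics cited. A minimal sketch (domain labels paraphrased from the abstract):

```python
# Per-domain metric counts as reported in the abstract above.
hpo_toolkit = {
    "physical fitness": 29,
    "nutrition": 24,
    "psychological status": 36,
    "cognitive performance": 35,
    "environment": 12,
    "sleep": 9,
    "pain": 5,
}

total_metrics = sum(hpo_toolkit.values())
print(total_metrics)  # 150
```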

  3. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, B K; Glenzer, S; Edwards, M J

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off-nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  4. The importance of metrics for evaluating scientific performance

    NASA Astrophysics Data System (ADS)

    Miyakawa, Tsuyoshi

    Evaluation of scientific performance is a major factor that determines the behavior of both individual researchers and the academic institutes to which they belong. Because the number of researchers far outweighs the number of available research posts, and competitive funding accounts for an ever-increasing proportion of the research budget, objective indicators of research performance have gained recognition for increasing transparency and openness. It is common practice to use metrics and indices to evaluate a researcher's performance or the quality of their grant applications. Such measures include the number of publications, the number of times these papers are cited and, more recently, the h-index, which measures the number of highly cited papers the researcher has written. However, academic institutions and funding agencies in Japan have been rather slow to adopt such metrics. In this article, I will outline some of the currently available metrics, and discuss why we need to use such objective indicators of research performance more often in Japan. I will also discuss how to promote the use of metrics and what we should keep in mind when using them, as well as their potential impact on the research community in Japan.
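
    The h-index mentioned above has a simple operational definition: the largest h such that the researcher has h papers with at least h citations each. A minimal sketch (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```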

  5. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
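
    The Dice index underlying the proposed identification metric is a set-overlap measure: for an identified set A and true constituent set B it is 2|A∩B| / (|A| + |B|). A minimal sketch (the chemical names are hypothetical placeholders, not taken from the paper):

```python
def dice_index(identified, truth):
    """Dice index between identified and true sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(identified), set(truth)
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Two of three identified constituents are correct; one true constituent found twice over.
print(dice_index({"SF6", "NH3", "DMMP"}, {"SF6", "NH3"}))  # 0.8
```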

  6. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
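
    One standard way to pool per-study estimates (such as repeatability coefficients) across studies, as in the meta-analyses discussed above, is a random-effects model. The sketch below uses the classical DerSimonian-Laird estimator of between-study variance; the paper reviews this family of methods and small-study alternatives, and the numbers here are invented:

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate and its standard error (DerSimonian-Laird)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Three hypothetical study-level repeatability estimates with their variances.
pooled, se = dersimonian_laird([0.12, 0.18, 0.15], [0.001, 0.002, 0.0015])
print(round(pooled, 3))
```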

  7. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. PMID:24872353

  8. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently, and that suitable metrics may be chosen depending on the requirements and constraints. Beyond these results, these metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
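
    Among prognostics-specific metrics of this kind, an α-λ style accuracy check asks whether a RUL prediction made at a given time lies within ±α of the true RUL at that time. A minimal sketch (α = 0.2 is an illustrative choice, not a value from the paper):

```python
def alpha_lambda(true_rul, predicted_rul, alpha=0.2):
    """True if the prediction lies within +/- alpha * true RUL of the true value."""
    lower = (1 - alpha) * true_rul
    upper = (1 + alpha) * true_rul
    return lower <= predicted_rul <= upper

print(alpha_lambda(100, 110))  # True: within the +/-20% cone
print(alpha_lambda(100, 130))  # False: outside the cone
```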

  9. METRICS DEVELOPMENT FOR THE QUALIS OF SOFTWARE TECHNICAL PRODUCTION.

    PubMed

    Scarpi, Marinho Jorge

    2015-01-01

    To recommend metrics to qualify software production and to propose guidelines for the CAPES quadrennial evaluation of the Post-Graduation Programs of Medicine III on this issue. Identification of the quality features of the development process, of the product attributes, and of software use, as determined by the Brazilian Association of Technical Standards (ABNT), the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), important from the perspective of the CAPES Medicine III Area and correlate users, as the basis for a proposal of metrics intended for use in the four-year evaluation of Medicine III. The in-use software quality perceived by the user results from the effectiveness, productivity, security and satisfaction provided, which originate from the software's characteristics of functionality, reliability, usability, efficiency, maintainability and portability (quality-in-use metrics). This perception depends on the specific use scenario. The software metrics should be included in the intellectual production of the program, considering the system behavior measurements obtained from users' performance evaluation as the sum of favorable-response points for the six quality-in-use metrics (27 sub-items, 0 to 2 points each) and for quality perception proof (four items, 0 to 10 points each). Scores will be considered very good (VG) for 85 to 94 points; good (G) for 75 to 84 points; regular (R) for 65 to 74 points; weak (W) for 55 to 64 points; and poor (P) for <55 points.
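
    The scoring scheme described above caps at 94 points (27 sub-items × 2 plus 4 items × 10 gives 54 + 40 = 94). A hypothetical sketch of the grading bands, with function and variable names invented for illustration:

```python
def qualis_grade(in_use_scores, perception_scores):
    """Grade from 27 sub-items scored 0-2 plus 4 perception items scored 0-10."""
    assert len(in_use_scores) == 27 and all(0 <= s <= 2 for s in in_use_scores)
    assert len(perception_scores) == 4 and all(0 <= s <= 10 for s in perception_scores)
    total = sum(in_use_scores) + sum(perception_scores)  # maximum 94 points
    if total >= 85:
        return "VG"
    if total >= 75:
        return "G"
    if total >= 65:
        return "R"
    if total >= 55:
        return "W"
    return "P"

print(qualis_grade([2] * 27, [10, 10, 9, 8]))  # VG (91 points)
```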

  10. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Although typically a proxy for high-quality outcomes, there has been little systematic investigation of the potential negative unintended consequences of performance metrics, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed.

  11. Metrics for building performance assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koles, G.; Hitchcock, R.; Sherman, M.

    This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurance (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than direct costs and benefits to the building owner and building occupants. The level of detail given to the various issues reflects the current state of knowledge in those scientific areas and the ability of the authors to determine that state of knowledge, rather than directly reflecting the importance of these issues; the report intentionally does not focus specifically on energy issues. The report describes work in progress and is intended as a resource that can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  12. Snow removal performance metrics : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This document is the final report for the Clear Roads project entitled Snow Removal Performance Metrics. The project team was led by researchers at Washington State University on behalf of Clear Roads, an ongoing pooled fund research effort focused o...

  13. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  14. Performance Metrics for Soil Moisture Retrievals and Applications Requirements

    USDA-ARS?s Scientific Manuscript database

    Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the accuracy of geophysical retrievals against true fields. These metrics are generally related; nevertheless each has advantages and disadvantages. In this study we explore the relat...

  15. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  16. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. Our aims were to illustrate an alternative continuous-valued metric for profiling facilities, the probability that a facility is in a top quantile, and to show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
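
    The probability metric can be computed directly from MCMC replications: count the fraction of posterior draws in which a facility's composite score lands in the top quantile. A minimal sketch with synthetic draws (the facility names and scores are invented; the paper derives draws from a hierarchical model fit in WinBUGS):

```python
def prob_top_quantile(draws, facility, q=0.2):
    """Fraction of posterior replications in which `facility` ranks
    in the top q fraction of all facilities."""
    n_top = max(1, int(q * len(draws[0])))
    hits = 0
    for rep in draws:  # each replication maps facility -> composite score
        top = sorted(rep, key=rep.get, reverse=True)[:n_top]
        hits += facility in top
    return hits / len(draws)

# Synthetic example: facility "A" always scores highest of five facilities.
draws = [{"A": 0.9, "B": 0.5, "C": 0.4, "D": 0.3, "E": 0.2}] * 100
print(prob_top_quantile(draws, "A"))  # 1.0
```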

  17. The Use of Performance Metrics for the Assessment of Safeguards Effectiveness at the State Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachner, K. M., Brookhaven National Laboratory, Upton, NY; Anzelon, George, Lawrence Livermore National Laboratory, Livermore, CA; Feldman, Yana, Lawrence Livermore National Laboratory, Livermore, CA; Goodman, Mark, Department of State, Washington, DC; Lockwood, Dunbar, National Nuclear Security Administration, Washington, DC; Sanborn, Jonathan B., JBS Consulting, LLC, Arlington, VA

    In the ongoing evolution of International Atomic Energy Agency (IAEA) safeguards at the state level, many safeguards implementation principles have been emphasized: effectiveness, efficiency, non-discrimination, transparency, focus on sensitive materials, centrality of material accountancy for detecting diversion, independence, objectivity, and grounding in technical considerations, among others. These principles are subject to differing interpretations and prioritizations, and sometimes conflict. This paper is an attempt to develop metrics and address some of the potential tradeoffs inherent in choices about how various safeguards policy principles are implemented. The paper (1) carefully defines effective safeguards, including in the context of safeguards approaches that take account of the range of state-specific factors described by the IAEA Secretariat and taken note of by the Board in September 2014, and (2) makes use of performance metrics to help document, and to make transparent, how safeguards implementation would meet such effectiveness requirements.

  18. Performance metrics used by freight transport providers.

    DOT National Transportation Integrated Search

    2008-09-30

    The newly-established National Cooperative Freight Research Program (NCFRP) has allocated $300,000 in funding to a project entitled Performance Metrics for Freight Transportation (NCFRP 03). The project is scheduled for completion in September ...

  19. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
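
    As one illustration of a metric that couples reliability and performance, the fraction of wall-clock time an application spends on useful computation (with fault-handling overhead and recovery charged against it) can be computed as below. This is a hypothetical sketch in the spirit of the paper, not one of its specific metrics; all names and numbers are invented:

```python
def resilience_efficiency(solve_time, overhead_time, recovery_time):
    """Fraction of total wall time spent on useful computation, with
    fault-handling overhead and recovery counted against the application."""
    total = solve_time + overhead_time + recovery_time
    return solve_time / total

# 900 s of useful work, 50 s of checkpointing overhead, 50 s of restart/recovery.
print(resilience_efficiency(900.0, 50.0, 50.0))  # 0.9
```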

  20. Performance metrics for inertial confinement fusion implosions: Aspects of the technical framework for measuring progress in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Brian K.; Glenzer, S.; Edwards, M. J.

    The National Ignition Campaign (NIC) uses non-igniting 'tritium hydrogen deuterium (THD)' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off-nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  1. Climate Classification is an Important Factor in Assessing Hospital Performance Metrics

    NASA Astrophysics Data System (ADS)

    Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.

    2017-12-01

    Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their 'performance' on these metrics. Various socioeconomic factors are taken into consideration when determining an individual hospital's performance; however, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates exists even after socioeconomic confounder adjustment. Use of a standardized high-level climate classification system (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
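
    The adjustment described can be sketched as an ordinary least-squares regression of mortality on a climate indicator plus socioeconomic covariates. The sketch below uses synthetic data and a single binary climate flag (the real analysis uses 15 Köppen-Geiger classes and many covariates, so this only illustrates the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ses = rng.normal(size=n)               # synthetic socioeconomic covariate
climate = rng.integers(0, 2, size=n)   # simplified two-class climate flag
# Synthetic 30-day mortality with a built-in climate effect of 0.8
mortality = 12.0 + 0.8 * climate - 0.5 * ses + rng.normal(scale=0.3, size=n)

# OLS with intercept, climate, and SES columns; beta[1] is the climate effect
X = np.column_stack([np.ones(n), climate, ses])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
print(round(beta[1], 2))  # recovers roughly the built-in 0.8 climate effect
```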

  2. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, and 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
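
    The first metric, area-weighted average gap time, is simply a weighted mean: each surface element's coverage-gap time weighted by the area that element represents. A minimal sketch (the numbers are invented):

```python
def area_weighted_gap(gap_times, areas):
    """Average gap time with each surface element weighted by its area."""
    total_area = sum(areas)
    return sum(g * a for g, a in zip(gap_times, areas)) / total_area

# Two surface elements: a large region with short gaps, a small one with long gaps.
print(area_weighted_gap([2.0, 4.0], [3.0, 1.0]))  # 2.5
```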

  3. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols from a fixed alphabet, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters, and streamflow time series were less random and more complex than those of precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of the models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for evaluating hydrologic model performance.
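
    The symbolization step described above can be sketched as follows: each streamflow value is mapped to a quantile-based symbol, and the randomness of the resulting string is measured by its Shannon entropy. The paper's mean information gain generalizes this to blocks of symbols; this single-symbol version is a simplified illustration with invented data:

```python
import math
from collections import Counter

def symbolize(series, n_symbols=4):
    """Assign each value a symbol 0..n_symbols-1 by its rank quantile."""
    order = sorted(range(len(series)), key=lambda i: series[i])
    symbols = [0] * len(series)
    for rank, i in enumerate(order):
        symbols[i] = min(n_symbols - 1, rank * n_symbols // len(series))
    return symbols

def shannon_entropy(symbols):
    """Entropy (bits) of the symbol distribution; higher means more random."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

flow = [5.0, 1.0, 9.0, 3.0, 7.0, 2.0, 8.0, 4.0]  # hypothetical streamflow values
print(shannon_entropy(symbolize(flow)))  # 2.0 for a uniform 4-symbol split
```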

  4. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  5. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  6. Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?

    PubMed

    Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J

    2014-02-01

    Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups on 21 metrics (14 procedural; 7 intraoperative errors) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was considered significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual
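Per-metric comparisons of this kind reduce to a two-sample rank test. A pure-Python sketch of the Mann-Whitney U statistic with a normal approximation follows; the path-length values are invented for illustration and are not from the study.

```python
import math

def mann_whitney_u(x, y):
    """U = number of (x_i, y_j) pairs with x_i > y_j; ties count 0.5."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def u_z_score(u, m, n):
    """Normal approximation to the null U distribution (no tie correction)."""
    mean = m * n / 2.0
    sd = math.sqrt(m * n * (m + n + 1) / 12.0)
    return (u - mean) / sd

# Invented instrument path lengths (cm): novices vs. experts
novices = [612, 580, 655, 640, 598, 630, 571, 662]
experts = [455, 470, 441, 490, 462, 478, 449, 485]

u = mann_whitney_u(novices, experts)          # 64.0: every novice exceeds every expert
z = u_z_score(u, len(novices), len(experts))  # |z| > 1.96 implies p < 0.05 two-sided
```

With complete separation between groups, U equals the product of the sample sizes and the metric is maximally discriminatory; in practice one would use an exact test (e.g. `scipy.stats.mannwhitneyu`) for samples this small.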

  7. Stability and Performance Metrics for Adaptive Flight Control

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens

    2009-01-01

    This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are nonlinear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.

  8. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty

    PubMed Central

    Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods
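The covariate-adjusted comparison described above can be sketched with synthetic data and a single covariate (academic age). Ordinary least squares stands in for the study's regression models, and the variable names and the 0.05 slope are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)
age = rng.uniform(1, 40, 200)                      # years since Ph.D. (synthetic)
log_cites = 2.0 + 0.05 * age + rng.normal(0, 0.5, 200)

# Fit the performance metric on the covariate
X = np.column_stack([np.ones_like(age), age])
beta, *_ = np.linalg.lstsq(X, log_cites, rcond=None)
resid = log_cites - X @ beta

# Standardized residuals compare faculty after accounting for the covariate:
# a positive residual means performance above what the covariates predict.
std_resid = resid / resid.std(ddof=2)
```

Residuals aggregated by institution would then support the cross-university comparison the study performs.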

  9. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.

    PubMed

    Swihart, Robert K; Sundaram, Mekala; Höök, Tomas O; DeWoody, J Andrew; Kellner, Kenneth F

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the "law of constant ratios", used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods

  10. Revisiting the utility of technical performance scores following tetralogy of Fallot repair.

    PubMed

    Lodin, Daud; Mavrothalassitis, Orestes; Haberer, Kim; Sunderji, Sherzana; Quek, Ruben G W; Peyvandi, Shabnam; Moon-Grady, Anita; Karamlou, Tara

    2017-08-01

    Although an important quality metric, current technical performance scores may not be generalizable and may omit operative factors that influence outcomes. We examined factors not included in current technical performance scores that may contribute to increased postoperative length of stay, major complications, and cost after primary repair of tetralogy of Fallot. This is a retrospective single site study of patients younger than age 2 years with tetralogy of Fallot undergoing complete repair between 2007 and 2015. Medical record data and discharge echocardiograms were reviewed to ascertain component and composite technical performance scores. Primary outcomes included postoperative length of stay, major complications, and total hospital costs. Multivariable logistic and linear regression identified determinants of each outcome. Patient population (n = 115) had a median postoperative length of stay of 8 days (interquartile range, 6-10 days), and a median total cost of $71,147. Major complications occurred in 33 patients (29%) with 1 death. Technical performance scores assigned were optimum in 28 patients (25%), adequate in 59 patients (52%), and inadequate in 26 patients (23%). Neither technical performance score components nor composite scores were associated with increased postoperative length of stay. Optimum or adequate repairs versus inadequate had equal risk of a complication (P = .79), and equivalent mean total cost ($100,000 vs $187,000; P = .25). Longer cardiopulmonary bypass time per 1-minute increase (P < .01) was associated with longer postoperative length of stay and reintervention (P = .02). The need to return to bypass also increased total cost (P < .01). Current tetralogy of Fallot technical performance scores were not associated with selected outcomes in our postoperative population. Although returning to bypass and bypass length are not included as components in the current score, these are important factors influencing

  11. GPS Device Testing Based on User Performance Metrics

    DOT National Transportation Integrated Search

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  12. Metrics help rural hospitals achieve world-class performance.

    PubMed

    Goodspeed, Scott W

    2006-01-01

    This article describes the emerging trend of using metrics in rural hospitals to achieve world-class performance. This trend is a response to the fact that rural hospitals have small patient volumes yet must maintain a profit margin in order to fulfill their mission to the community. The conceptual idea for this article is based largely on Robert Kaplan and David Norton's Balanced Scorecard articles in the Harvard Business Review. The ideas also come from the experiences of the 60-plus rural hospitals that are using the Balanced Scorecard and their implementation of metrics to influence performance and behavior. It is indeed possible for rural hospitals to meet and exceed the unique needs of patients and physicians (customers), to achieve healthy profit margins, and to be the rural hospital of choice that employees are proud to work for.

  13. Grading the Metrics: Performance-Based Funding in the Florida State University System

    ERIC Educational Resources Information Center

    Cornelius, Luke M.; Cavanaugh, Terence W.

    2016-01-01

    A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…

  14. Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2011-10-01

    Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
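The sensitivity of discounted metrics to financing terms, versus the insensitivity of simple payback, can be shown in a small sketch; the system cost, savings, and rates below are invented for illustration.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simple_payback(capex, annual_saving):
    """Years to recover capital, ignoring discounting and financing."""
    return capex / annual_saving

# Invented PV project: $10,000 system saving $800/year for 25 years
flows = [-10_000] + [800] * 25

payback = simple_payback(10_000, 800)  # 12.5 years, regardless of financing
npv_low = npv(0.03, flows)             # positive at a 3% discount rate
npv_high = npv(0.08, flows)            # negative at an 8% discount rate
```

The same project looks attractive or unattractive by NPV depending on the discount rate, while simple payback never changes, which is the perception gap the report describes.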

  15. Performance evaluation of objective quality metrics for HDR image compression

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
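The "compute a fidelity metric on perceptually encoded luminance" approach can be sketched as follows. The log10 encoding here is a crude stand-in for the PU/PQ-style curves used in practice, and all data are synthetic.

```python
import numpy as np

def psnr(ref, test, peak):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def perceptual_encode(luminance):
    """Crude log10 stand-in for a perceptual (PU/PQ-style) encoding."""
    return np.log10(np.clip(luminance, 1e-4, None))

rng = np.random.default_rng(0)
ref = rng.uniform(0.01, 4000.0, size=(64, 64))      # synthetic HDR luminances (cd/m^2)
test = ref * rng.normal(1.0, 0.02, size=ref.shape)  # distorted copy, ~2% relative noise

p_lin = psnr(ref, test, peak=4000.0)                # PSNR on linear luminance
enc_ref, enc_test = perceptual_encode(ref), perceptual_encode(test)
p_enc = psnr(enc_ref, enc_test, peak=enc_ref.max() - enc_ref.min())
```

The point of the encoding is that equal numeric errors correspond more closely to equal visual errors, so PSNR or SSIM computed in the encoded domain better tracks perceived fidelity than in linear luminance.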

  16. A Case Study Based Analysis of Performance Metrics for Green Infrastructure

    NASA Astrophysics Data System (ADS)

    Gordon, B. L.; Ajami, N.; Quesnel, K.

    2017-12-01

    Aging infrastructure, population growth, and urbanization are demanding new approaches to management of all components of the urban water cycle, including stormwater. Traditionally, urban stormwater infrastructure was designed to capture and convey rainfall-induced runoff out of a city through a network of curbs, gutters, drains, and pipes, also known as grey infrastructure. These systems were planned with a single purpose and designed under the assumption of hydrologic stationarity, a notion that no longer holds true in the face of a changing climate. One solution gaining momentum around the world is green infrastructure (GI). Beyond stormwater quality improvement and quantity reduction (or technical benefits), GI solutions offer many environmental, economic, and social benefits. Yet many practical barriers have prevented the widespread adoption of these systems worldwide. At the center of these challenges is the inability of stakeholders to know how to monitor, measure, and assess the multi-sector performance of GI systems. Traditional grey infrastructure projects require different monitoring strategies than natural systems; there are no overarching policies on how best to design GI monitoring and evaluation systems and measure performance. Previous studies have attempted to quantify the performance of GI, mostly using one evaluation method on a specific case study. We use a case study approach to address these knowledge gaps and develop a conceptual model of how to evaluate the performance of GI through the lens of financing. First, we examined many different case studies of successfully implemented GI around the world. Then we narrowed in on 10 exemplary case studies. For each case study, we determined which performance method the project developer used, such as LCA, TBL, Low Impact Design Assessment (LIDA), and others. Then, we determined which performance metrics were used to determine success and what data were needed to calculate those metrics. Finally, we

  17. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models by introducing a novel spatial performance metric that provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics (connectivity analysis and fractions skill score) are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparison of variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing
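A SPAEF-style metric can be sketched under its commonly cited formulation: correlation, coefficient-of-variation ratio, and histogram overlap of z-scored patterns, combined as a Euclidean distance from the ideal point (1, 1, 1). Details such as the bin count are assumptions here, not the paper's exact choices.

```python
import numpy as np

def spaef(obs, sim, bins=20):
    """SPAtial EFficiency: 1 - sqrt((a-1)^2 + (b-1)^2 + (g-1)^2), where
    a = Pearson correlation, b = ratio of coefficients of variation, and
    g = histogram intersection of the z-scored patterns (assumed form)."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    a = np.corrcoef(obs, sim)[0, 1]
    b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # z-scoring makes the histogram comparison insensitive to bias and scale
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo = min(z_obs.min(), z_sim.min())
    hi = max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    g = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    return 1.0 - np.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (g - 1) ** 2)

obs = np.arange(1.0, 101.0).reshape(10, 10)  # synthetic spatial pattern
# spaef(obs, obs) == 1.0 for an identical pattern
```

A perfect pattern match scores 1; each component penalizes a different kind of mismatch, which is why no single component suffices during calibration.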

  18. Metrics for measuring performance of market transformation initiatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, F.; Schlegel, J.; Grabner, K.

    1998-07-01

    Regulators have traditionally rewarded utility efficiency programs based on energy and demand savings. Now, many regulators are encouraging utilities and other program administrators to save energy by transforming markets. Prior to achieving sustainable market transformation, the program administrators often must take actions to understand the markets, establish baselines for success, reduce market barriers, build alliances, and build market momentum. Because these activities often precede savings, year-by-year measurement of savings can be an inappropriate measure of near-term success. Because ultimate success in transforming markets is defined in terms of sustainable changes in market structure and practice, traditional measures of success can also be misleading as initiatives reach maturity. This paper reviews early efforts in Massachusetts to develop metrics, or yardsticks, to gauge regulatory rewards for utility market transformation initiatives. From experience in multiparty negotiations, the authors review options for metrics based alternatively on market effects, outcomes, and good faith implementation. Additionally, alternative approaches are explored, based on end-results, interim results, and initial results. The political and practical constraints are described which have thus far led to a preference for one-year metrics, based primarily on good faith implementation. Strategies are offered for developing useful metrics which might be acceptable to regulators, advocates, and program administrators. Finally, they emphasize that the use of market transformation performance metrics is in its infancy. Both regulators and program administrators are encouraged to advance into this area with an experimental mind-set; don't put all the money on one horse until there's more of a track record.

  19. Resilience-based performance metrics for water resources management under uncertainty

    NASA Astrophysics Data System (ADS)

    Roach, Tom; Kapelan, Zoran; Ledbetter, Ralph

    2018-06-01

    This paper aims to develop new resilience-type metrics for long-term water resources management under uncertain climate change and population growth. Resilience is defined here as the ability of a water resources management system to 'bounce back', i.e. absorb and then recover from a water deficit event, restoring normal system operation. Ten alternative metrics are proposed and analysed, addressing a range of resilience aspects including the duration, magnitude, frequency and volume of water deficit events. The metrics were analysed on a real-world case study of the Bristol Water supply system in the UK and compared with current practice. The analyses included an examination of the metrics' sensitivity and correlation, as well as a detailed examination of the behaviour of the metrics during water deficit periods. The results obtained suggest that multiple metrics covering different aspects of resilience should be used simultaneously when assessing the resilience of a water resources management system, leading to a more complete understanding of resilience than current practice approaches provide. It was also observed that calculating the total duration of a water deficit period provided a clearer and more consistent indication of system performance than splitting the deficit periods into the time to reach and the time to recover from the worst deficit event.
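The duration- and volume-based metrics can be sketched by scanning supply and demand series for contiguous deficit periods. This is a minimal illustration of the idea, not the paper's exact metric definitions.

```python
def deficit_events(supply, demand):
    """Return (start, duration, volume) for each contiguous deficit period,
    i.e. each maximal run of time steps where supply falls below demand."""
    events, start, vol = [], None, 0.0
    for t, (s, d) in enumerate(zip(supply, demand)):
        if s < d:
            if start is None:          # a new deficit event begins
                start, vol = t, 0.0
            vol += d - s               # accumulate deficit volume
        elif start is not None:        # the event just ended
            events.append((start, t - start, vol))
            start = None
    if start is not None:              # event still open at the end of record
        events.append((start, len(supply) - start, vol))
    return events

# One deficit event: starts at t=1, lasts 2 steps, total volume 3.0
events = deficit_events([10, 8, 7, 10, 9, 10], demand=[9] * 6)
```

Duration, frequency (event count), magnitude (peak shortfall), and volume metrics all derive from this event list, which is why the paper can compare ten variants on the same record.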

  20. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    PubMed

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures

  1. On Railroad Tank Car Puncture Performance: Part I - Considering Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the first in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perform...

  2. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  3. Metrics for Energy Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul E. Roege; Zachary A. Collier; James Mancillas

    2014-09-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth.

  4. Relationship between intraoperative non-technical performance and technical events in bariatric surgery.

    PubMed

    Fecso, A B; Kuzulugil, S S; Babaoglu, C; Bener, A B; Grantcharov, T P

    2018-03-30

    The operating theatre is a unique environment with complex team interactions, where technical and non-technical performance affect patient outcomes. The correlation between technical and non-technical performance, however, remains underinvestigated. The purpose of this study was to explore these interactions in the operating theatre. A prospective single-centre observational study was conducted at a tertiary academic medical centre. One surgeon and three fellows participated as main operators. All patients who underwent a laparoscopic Roux-en-Y gastric bypass and had the procedures captured using the Operating Room Black Box® platform were included. Technical assessment was performed using the Objective Structured Assessment of Technical Skills and Generic Error Rating Tool instruments. For non-technical assessment, the Non-Technical Skills for Surgeons (NOTSS) and Scrub Practitioners' List of Intraoperative Non-Technical Skills (SPLINTS) tools were used. Spearman rank-order correlation and N-gram statistics were conducted. Fifty-six patients were included in the study and 90 procedural steps (gastrojejunostomy and jejunojejunostomy) were analysed. There was a moderate to strong correlation between technical adverse events (rs = 0.417-0.687), rectifications (rs = 0.380-0.768) and non-technical performance of the surgical and nursing teams (NOTSS and SPLINTS). N-gram statistics showed that after technical errors, events and prior rectifications, the staff surgeon and the scrub nurse exhibited the most positive non-technical behaviours, irrespective of operator (staff surgeon or fellow). This study demonstrated that technical and non-technical performances are related, on both an individual and a team level. Valuable data can be obtained around intraoperative errors, events and rectifications. © 2018 BJS Society Ltd Published by John Wiley & Sons Ltd.

  5. Greenroads : a sustainability performance metric for roadway design and construction.

    DOT National Transportation Integrated Search

    2009-11-01

    Greenroads is a performance metric for quantifying sustainable practices associated with roadway design and construction. Sustainability is defined as having seven key components: ecology, equity, economy, extent, expectations, experience and exposur...

  6. Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R; Weber, Jochem W; Jenne, Dale S

    The U.S. Department of Energy's Wave Energy Prize competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as a proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE', short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.

  7. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    NASA Technical Reports Server (NTRS)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.

  8. Impact of Immediate Interpretation of Screening Tomosynthesis Mammography on Performance Metrics.

    PubMed

    Winkler, Nicole S; Freer, Phoebe; Anzai, Yoshimi; Hu, Nan; Stein, Matthew

    2018-05-07

    This study aimed to compare performance metrics for immediate and delayed batch interpretation of screening tomosynthesis mammograms. This HIPAA-compliant study was approved by the institutional review board with a waiver of consent. A retrospective analysis of screening performance metrics for tomosynthesis mammograms interpreted in 2015, when mammograms were read immediately, was compared to historical controls from 2013 to 2014, when mammograms were batch interpreted after the patient had departed. A total of 5518 screening tomosynthesis mammograms (n = 1212 for batch interpretation and n = 4306 for immediate interpretation) were evaluated. The larger sample size for the latter group reflects a group practice shift to performing tomosynthesis for the majority of patients. Age, breast density, comparison examinations, and high-risk status were compared. An asymptotic proportion test and multivariable analysis were used to compare performance metrics. There was no statistically significant difference in recall or cancer detection rates for the batch interpretation group compared to the immediate interpretation group, with respective recall rates of 6.5% vs 5.3% = +1.2% (95% confidence interval -0.3 to 2.7%; P = .101) and cancer detection rates of 6.6 vs 7.2 per thousand = -0.6 (95% confidence interval -5.9 to 4.6; P = .825). There was no statistically significant difference in positive predictive values (PPVs), including PPV1 (screening recall), PPV2 (biopsy recommendation), or PPV3 (biopsy performed), with batch interpretation (10.1%, 42.1%, and 40.0%, respectively) and immediate interpretation (13.6%, 39.2%, and 39.7%, respectively). After adjusting for age, breast density, high-risk status, and comparison mammogram, there was no difference in the odds of recall or cancer detection between the two groups. There is no statistically significant difference in interpretation performance metrics for screening tomosynthesis mammograms interpreted immediately compared to those interpreted in delayed batches.
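
    The screening metrics compared here are simple proportions; a sketch with hypothetical counts (the three PPVs differ only in their denominators):

```python
def recall_rate(recalled, screened):
    # fraction of screening exams recalled for additional imaging
    return recalled / screened

def cancer_detection_rate(cancers, screened):
    # cancers detected per 1000 screening exams
    return 1000 * cancers / screened

def ppv(true_positives, positives_called):
    # PPV1/PPV2/PPV3 use recalls, biopsies recommended,
    # or biopsies performed as the denominator
    return true_positives / positives_called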

  9. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
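
    DCeP divides useful work by the energy consumed producing it; "useful work" is typically a weighted tally of completed tasks. A minimal sketch, with hypothetical task counts and utility weights:

```python
def dcep(tasks, energy_kwh):
    # tasks: list of (completed_count, utility_weight) pairs;
    # DCeP = weighted useful work / energy consumed producing it
    useful_work = sum(count * weight for count, weight in tasks)
    return useful_work / energy_kwh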

  10. Performance evaluation of no-reference image quality metrics for face biometric images

    NASA Astrophysics Data System (ADS)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account similar image-based quality attributes as introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.

  11. Performance metrics for the assessment of satellite data products: an ocean color case study

    EPA Science Inventory

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...

  12. On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor...

  13. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.

  14. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process, and in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied for in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.
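
    The "learned metric" in approaches like large margin nearest neighbors is typically a Mahalanobis distance parameterized by a learned positive semidefinite matrix M. A sketch of the distance computation itself (learning M is omitted; with M as the identity this reduces to Euclidean distance):

```python
def mahalanobis(x, y, M):
    # d_M(x, y) = sqrt((x - y)^T M (x - y)); M is the learned PSD matrix
    d = [a - b for a, b in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * mi for di, mi in zip(d, Md)) ** 0.5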

  15. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process, and in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied for in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.

  16. Metrics for the technical performance evaluation of light water reactor accident-tolerant fuel

    DOE PAGES

    Bragg-Sitton, Shannon M.; Todosow, Michael; Montgomery, Robert; ...

    2017-03-26

    The safe, reliable, and economic operation of the nation's nuclear power reactor fleet has always been a top priority for the nuclear industry. Continual improvement of technology, including advanced materials and nuclear fuels, remains central to the industry's success. Enhancing the accident tolerance of light water reactors (LWRs) became a topic of serious discussion following the 2011 Great East Japan Earthquake, resulting tsunami, and subsequent damage to the Fukushima Daiichi nuclear power plant complex. The overall goal for the development of accident-tolerant fuel (ATF) for LWRs is to identify alternative fuel system technologies to further enhance the safety, competitiveness, and economics of commercial nuclear power. Designed for use in the current fleet of commercial LWRs or in reactor concepts with design certifications (GEN-III+), fuels with enhanced accident tolerance would endure loss of active cooling in the reactor core for a considerably longer period of time than the current fuel system while maintaining or improving performance during normal operations. The complex multiphysics behavior of LWR nuclear fuel in the integrated reactor system makes defining specific material or design improvements difficult; as such, establishing desirable performance attributes is critical in guiding the design and development of fuels and cladding with enhanced accident tolerance. Research and development of ATF in the United States is conducted under the U.S. Department of Energy (DOE) Fuel Cycle Research and Development Advanced Fuels Campaign. The DOE is sponsoring multiple teams to develop ATF concepts within multiple national laboratories, universities, and the nuclear industry. Concepts under investigation offer both evolutionary and revolutionary changes to the current nuclear fuel system. This study summarizes the technical evaluation methodology proposed in the United States to aid in the optimization and prioritization of candidate ATF concepts.

  17. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage, and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
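
    For chlorophyll-a, deviation-based metrics like those recommended here are often computed in log10 space, yielding multiplicative bias and multiplicative mean absolute error. A sketch under that assumption (obs = in situ values, est = satellite estimates, both hypothetical):

```python
import math

def log_bias(obs, est):
    # multiplicative bias: 10 ** mean(log10(est / obs));
    # 1.0 means no systematic over- or underestimation
    n = len(obs)
    return 10 ** (sum(math.log10(e / o) for o, e in zip(obs, est)) / n)

def log_mae(obs, est):
    # multiplicative mean absolute error: 10 ** mean(|log10(est / obs)|);
    # e.g. 1.5 means estimates differ from observations by ~50% on average
    n = len(obs)
    return 10 ** (sum(abs(math.log10(e / o)) for o, e in zip(obs, est)) / n)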

  18. The SI Metric System and Practical Applications.

    ERIC Educational Resources Information Center

    Carney, Richard W.

    Intended for use in the technical program of a technical institute or community college, this student manual is designed to provide background in the metric system contributing to employability. Nine units are presented with objectives stated for each unit followed by questions or exercises. (Printed answers are supplied when necessary.) Unit 1…

  19. Effect of quality metric monitoring and colonoscopy performance.

    PubMed

    Razzak, Anthony; Smith, Dineen; Zahid, Maliha; Papachristou, Georgios; Khalid, Asif

    2016-10-01

    Background and aims: Adenoma detection rate (ADR) and cecal withdrawal time (CWT) have been identified as measures of colonoscopy quality. This study evaluates the impact of monitoring these measures on provider performance. Methods: Six blinded gastroenterologists practicing at a Veterans Affairs Medical Center were prospectively monitored over 9 months. Data for screening, adenoma surveillance, and fecal occult blood test positive (FOBT+) indicated colonoscopies were obtained, including exam preparation quality, cecal intubation rate, CWT, ADR, adenomas per colonoscopy (APC), and adverse events. Metrics were continuously monitored after a period of informed CWT monitoring and informed CWT + ADR monitoring. The primary outcome was the impact on ADR and APC. Results: A total of 1671 colonoscopies were performed during the study period, with 540 before informed monitoring, 528 during informed CWT monitoring, and 603 during informed CWT + ADR monitoring. No statistically significant impact on ADR was noted across each study phase. Multivariate regression revealed a trend towards fewer adenomas removed during the CWT monitoring phase (OR = 0.79; 95% CI 0.62-1.02, P = 0.065) and a trend towards more adenomas removed during the CWT + ADR monitoring phase when compared to baseline (OR = 1.26; 95% CI 0.99-1.61, P = 0.062). Indication for examination and provider were significant predictors of higher APC. Provider-specific data demonstrated a direct relationship between high ADR performers and increased CWT. Conclusions: Monitoring quality metrics did not significantly alter colonoscopy performance across a small heterogeneous group of providers. Non-significant trends towards higher APC were noted with CWT + ADR monitoring. Providers with a longer CWT had a higher ADR. Further studies are needed to determine the impact of monitoring on colonoscopy performance.

  20. Proposed Performance-Based Metrics for the Future Funding of Graduate Medical Education: Starting the Conversation.

    PubMed

    Caverzagie, Kelly J; Lane, Susan W; Sharma, Niraj; Donnelly, John; Jaeger, Jeffrey R; Laird-Fick, Heather; Moriarty, John P; Moyer, Darilyn V; Wallach, Sara L; Wardrop, Richard M; Steinmann, Alwin F

    2017-12-12

    Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.

  1. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial pattern of the ground truth and simulated layers, there was considerable inconsistency between simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic, and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
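
    Several of the configuration metrics named above start from patch identification on a categorical raster; the number-of-patches metric, for instance, is a connected-component count. A simplified sketch on a binary grid (4-connectivity assumed; real tools also support 8-connectivity):

```python
def count_patches(grid):
    # number of 4-connected patches of 1s in a binary raster,
    # via iterative flood fill
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                patches += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] == 1 and not seen[ni][nj]):
                            seen[ni][nj] = True
                            stack.append((ni, nj))
    return patches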

  2. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.

  3. Specification and implementation of IFC based performance metrics to support building life cycle assessment of hybrid energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrissey, Elmer; O'Donnell, James; Keane, Marcus

    2004-03-29

    Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.

  4. Relationship between non-technical skills and technical performance during cardiopulmonary resuscitation: does stress have an influence?

    PubMed Central

    Krage, Ralf; Zwaan, Laura; Tjon Soei Len, Lian; Kolenbrander, Mark W; van Groeningen, Dick; Loer, Stephan A; Wagner, Cordula; Schober, Patrick

    2017-01-01

    Background Non-technical skills, such as task management, leadership, situational awareness, communication and decision-making refer to cognitive, behavioural and social skills that contribute to safe and efficient team performance. The importance of these skills during cardiopulmonary resuscitation (CPR) is increasingly emphasised. Nonetheless, the relationship between non-technical skills and technical performance is poorly understood. We hypothesise that non-technical skills become increasingly important under stressful conditions when individuals are distracted from their tasks, and investigated the relationship between non-technical and technical skills under control conditions and when external stressors are present. Methods In this simulator-based randomised cross-over study, 30 anaesthesiologists and anaesthesia residents from the VU University Medical Center, Amsterdam, the Netherlands, participated in two different CPR scenarios in random order. In one scenario, external stressors (radio noise and a distractive scripted family member) were added, while the other scenario without stressors served as control condition. Non-technical performance of the team leader and technical performance of the team were measured using the ‘Anaesthetists’ Non-technical Skill’ score and a recently developed technical skills score. Analysis of variance and Pearson correlation coefficients were used for statistical analyses. Results Non-technical performance declined when external stressors were present (adjusted mean difference 3.9 points, 95% CI 2.4 to 5.5 points). A significant correlation between non-technical and technical performance scores was observed when external stressors were present (r=0.67, 95% CI 0.40 to 0.83, p<0.001), while no evidence for such a relationship was observed under control conditions (r=0.15, 95% CI −0.22 to 0.49, p=0.42). This was equally true for all individual domains of the non-technical performance score (task management, team

  5. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  6. Relationship between non-technical skills and technical performance during cardiopulmonary resuscitation: does stress have an influence?

    PubMed

    Krage, Ralf; Zwaan, Laura; Tjon Soei Len, Lian; Kolenbrander, Mark W; van Groeningen, Dick; Loer, Stephan A; Wagner, Cordula; Schober, Patrick

    2017-11-01

    Non-technical skills, such as task management, leadership, situational awareness, communication and decision-making refer to cognitive, behavioural and social skills that contribute to safe and efficient team performance. The importance of these skills during cardiopulmonary resuscitation (CPR) is increasingly emphasised. Nonetheless, the relationship between non-technical skills and technical performance is poorly understood. We hypothesise that non-technical skills become increasingly important under stressful conditions when individuals are distracted from their tasks, and investigated the relationship between non-technical and technical skills under control conditions and when external stressors are present. In this simulator-based randomised cross-over study, 30 anaesthesiologists and anaesthesia residents from the VU University Medical Center, Amsterdam, the Netherlands, participated in two different CPR scenarios in random order. In one scenario, external stressors (radio noise and a distractive scripted family member) were added, while the other scenario without stressors served as control condition. Non-technical performance of the team leader and technical performance of the team were measured using the 'Anaesthetists' Non-technical Skill' score and a recently developed technical skills score. Analysis of variance and Pearson correlation coefficients were used for statistical analyses. Non-technical performance declined when external stressors were present (adjusted mean difference 3.9 points, 95% CI 2.4 to 5.5 points). A significant correlation between non-technical and technical performance scores was observed when external stressors were present (r=0.67, 95% CI 0.40 to 0.83, p<0.001), while no evidence for such a relationship was observed under control conditions (r=0.15, 95% CI -0.22 to 0.49, p=0.42). This was equally true for all individual domains of the non-technical performance score (task management, team working, situation awareness, decision

  7. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    PubMed

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first generation entropy metrics, featured by the well known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing best, and that white noise and muscular artifacts are the most confounding factors. On the contrary, there is wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
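
    Sample Entropy, one of the metrics evaluated here, counts pairs of matching templates of length m and m+1 (within tolerance r, commonly given as a fraction of the series standard deviation) and returns -ln(A/B). A compact sketch (default m and r are common conventions, not the paper's settings; exact counting conventions vary slightly between implementations):

```python
import math

def _matches(x, m, tol):
    # pairs of length-m templates whose max elementwise distance is <= tol
    n = len(x)
    count = 0
    for i in range(n - m + 1):
        for j in range(i + 1, n - m + 1):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= tol:
                count += 1
    return count

def sample_entropy(x, m=2, r=0.2):
    # r is interpreted as a fraction of the (population) standard deviation
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd
    b = _matches(x, m, tol)        # matches at length m
    a = _matches(x, m + 1, tol)    # matches at length m + 1
    return -math.log(a / b) if a and b else float("inf")
```

    Regular series yield low values (many length-m matches persist at length m+1), while irregular series yield high values.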

  8. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    ERIC Educational Resources Information Center

    Ahmed, Irfan; Bhatti, Arif

    2016-01-01

    Evocative evaluation of assessment data is essential to quantify the achievements at course and program levels. The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at the course levels for program accreditation. Even though…

  9. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to the definition of performance metrics for an important class of recent Homeland Security and defense systems called binary sensors, covering both (internal) system performance and (external) CONOPS. A medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metric parameter of binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach within a broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
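
    The PPV of a binary sensor follows from Bayes' rule applied to its sensitivity, specificity, and the prior probability (prevalence) of the condition being detected; a sketch:

```python
def ppv(sensitivity, specificity, prevalence):
    # P(condition present | sensor alarms), via Bayes' rule
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)
```

    This makes the base-rate effect explicit: even a sensor with 99% sensitivity and specificity has a PPV below 10% when the prior probability of a threat is 0.1%.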

  10. The Future Cybersecurity Workforce: Going Beyond Technical Skills for Successful Cyber Performance

    PubMed Central

    Dawson, Jessica; Thomson, Robert

    2018-01-01

    One of the challenges in writing an article reviewing the current state of cyber education and workforce development is that there is a paucity of quantitative assessment regarding the cognitive aptitudes, work roles, or team organization required by cybersecurity professionals to be successful. In this review, we argue that the people who operate within the cyber domain need a combination of technical skills, domain specific knowledge, and social intelligence to be successful. They, like the networks they operate, must also be reliable, trustworthy, and resilient. Defining the knowledge, skills, attributes, and other characteristics is not as simple as defining a group of technical skills that people can be trained on; the complexity of the cyber domain makes this a unique challenge. There has been little research devoted to exactly what attributes individuals in the cyber domain need. What research does exist places an emphasis on technical and engineering skills while discounting the important social and organizational influences that dictate success or failure in everyday settings. This paper reviews the literature on cyber expertise and cyber workforce development to identify gaps and then argues for the important contribution of social fit in the highly complex and heterogeneous cyber workforce. We then identify six assumptions for the future of cybersecurity workforce development, including the requirement for systemic thinkers, team players, a love for continued learning, strong communication ability, a sense of civic duty, and a blend of technical and social skill. Finally, we make recommendations for social and cognitive metrics which may be indicative of future performance in cyber work roles to provide a roadmap for future scholars. PMID:29946276

  12. Simulation for the training of human performance and technical skills: the intersection of how we will train health care professionals in the future.

    PubMed

    Hamman, William R; Beaubien, Jeffrey M; Beaudin-Seiler, Beth M

    2009-12-01

    The aims of this research are to begin to understand health care teams in their operational environment, establish metrics of performance for these teams, and validate a series of scenarios in simulation that elicit team and technical skills. The focus is on defining the team model that will function in the operational environment in which health care professionals work. Simulations were performed across the United States in 70- to 1000-bed hospitals. Multidisciplinary health care teams analyzed more than 300 hours of videos of health care professionals performing simulations of team-based medical care in several different disciplines. Raters were trained to enhance inter-rater reliability. The study validated event sets that trigger team dynamics and established metrics for team-based care. Team skills were identified and modified using simulation scenarios that employed the event-set-design process. Specific skills (technical and team) were identified by criticality measurement and task analysis methodology. In situ simulation, which includes a purposeful and Socratic Method of debriefing, is a powerful intervention that can overcome inertia found in clinician behavior and latent environmental systems that present a challenge to quality and patient safety. In situ simulation can increase awareness of risks, personalize the risks, and encourage the reflection, effort, and attention needed to make changes to both behaviors and to systems.

  13. Early Warning Look Ahead Metrics: The Percent Milestone Backlog Metric

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Anderson, Timothy P.

    2017-01-01

    All complex development projects experience delays and corresponding backlogs of their project control milestones during their acquisition lifecycles. NASA Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) teamed with The Aerospace Corporation (Aerospace) to develop a collection of Early Warning Look Ahead metrics that would provide GSFC leadership with some independent indication of the programmatic health of GSFC flight projects. As part of the collection of Early Warning Look Ahead metrics, the Percent Milestone Backlog metric is particularly revealing, and has utility as a stand-alone execution performance monitoring tool. This paper describes the purpose, development methodology, and utility of the Percent Milestone Backlog metric. The other four Early Warning Look Ahead metrics are also briefly discussed. Finally, an example of the use of the Percent Milestone Backlog metric in providing actionable insight is described, along with examples of its potential use in other commodities.

  14. Assessment of various supervised learning algorithms using different performance metrics

    NASA Astrophysics Data System (ADS)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms considered in this work are Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbour (KNN), Naïve Bayes (NB), and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity, and prevalence.
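    The metrics listed above all derive from the binary confusion matrix. A self-contained sketch (the counts are made up for illustration; G-measure is taken here as the geometric mean of precision and recall, one common definition the abstract does not spell out):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics for a binary classifier."""
    total = tp + fp + tn + fn
    tpr = tp / (tp + fn)                 # sensitivity / recall
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    return {
        "accuracy": accuracy,
        "misclassification_rate": 1.0 - accuracy,
        "precision": precision,
        "true_positive_rate": tpr,
        "false_positive_rate": fpr,
        "specificity": tn / (tn + fp),
        "prevalence": (tp + fn) / total,
        "f_measure": 2 * precision * tpr / (precision + tpr),
        "g_measure": math.sqrt(precision * tpr),  # geometric mean of precision and recall
    }

# Hypothetical test-set counts: 50 positives, 50 negatives.
m = binary_metrics(tp=40, fp=10, tn=45, fn=5)
print(m["accuracy"], round(m["f_measure"], 3))  # 0.85 0.842
```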

  15. Metrication and AIHA.

    PubMed

    Burnett, R D

    1977-05-01

    AIHA supports a planned orderly national program for conversion to the metric system and will cooperate with other technical societies and organizations in implementing this voluntary conversion. The Association will use the International System of Units (SI) as modified by the Secretary of Commerce for use in the United States in all official publications, papers and documents. U.S. customary units can be presented in parentheses following the appropriate SI unit, when it is necessary for clarity.

  16. A novel patient-centered "intention-to-treat" metric of U.S. lung transplant center performance.

    PubMed

    Maldonado, Dawn A; RoyChoudhury, Arindam; Lederer, David J

    2018-01-01

    Despite the importance of pretransplantation outcomes, 1-year posttransplantation survival is typically considered the primary metric of lung transplant center performance in the United States. We designed a novel lung transplant center performance metric that incorporates both pre- and posttransplantation survival time. We performed an ecologic study of 12 187 lung transplant candidates listed at 56 U.S. lung transplant centers between 2006 and 2012. We calculated an "intention-to-treat" survival (ITTS) metric as the percentage of waiting list candidates surviving at least 1 year after transplantation. The median center-level 1-year posttransplantation survival rate was 84.1%, and the median center-level ITTS was 66.9% (mean absolute difference 19.6%, 95% limits of agreement 4.3 to 35.1%). All but 10 centers had ITTS values that were significantly lower than 1-year posttransplantation survival rates. Observed ITTS was significantly lower than expected ITTS for 7 centers. These data show that one third of lung transplant candidates do not survive 1 year after transplantation, and that 12% of centers have lower than expected ITTS. An "intention-to-treat" survival metric may provide a more realistic expectation of patient outcomes at transplant centers and may be of value to transplant centers and policymakers. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
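    As defined in the abstract, ITTS is simply the share of all wait-listed candidates, transplanted or not, who survive at least one year after transplantation. A sketch with a hypothetical cohort (the numbers below are illustrative, not from the study):

```python
def intention_to_treat_survival(candidates):
    """ITTS: fraction of ALL wait-listed candidates surviving >= 1 year
    after transplantation; never-transplanted candidates count as failures."""
    survivors = sum(
        1 for c in candidates
        if c["transplanted"] and c["survived_1yr_post_tx"]
    )
    return survivors / len(candidates)

# Hypothetical center: 100 listed, 80 transplanted, 67 alive 1 year post-transplant.
cohort = (
    [{"transplanted": True, "survived_1yr_post_tx": True}] * 67
    + [{"transplanted": True, "survived_1yr_post_tx": False}] * 13
    + [{"transplanted": False, "survived_1yr_post_tx": False}] * 20
)
print(intention_to_treat_survival(cohort))  # 0.67, vs 67/80 ≈ 0.84 post-transplant survival
```

    The gap between the two numbers is exactly the effect the authors highlight: posttransplantation survival alone hides waiting-list mortality.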

  17. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    ERIC Educational Resources Information Center

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  18. Spectrum splitting metrics and effect of filter characteristics on photovoltaic system performance.

    PubMed

    Russo, Juan M; Zhang, Deming; Gordon, Michael; Vorndran, Shelby; Wu, Yuechen; Kostuk, Raymond K

    2014-03-10

    During the past few years there has been a significant interest in spectrum splitting systems to increase the overall efficiency of photovoltaic solar energy systems. However, methods for comparing the performance of spectrum splitting systems and the effects of optical spectral filter design on system performance are not well developed. This paper addresses these two areas. The system conversion efficiency is examined in detail and the role of optical spectral filters with respect to the efficiency is developed. A new metric termed the Improvement over Best Bandgap is defined which expresses the efficiency gain of the spectrum splitting system with respect to a similar system that contains the highest constituent single bandgap photovoltaic cell. This parameter indicates the benefit of using the more complex spectrum splitting system with respect to a single bandgap photovoltaic system. Metrics are also provided to assess the performance of experimental spectral filters in different spectrum splitting configurations. The paper concludes by using the methodology to evaluate spectrum splitting systems with different filter configurations and indicates the overall efficiency improvement that is possible with ideal and experimental designs.
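    The Improvement over Best Bandgap defined above can be sketched as a simple comparison. Whether the paper defines it as an absolute or relative gain is not stated in the abstract; an absolute difference in efficiency percentage points is assumed here, and the numbers are hypothetical:

```python
def improvement_over_best_bandgap(system_efficiency, cell_efficiencies):
    """IoBB (assumed form): efficiency gain of the spectrum-splitting system
    over a system using only its best constituent single-bandgap cell."""
    return system_efficiency - max(cell_efficiencies)

# Hypothetical: splitting system at 34% vs constituent cells at 28%, 22%, 18%.
print(improvement_over_best_bandgap(34.0, [28.0, 22.0, 18.0]))  # 6.0
```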

  19. Approaches to Cycle Analysis and Performance Metrics

    NASA Technical Reports Server (NTRS)

    Parson, Daniel E.

    2003-01-01

    The following notes were prepared as part of an American Institute of Aeronautics and Astronautics (AIAA) sponsored short course entitled Air Breathing Pulse Detonation Engine (PDE) Technology. The course was presented in January of 2003, and again in July of 2004 at two different AIAA meetings. It was taught by seven instructors, each of whom provided information on particular areas of PDE research. These notes cover two areas. The first is titled Approaches to Cycle Analysis and Performance Metrics. Here, the various methods of cycle analysis are introduced. These range from algebraic, thermodynamic equations, to single and multi-dimensional Computational Fluid Dynamic (CFD) solutions. Also discussed are the various means by which performance is measured, and how these are applied in a device which is fundamentally unsteady. The second topic covered is titled PDE Hybrid Applications. Here the concept of coupling a PDE to a conventional turbomachinery based engine is explored. Motivation for such a configuration is provided in the form of potential thermodynamic benefits. This is accompanied by a discussion of challenges to the technology.

  20. 48 CFR 216.402-2 - Technical performance incentives.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Technical performance incentives. 216.402-2 Section 216.402-2 Federal Acquisition Regulations System DEFENSE ACQUISITION... Contracts 216.402-2 Technical performance incentives. See PGI 216.402-2 for guidance on establishing...

  1. Full immersion simulation: validation of a distributed simulation environment for technical and non-technical skills training in Urology.

    PubMed

    Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter

    2015-07-01

    To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology, and to evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologists' performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  2. Develop metrics of tire debris on Texas highways : technical report.

    DOT National Transportation Integrated Search

    2017-05-01

    This research effort estimated the amount, characteristics, costs, and safety implications of tire debris on Texas highways. The metrics developed by this research are based on several sources of data, including a statewide survey of debris removal p...

  3. Bayesian performance metrics of binary sensors in homeland security applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Forrester, Thomas C.

    2008-04-01

    Bayesian performance metrics based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value characterize the performance of binary sensors, i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.

  4. Performance metric comparison study for non-magnetic bi-stable energy harvesters

    NASA Astrophysics Data System (ADS)

    Udani, Janav P.; Wrigley, Cailin; Arrieta, Andres F.

    2017-04-01

    Energy harvesting employing non-linear systems offers considerable advantages over linear systems given the broadband resonant response which is favorable for applications involving diverse input vibrations. In this respect, the rich dynamics of bi-stable systems present a promising means for harvesting vibrational energy from ambient sources. Harvesters deriving their bi-stability from thermally induced stresses as opposed to magnetic forces are receiving significant attention as it reduces the need for ancillary components and allows for bio- compatible constructions. However, the design of these bi-stable harvesters still requires further optimization to completely exploit the dynamic behavior of these systems. This study presents a comparison of the harvesting capabilities of non-magnetic, bi-stable composite laminates under variations in the design parameters as evaluated utilizing established power metrics. Energy output characteristics of two bi-stable composite laminate plates with a piezoelectric patch bonded on the top surface are experimentally investigated for variations in the thickness ratio and inertial mass positions for multiple load conditions. A particular design configuration is found to perform better over the entire range of testing conditions which include single and multiple frequency excitation, thus indicating that design optimization over the geometry of the harvester yields robust performance. The experimental analysis further highlights the need for appropriate design guidelines for optimization and holistic performance metrics to account for the range of operational conditions.

  5. SI (Metric) handbook

    NASA Technical Reports Server (NTRS)

    Artusa, Elisa A.

    1994-01-01

    This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation both in the US and in the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information on SI usage for specialized applications such as data processing and computer programming, science, engineering, and construction. Related information in the appendixes includes legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.

  6. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.

  7. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    PubMed

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method, and its performance needs evaluation. UIMM performance was evaluated using method validation data and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM met various method acceptability test criteria, with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed Sigma metrics of 2.75, 1.80, and 3.80 at 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc) within the TEa (acceptable performance) for the medium-high level, and two laboratories showed an acceptable performance for the high level. When calculated against the Ensuring the Quality of UI Procedures (EQUIP) TEas, the performance of all laboratories was ≤2.49 Sigma at all concentrations. Only one laboratory had TEcalc within the TEa (acceptable performance) for the iodine deficiency levels, with variable performance at other concentrations according to different TEas.
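    The abstract does not restate the Sigma formula, but the standard laboratory-medicine Six Sigma metric it relies on is sigma = (TEa − |bias|) / CV, with all terms expressed in percent. A sketch with hypothetical assay values (not taken from the study):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma quality metric used in laboratory medicine:
    sigma = (TEa - |bias|) / CV, all terms in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay level: TEa 25%, bias 4%, CV 6% -> sigma = 3.5
print(sigma_metric(25.0, 4.0, 6.0))  # 3.5
```

    A sigma of 6 or more is conventionally regarded as world-class performance; values below 3, as seen for several levels in this study, indicate a method needing improvement.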

  8. Target detection cycle criteria when using the targeting task performance metric

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Jacobs, Eddie L.; Vollmerhausen, Richard H.

    2004-12-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed a new target acquisition metric to better predict the performance of modern electro-optical imagers. The TTP metric replaces the Johnson criteria. One problem with transitioning to the new model is that the difficulty of searching a terrain has traditionally been quantified by an "N50", the number of Johnson-criteria cycles needed for the observer to detect the target half the time, assuming that the observer is not time limited. In order to make use of this empirical database, a conversion must be found relating Johnson cycles for detection to TTP cycles for detection. This paper describes how that relationship is established. We have found that the relationship between Johnson and TTP cycles is 1:2.7 for the recognition and identification tasks.
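    Under the reported 1:2.7 relationship, converting an existing empirical N50 to TTP cycles is a single multiplication (the N50 value below is hypothetical):

```python
def johnson_to_ttp_cycles(n50_johnson, ratio=2.7):
    """Scale an empirical Johnson-criteria N50 to TTP cycles using the
    1:2.7 Johnson-to-TTP relationship reported in the paper."""
    return n50_johnson * ratio

# Hypothetical terrain with an N50 of 3.0 Johnson cycles:
print(round(johnson_to_ttp_cycles(3.0), 2))  # 8.1
```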

  9. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1991-01-01

    NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.

  10. A novel ECG detector performance metric and its relationship with missing and false heart rate limit alarms.

    PubMed

    Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G

    Performance of ECG beat detectors is traditionally assessed over long intervals (e.g., 30 min), but only incorrect detections within a short interval (e.g., 10 s) can cause incorrect (i.e., missed + false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (area under the empirical cumulative distribution function (AUecdf) for short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on the long interval (ρ=-0.8, p<0.05) and with AUecdf for sensitivity (ρ=0.9, p<0.05) in all assessed ECG databases. Sensitivity over 30 min grouped the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided further information to also identify the two beat detectors with the highest false alarm rates, which were inseparable using sensitivity over 30 min alone. Short-interval performance metrics can provide insights into the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
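    The abstract does not give the exact construction of AUecdf; one plausible reading, assumed here, is the area under the empirical CDF of per-window (10 s) sensitivities over [0, 1], which for scores bounded in [0, 1] reduces to 1 − mean. A sketch with made-up per-window values:

```python
def au_ecdf(values):
    """Area under the empirical CDF of per-window scores on [0, 1].
    For scores bounded in [0, 1] this equals 1 - mean(values), since
    integral_0^1 P(X <= x) dx = E[1 - X]."""
    n = len(values)
    xs = sorted(values)
    area, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        area += (x - prev) * (i / n)  # ECDF is i/n on [xs[i-1], xs[i])
        prev = x
    area += 1.0 - prev                # ECDF equals 1 beyond the largest score
    return area

# Hypothetical per-10-s-window sensitivities of a beat detector:
per_window_sens = [1.0, 1.0, 0.9, 0.8, 1.0, 0.6]
print(round(au_ecdf(per_window_sens), 4))  # 0.1167
```

    Under this reading, a larger AUecdf means more windows with degraded detection, i.e., more opportunities for incorrect heart rate limit alarms.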

  11. DEVELOPMENT OF METRICS FOR TECHNICAL PRODUCTION: QUALIS BOOKS AND BOOK CHAPTERS.

    PubMed

    Ribas-Filho, Jurandir Marcondes; Malafaia, Osvaldo; Czeczko, Nicolau Gregori; Ribas, Carmen A P Marcondes; Nassif, Paulo Afonso Nunes

    2015-01-01

    To propose metrics to qualify publication in books and book chapters and, from there, to establish guidance for the evaluation of the Medicine III programs. Analysis of the 2013 area documents of the following areas: Computer Science; Biotechnology; Biological Sciences I; Public Health; Medicine I. Except for Medicine I, which had not adopted metrics for books and chapters, all other programs established such metrics within their intellectual production, although with unequal percentages. It is desirable to include metrics for books and book chapters in the intellectual production of post-graduate programs in the Area Document, with a percentage value of 5% of the qualified publications of Medicine III programs.

  12. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
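    The suggested overall metric [SVM + TRL]/ESM can be sketched as a weighted benefit/cost ratio. The weights and the example numbers below are hypothetical placeholders, since the paper only calls for "appropriate weighting and scaling":

```python
def als_system_metric(svm, trl, esm, w_svm=1.0, w_trl=1.0):
    """Overall ALS benefit/cost ratio suggested in the paper: (SVM + TRL) / ESM.
    Total value (SVM + TRL) is the benefit; Equivalent System Mass is the cost.
    The weights are hypothetical placeholders for 'appropriate weighting and scaling'."""
    return (w_svm * svm + w_trl * trl) / esm

# Two hypothetical life-support designs: the one delivering more value and
# maturity per unit of equivalent mass scores higher.
print(als_system_metric(svm=7.5, trl=6, esm=1200.0)
      > als_system_metric(svm=6.0, trl=4, esm=1500.0))  # True
```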

  13. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics has been reported, which makes comparing the performance of different systems difficult. A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.

  14. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low-probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows both outputs from system models and historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  15. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric, termed the power metric, is defined as the true positive rate divided by the sum of the true positive and false positive rates at a given cutoff threshold. Its performance is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating characteristic enrichment factor, the correct classification rate, Matthews correlation coefficient, and Cohen's kappa coefficient. The new metric is found to be quite robust with respect to variations in the applied cutoff threshold and in the ratio of the number of active compounds to the total number of compounds, while remaining sensitive to variations in model quality. It possesses the correct characteristics for application in early-recognition virtual screening problems.
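The definition in this abstract reduces to TPR / (TPR + FPR) at a chosen cutoff. A minimal sketch of that calculation from confusion-matrix counts (the function name and example counts are illustrative, not from the paper):

```python
def power_metric(tp: int, fp: int, fn: int, tn: int) -> float:
    """Power metric at a given cutoff: TPR / (TPR + FPR)."""
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)  # false positive rate
    return tpr / (tpr + fpr)

# A random ranking has TPR == FPR at every cutoff, giving 0.5;
# better-than-random screening pushes the value toward 1.
```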

  16. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This

  17. A Correlation Between Quality Management Metrics and Technical Performance Measurement

    DTIC Science & Technology

    2007-03-01

    Engineering Working Group; SME Subject Matter Expert; SoS System of Systems; SPI Schedule Performance Index; SSEI System of Systems Engineering and... and stated as such [Q, M, M&G]. The QMM equation is given by: QMM = 0.92RQM + 0.67EPM + 0.55RKM + 1.86PM, where RQM is the requirements management... schedule. Now if corrective action is not taken, the project/task will be completed behind schedule and over budget. ... As well as the derived

  18. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  19. Technical performance and match-to-match variation in elite football teams.

    PubMed

    Liu, Hongyou; Gómez, Miguel-Angel; Gonçalves, Bruno; Sampaio, Jaime

    2016-01-01

    Recent research suggests that match-to-match variation adds important information to performance descriptors in team sports, as it helps measure how players fine-tune their tactical behaviours and technical actions to extremely dynamic environments. The current study aims to identify differences in the technical performance of players from strong and weak teams and to explore match-to-match variation in players' technical match performance. Performance data from all 380 matches of the 2012-2013 season of the Spanish First Division Professional Football League were analysed. Twenty-one performance-related match actions and events were chosen as variables in the analyses. Players' technical performance profiles were established by unifying the count values of each action or event of each player per match onto the same scale. Means of these count values for players from Top3 and Bottom3 teams were compared and plotted in radar charts. The coefficient of variation of each match action or event within a player was calculated to represent his match-to-match variation in technical performance. Differences in the variation of technical performance across different match contexts (team and opposition strength, match outcome, and match location) were compared. All comparisons were made using magnitude-based inferences. Results showed that technical performance differed between players of strong and weak teams from different perspectives and across different field positions. Furthermore, the variation in players' technical performance is affected by the match context, with effects from team and opposition strength greater than effects from match location and match outcome.
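The match-to-match variation described here is the coefficient of variation of a player's per-match counts for each action type. A minimal sketch of that computation, assumed from the abstract's description (names and example values are illustrative):

```python
import statistics

def match_to_match_cv(per_match_counts: list[float]) -> float:
    """Coefficient of variation (%) of one player's per-match counts
    for a single action type: 100 * sample SD / mean."""
    mean = statistics.mean(per_match_counts)
    sd = statistics.stdev(per_match_counts)  # sample standard deviation
    return 100.0 * sd / mean
```

A perfectly consistent player (identical counts every match) has a CV of 0%; larger values indicate greater match-to-match variation.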

  20. 48 CFR 1816.402-270 - NASA technical performance incentives.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false NASA technical performance incentives. 1816.402-270 Section 1816.402-270 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND....402-270 NASA technical performance incentives. (a) Pursuant to the guidelines in 1816.402, NASA has...

  1. 48 CFR 1816.402-270 - NASA technical performance incentives.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true NASA technical performance incentives. 1816.402-270 Section 1816.402-270 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND....402-270 NASA technical performance incentives. (a) Pursuant to the guidelines in 1816.402, NASA has...

  2. 48 CFR 1816.402-270 - NASA technical performance incentives.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false NASA technical performance incentives. 1816.402-270 Section 1816.402-270 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND....402-270 NASA technical performance incentives. (a) Pursuant to the guidelines in 1816.402, NASA has...

  3. 48 CFR 1816.402-270 - NASA technical performance incentives.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false NASA technical performance incentives. 1816.402-270 Section 1816.402-270 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND....402-270 NASA technical performance incentives. (a) Pursuant to the guidelines in 1816.402, NASA has...

  4. 48 CFR 1816.402-270 - NASA technical performance incentives.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false NASA technical performance incentives. 1816.402-270 Section 1816.402-270 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND....402-270 NASA technical performance incentives. (a) Pursuant to the guidelines in 1816.402, NASA has...

  5. Highway safety performance metrics and emergency response in an advanced transportation environment : final report.

    DOT National Transportation Integrated Search

    2016-06-01

    Traditional highway safety performance metrics have been largely based on fatal crashes and more recently serious injury crashes. In the near future however, there may be less severe motor vehicle crashes due to advances in driver assistance systems,...

  6. Changing Metrics of Organ Procurement Organization Performance in Order to Increase Organ Donation Rates in the United States.

    PubMed

    Goldberg, D; Kallan, M J; Fu, L; Ciccarone, M; Ramirez, J; Rosenberg, P; Arnold, J; Segal, G; Moritsugu, K P; Nathan, H; Hasz, R; Abt, P L

    2017-12-01

    The shortage of deceased-donor organs is compounded by donation metrics that fail to account for the total pool of possible donors, leading to ambiguous donor statistics. We sought to assess potential metrics of organ procurement organizations (OPOs) utilizing data from the Nationwide Inpatient Sample (NIS) from 2009-2012 and State Inpatient Databases (SIDs) from 2008-2014. A possible donor was defined as a ventilated inpatient death ≤75 years of age, without multi-organ system failure, sepsis, or cancer, whose cause of death was consistent with organ donation. These estimates were compared to patient-level data from chart review from two large OPOs. Among 2,907,658 inpatient deaths from 2009-2012, 96,028 (3.3%) were a "possible deceased-organ donor." The two proposed metrics of OPO performance were: (1) donation percentage (percentage of possible deceased-donors who become actual donors; range: 20.0-57.0%); and (2) organs transplanted per possible donor (range: 0.52-1.74). These metrics allow for comparisons of OPO performance and geographic-level donation rates, and identify areas in greatest need of interventions to improve donation rates. We demonstrate that administrative data can be used to identify possible deceased donors in the US and could be a data source for CMS to implement new OPO performance metrics in a standardized fashion. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
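The two proposed OPO metrics are simple ratios over the estimated pool of possible donors. A sketch with made-up counts (the numbers are hypothetical, not the paper's data):

```python
def donation_percentage(actual_donors: int, possible_donors: int) -> float:
    """Metric 1: percentage of possible deceased donors who became actual donors."""
    return 100.0 * actual_donors / possible_donors

def organs_per_possible_donor(organs_transplanted: int, possible_donors: int) -> float:
    """Metric 2: organs transplanted per possible deceased donor."""
    return organs_transplanted / possible_donors

# Hypothetical OPO: 100 possible donors, 40 actual donors, 120 organs
# transplanted -> donation percentage 40.0, organs per possible donor 1.2
```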

  7. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements as well as a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used on other projects but are not currently used by the SA team, and to report them to the Software Assurance team to see whether any could be implemented in their software assurance life cycle process.

  8. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Boring; Roger Lew; Thomas Ulrich

    2014-03-01

    As control rooms at nuclear power plants are modernized with new digital systems, it is necessary to evaluate operator performance with these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what constitutes satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human-system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  9. Vocational and Technical Education Performance Standards and Competencies.

    ERIC Educational Resources Information Center

    Connecticut State Board of Education, Hartford.

    These Connecticut vocational and technical performance standards and competencies are a guide for overall quality attainment in these seven vocational and technical program areas: agricultural science technology education; business and finance technology education; cooperative work education; family and consumer sciences education; marketing…

  10. Dose-volume metrics and their relation to memory performance in pediatric brain tumor patients: A preliminary study.

    PubMed

    Raghubar, Kimberly P; Lamba, Michael; Cecil, Kim M; Yeates, Keith Owen; Mahone, E Mark; Limke, Christina; Grosshans, David; Beckwith, Travis J; Ris, M Douglas

    2018-06-01

    Advances in radiation treatment (RT), specifically volumetric planning with detailed dose and volumetric data for specific brain structures, have provided new opportunities to study neurobehavioral outcomes of RT in children treated for brain tumor. The present study examined the relationship between biophysical and physical dose metrics and neurocognitive ability, namely learning and memory, 2 years post-RT in pediatric brain tumor patients. The sample consisted of 26 pediatric patients with brain tumor, 14 of whom completed neuropsychological evaluations on average 24 months post-RT. Prescribed dose and dose-volume metrics for specific brain regions were calculated including physical metrics (i.e., mean dose and maximum dose) and biophysical metrics (i.e., integral biological effective dose and generalized equivalent uniform dose). We examined the associations between dose-volume metrics (whole brain, right and left hippocampus), and performance on measures of learning and memory (Children's Memory Scale). Biophysical dose metrics were highly correlated with the physical metric of mean dose but not with prescribed dose. Biophysical metrics and mean dose, but not prescribed dose, correlated with measures of learning and memory. These preliminary findings call into question the value of prescribed dose for characterizing treatment intensity; they also suggest that biophysical dose has only a limited advantage compared to physical dose when calculated for specific regions of the brain. We discuss the implications of the findings for evaluating and understanding the relation between RT and neurocognitive functioning. © 2018 Wiley Periodicals, Inc.

  11. Surveillance metrics sensitivity study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide a more quantitative and/or qualitative description of the effect of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  12. Surveillance Metrics Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide a more quantitative and/or qualitative description of the effect of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  13. Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery.

    PubMed

    Jarc, Anthony M; Curet, Myriam J

    2017-03-01

    Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as well as efficiency metrics. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.

  14. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  15. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Providing performance and technical data to...: REQUIREMENTS TO PROVIDE THE COMMISSION WITH PERFORMANCE AND TECHNICAL DATA; REQUIREMENTS TO NOTIFY CONSUMERS AT POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to...

  16. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Providing performance and technical data to...: REQUIREMENTS TO PROVIDE THE COMMISSION WITH PERFORMANCE AND TECHNICAL DATA; REQUIREMENTS TO NOTIFY CONSUMERS AT POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to...

  17. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Providing performance and technical data to...: REQUIREMENTS TO PROVIDE THE COMMISSION WITH PERFORMANCE AND TECHNICAL DATA; REQUIREMENTS TO NOTIFY CONSUMERS AT POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to...

  18. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Providing performance and technical data to...: REQUIREMENTS TO PROVIDE THE COMMISSION WITH PERFORMANCE AND TECHNICAL DATA; REQUIREMENTS TO NOTIFY CONSUMERS AT POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to...

  19. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory

    PubMed Central

    Kumar, B. Vinodh; Mohan, Thuthi

    2018-01-01

    OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587

  20. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    PubMed

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI >1.2 indicated inaccuracy. This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
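The abstract does not spell out the formulas, but in the Westgard convention such studies typically follow, the sigma metric and the quality goal index (QGI) are computed as below. TEa (total allowable error) and the example numbers are assumptions for illustration, not values from the study:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |Bias|) / CV, with all terms expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct: float, cv_pct: float) -> float:
    """QGI = |Bias| / (1.5 * CV). In the usual reading, QGI < 0.8 points to
    imprecision, QGI > 1.2 to inaccuracy, and values in between to both."""
    return abs(bias_pct) / (1.5 * cv_pct)
```

For example, an analyte with an assumed TEa of 10%, bias of 2%, and CV of 2% would score 4 sigma, below the ≥6 sigma "ideal" band used in the study.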

  1. Assessing colonoscopic inspection skill using a virtual withdrawal simulation: a preliminary validation of performance metrics.

    PubMed

    Zupanc, Christine M; Wallis, Guy M; Hill, Andrew; Burgess-Limerick, Robin; Riek, Stephan; Plooy, Annaliese M; Horswill, Mark S; Watson, Marcus O; de Visser, Hans; Conlan, David; Hewett, David G

    2017-07-12

    The effectiveness of colonoscopy for diagnosing and preventing colon cancer is largely dependent on the ability of endoscopists to fully inspect the colonic mucosa, which they achieve primarily through skilled manipulation of the colonoscope during withdrawal. Performance assessment during live procedures is problematic. However, a virtual withdrawal simulation can help identify and parameterise actions linked to successful inspection, and offer standardised assessments for trainees. Eleven experienced endoscopists and 18 endoscopy novices (medical students) completed a mucosal inspection task during three simulated colonoscopic withdrawals. The two groups were compared on 10 performance metrics to preliminarily assess the validity of these measures to describe inspection quality. Four metrics were related to aspects of polyp detection: percentage of polyp markers found; number of polyp markers found per minute; percentage of the mucosal surface illuminated by the colonoscope (≥0.5 s); and percentage of polyp markers illuminated (≥2.5 s) but not identified. A further six metrics described the movement of the colonoscope: withdrawal time; linear distance travelled by the colonoscope tip; total distance travelled by the colonoscope tip; and distance travelled by the colonoscope tip due to movement of the up/down angulation control, movement of the left/right angulation control, and axial shaft rotation. Statistically significant experienced-novice differences were found for 8 of the 10 performance metrics (p's < .005). Compared with novices, experienced endoscopists inspected more of the mucosa and detected more polyp markers, at a faster rate. Despite completing the withdrawals more quickly than the novices, the experienced endoscopists also moved the colonoscope more in terms of linear distance travelled and overall tip movement, with greater use of both the up/down angulation control and axial shaft rotation. However, the groups did not differ in the

  2. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics.

    PubMed

    Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A

    2010-10-01

    This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting. © 2010 The Authors. Journal of Advanced

  3. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics

    PubMed Central

    Fero, Laura J.; O’Donnell, John M.; Zullo, Thomas G.; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T.; Hoffman, Leslie A.

    2018-01-01

Aim This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Background Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. Methods In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as ‘meeting’ or ‘not meeting’ overall expectations. Test scores were categorized as strong, average, or weak. Results Most (75·0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0·277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0·001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer’s V = 0·444, P = 0·029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer’s V = 0·413, P = 0·047). Conclusion Students’ performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting.
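    The association statistic reported in this record, Cramer's V, is straightforward to compute from a contingency table. A minimal sketch in Python; the counts below are hypothetical, since the study's raw cross-tabulations are not given in the abstract:

```python
import math

def cramers_v(table):
    """Cramer's V for an r x c contingency table given as lists of counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson chi-square statistic against the independence model
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(row_tot))
        for j in range(len(col_tot))
    )
    k = min(len(row_tot), len(col_tot))  # smaller table dimension
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical 2x2 table: rows = met / did not meet expectations,
# columns = strong / weak critical thinking scores
v = cramers_v([[9, 3], [6, 18]])  # about 0.48
```

    V ranges from 0 (no association) to 1 (perfect association), so the reported values near 0.41-0.44 indicate a moderate relationship.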

  4. A Single Conjunction Risk Assessment Metric: the F-Value

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; Newman, Lauri K.

    2009-01-01

    The Conjunction Assessment Team at NASA Goddard Space Flight Center provides conjunction risk assessment for many NASA robotic missions. These risk assessments are based on several figures of merit, such as miss distance, probability of collision, and orbit determination solution quality. However, these individual metrics do not singly capture the overall risk associated with a conjunction, making it difficult for someone without this complete understanding to take action, such as an avoidance maneuver. The goal of this analysis is to introduce a single risk index metric that can easily convey the level of risk without all of the technical details. The proposed index is called the conjunction "F-value." This paper presents the concept of the F-value and the tuning of the metric for use in routine Conjunction Assessment operations.

  5. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    PubMed

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings.

  6. 16 CFR 1407.3 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Providing performance and technical data to... TECHNICAL DATA BY LABELING § 1407.3 Providing performance and technical data to purchasers by labeling. (a... technical data related to performance and safety to prospective purchasers of such products at the time of...

  7. 16 CFR 1407.3 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Providing performance and technical data to... TECHNICAL DATA BY LABELING § 1407.3 Providing performance and technical data to purchasers by labeling. (a... technical data related to performance and safety to prospective purchasers of such products at the time of...

  8. 16 CFR 1407.3 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Providing performance and technical data to... TECHNICAL DATA BY LABELING § 1407.3 Providing performance and technical data to purchasers by labeling. (a... technical data related to performance and safety to prospective purchasers of such products at the time of...

  9. 16 CFR 1407.3 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Providing performance and technical data to... TECHNICAL DATA BY LABELING § 1407.3 Providing performance and technical data to purchasers by labeling. (a... technical data related to performance and safety to prospective purchasers of such products at the time of...

  10. Making the Case for Objective Performance Metrics in Newborn Screening by Tandem Mass Spectrometry

    ERIC Educational Resources Information Center

    Rinaldo, Piero; Zafari, Saba; Tortorelli, Silvia; Matern, Dietrich

    2006-01-01

    The expansion of newborn screening programs to include multiplex testing by tandem mass spectrometry requires understanding and close monitoring of performance metrics. This is not done consistently because of lack of defined targets, and interlaboratory comparison is almost nonexistent. Between July 2004 and April 2006 (N = 176,185 cases), the…

  11. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
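    The sigma metric described in this record is conventionally computed from an assay's allowable total error (TEa), bias, and imprecision (CV). A minimal sketch with hypothetical assay values:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Westgard-style sigma metric: (allowable total error - |bias|) / CV,
    with every term expressed as a percentage of the analyte concentration."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa = 10%, bias = 1%, CV = 1.5%
s = sigma_metric(10.0, 1.0, 1.5)  # -> 6.0 on the sigma scale
```

    As the abstract notes, methods scoring sigma 5 or better can be monitored with minimal QC rules.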

  12. Do Your Students Measure Up Metrically?

    ERIC Educational Resources Information Center

    Taylor, P. Mark; Simms, Ken; Kim, Ok-Kyeong; Reys, Robert E.

    2001-01-01

    Examines released metric items from the Third International Mathematics and Science Study (TIMSS) and the 3rd and 4th grade results. Recommends refocusing instruction on the metric system to improve student performance in measurement. (KHR)

  13. On the correlation between reservoir metrics and performance for time series classification under the influence of synaptic plasticity.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-01-01

Reservoir computing provides a simpler paradigm for training recurrent networks by initialising and adapting the recurrent connections separately from a supervised linear readout. This creates a problem, though. As the recurrent weights and topology are no longer adapted to the task, the burden falls on the reservoir designer to construct an effective network that happens to produce state vectors that can be mapped linearly into the desired outputs. Guidance in forming a reservoir can come from established metrics that link a number of theoretical properties of the reservoir computing paradigm to quantitative measures that can be used to evaluate the effectiveness of a given design. We provide a comprehensive empirical study of four metrics: class separation, kernel quality, Lyapunov's exponent and spectral radius. These metrics are each compared over a number of repeated runs, for different reservoir computing set-ups that include three types of network topology and three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with the classification performance are Lyapunov's exponent and kernel quality. It is also evident in the comparisons that these two metrics both measure a similar property of the reservoir dynamics. We also find that class separation and spectral radius are both less reliable and less effective in predicting performance.
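    Of the four reservoir metrics compared in this record, the spectral radius is the simplest: the largest absolute eigenvalue of the recurrent weight matrix, which designers commonly rescale to a target value. A sketch; the 100-unit size, 10% connection density and target radius of 0.9 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def scale_to_spectral_radius(w, rho):
    """Rescale a recurrent weight matrix so its largest absolute
    eigenvalue (spectral radius) equals rho."""
    radius = max(abs(np.linalg.eigvals(w)))
    return w * (rho / radius)

# Illustrative reservoir: 100 units, roughly 10% connection density
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100)) * (rng.random((100, 100)) < 0.1)
w = scale_to_spectral_radius(w, rho=0.9)
```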

  14. Setting Performance Standards for Technical and Nontechnical Competence in General Surgery.

    PubMed

    Szasz, Peter; Bonrath, Esther M; Louridas, Marisa; Fecso, Andras B; Howe, Brett; Fehr, Adam; Ott, Michael; Mack, Lloyd A; Harris, Kenneth A; Grantcharov, Teodor P

    2017-07-01

The objectives of this study were to (1) create a technical and nontechnical performance standard for the laparoscopic cholecystectomy, (2) assess the classification accuracy and (3) credibility of these standards, (4) determine trainees' ability to meet both standards concurrently, and (5) delineate factors that predict standard acquisition. Scores on performance assessments are difficult to interpret in the absence of established standards. Trained raters observed General Surgery residents performing laparoscopic cholecystectomies using the Objective Structured Assessment of Technical Skill (OSATS) and the Objective Structured Assessment of Non-Technical Skills (OSANTS) instruments, while also providing a global competent/noncompetent decision for each performance. The global decision was used to divide the trainees into 2 contrasting groups and the OSATS or OSANTS scores were graphed per group to determine the performance standard. Parametric statistics were used to determine classification accuracy and concurrent standard acquisition, and receiver operating characteristic (ROC) curves were used to delineate predictive factors. Thirty-six trainees were observed 101 times. The technical standard was an OSATS of 21.04/35.00 and the nontechnical standard an OSANTS of 22.49/35.00. Applying these standards, competent/noncompetent trainees could be discriminated in 94% of technical and 95% of nontechnical performances (P < 0.001). A 21% discordance between technically and nontechnically competent trainees was identified (P < 0.001). ROC analysis demonstrated case experience and trainee level were both able to predict achieving the standards with an area under the curve (AUC) between 0.83 and 0.96 (P < 0.001). The present study presents defensible standards for technical and nontechnical performance. Such standards are imperative to implementing summative assessments into surgical training.

  15. Patterns of Hospital Performance on the Hospital-Wide 30-Day Readmission Metric: Is the Playing Field Level?

    PubMed

    Hoyer, Erik H; Padula, William V; Brotman, Daniel J; Reid, Natalie; Leung, Curtis; Lepley, Diane; Deutschendorf, Amy

    2018-01-01

Hospital performance on the 30-day hospital-wide readmission (HWR) metric as calculated by the Centers for Medicare and Medicaid Services (CMS) is currently reported as a quality measure. Focusing on patient-level factors may provide an incomplete picture of readmission risk at the hospital level to explain variations in hospital readmission rates. To evaluate and quantify hospital-level characteristics that track with hospital performance on the current HWR metric. Retrospective cohort study. A total of 4785 US hospitals. We linked publicly available data on individual hospitals published by CMS on patient-level adjusted 30-day HWR rates from July 1, 2011, through June 30, 2014, to the 2014 American Hospital Association annual survey. Primary outcome was performance in the worst CMS-calculated HWR quartile. Primary hospital-level exposure variables were defined as: size (total number of beds), safety net status (top quartile of disproportionate share), academic status [member of the Association of American Medical Colleges (AAMC)], National Cancer Institute Comprehensive Cancer Center (NCI-CCC) status, and hospital services offered (e.g., transplant, hospice, emergency department). Multilevel regression was used to evaluate the association between 30-day HWR and the hospital-level factors. Hospital-level characteristics significantly associated with performing in the worst CMS-calculated HWR quartile included: safety net status [adjusted odds ratio (aOR) 1.99, 95% confidence interval (95% CI) 1.61-2.45, p < 0.001], large size (> 400 beds, aOR 1.42, 95% CI 1.07-1.90, p = 0.016), AAMC alone status (aOR 1.95, 95% CI 1.35-2.83, p < 0.001), and AAMC plus NCI-CCC status (aOR 5.16, 95% CI 2.58-10.31, p < 0.001). Hospitals with more critical care beds (aOR 1.26, 95% CI 1.02-1.56, p = 0.033), those with transplant services (aOR 2.80, 95% CI 1.48-5.31, p = 0.001), and those with emergency room services (aOR 3.37, 95% CI 1.12-10.15, p = 0.031) demonstrated

  16. Performance Metrics in Professional Baseball Pitchers before and after Surgical Treatment for Neurogenic Thoracic Outlet Syndrome.

    PubMed

    Thompson, Robert W; Dawkins, Corey; Vemuri, Chandu; Mulholland, Michael W; Hadzinsky, Tyler D; Pearl, Gregory J

    2017-02-01

    High-performance throwing athletes may be susceptible to the development of neurogenic thoracic outlet syndrome (NTOS). This condition can be career-threatening but the outcomes of treatment for NTOS in elite athletes have not been well characterized. The purpose of this study was to utilize objective performance metrics to evaluate the impact of surgical treatment for NTOS in Major League Baseball (MLB) pitchers. Thirteen established MLB pitchers underwent operations for NTOS between July 2001 and July 2014. For those returning to MLB, traditional and advanced (PitchF/x) MLB performance metrics were acquired from public databases for various time-period scenarios before and after surgery, with comparisons made using paired t-tests, Wilcoxon matched-pair signed-rank tests, and Kruskal-Wallis analysis of variance. Ten of 13 pitchers (77%) achieved a sustained return to MLB, with a mean age of 30.2 ± 1.4 years at the time of surgery and 10.8 ± 1.5 months of postoperative rehabilitation before the return to MLB. Pre- and postoperative career data revealed no significant differences for 15 traditional pitching metrics, including earned run average (ERA), fielding independent pitching, walks plus hits per inning pitched (WHIP), walks per 9 innings, and strikeouts to walk ratio (SO/BB). There were also no significant differences between the 3 years before and the 3 years after surgical treatment. Using PitchF/x data for 72 advanced metrics and 25 different time-period scenarios, the highest number of significant relationships (n = 18) was observed for the 8 weeks before/12 weeks after scenario. In this analysis, 54 (75%) measures were unchanged (including ERA, WHIP, and SO/BB) and 14 (19%) were significantly improved, while only 4 (6%) were significantly decreased (including hard pitch maximal velocity 93.1 ± 1.0 vs. 92.5 ± 0.9 miles/hr, P = 0.047). Six pitchers remained active in MLB during the study period, while the other 4 had retired due to

  17. 16 CFR 1406.5 - Performance and technical data to be furnished to the Commission.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Commission the following performance and technical data related to performance and safety. (a) Written notice... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Performance and technical data to be... TECHNICAL DATA § 1406.5 Performance and technical data to be furnished to the Commission. Manufacturers...

  18. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

Representation of fronts detected by altimetry is used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily alongtrack altimetry. Then, modeled sea surface height is interpolated to the locations of the alongtrack observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained from determining the number of "extra" fronts found in the model but not in the altimetry, and from assessing the horizontal and vertical dimensions of the front in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy alongtrack altimetry observations, and to the minimum size of fronts being analyzed, is assessed.
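    The abstract does not give the frontal detection algorithm itself, so the following is only an assumed gradient-threshold stand-in, paired with the replication score the record does describe (the percentage of observed fronts also found in the model). The 7 km alongtrack spacing and the 0.1 m per 100 km threshold are illustrative guesses:

```python
import numpy as np

def detect_fronts(ssh, spacing_km=7.0, threshold=0.1):
    """Flag alongtrack points where the absolute SSH gradient, in metres
    per 100 km, exceeds a threshold (an assumed stand-in detector)."""
    grad = np.abs(np.gradient(ssh)) / spacing_km * 100.0
    return grad > threshold

def frontal_score(obs_fronts, model_fronts):
    """Percentage (0-100) of observed frontal points replicated by the model."""
    n_obs = obs_fronts.sum()
    if n_obs == 0:
        return 100.0
    return 100.0 * (obs_fronts & model_fronts).sum() / n_obs
```

    The same boolean masks could also count the "extra" model-only fronts mentioned above, by swapping the arguments.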

  19. CONTACT: An Air Force technical report on military satellite control technology

    NASA Astrophysics Data System (ADS)

    Weakley, Christopher K.

    1993-07-01

    This technical report focuses on Military Satellite Control Technologies and their application to the Air Force Satellite Control Network (AFSCN). This report is a compilation of articles that provide an overview of the AFSCN and the Advanced Technology Program, and discusses relevant technical issues and developments applicable to the AFSCN. Among the topics covered are articles on Future Technology Projections; Future AFSCN Topologies; Modeling of the AFSCN; Wide Area Communications Technology Evolution; Automating AFSCN Resource Scheduling; Health & Status Monitoring at Remote Tracking Stations; Software Metrics and Tools for Measuring AFSCN Software Performance; Human-Computer Interface Working Group; Trusted Systems Workshop; and the University Technical Interaction Program. In addition, Key Technology Area points of contact are listed in the report.

  20. 16 CFR § 1401.5 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Providing performance and technical data to...: REQUIREMENTS TO PROVIDE THE COMMISSION WITH PERFORMANCE AND TECHNICAL DATA; REQUIREMENTS TO NOTIFY CONSUMERS AT POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to...

  1. PV System 'Availability' as a Reliability Metric -- Improving Standards, Contract Language and Performance Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey T.; Hill, Roger; Walker, Andy

The use of the term 'availability' to describe a photovoltaic (PV) system and power plant has been fraught with confusion for many years. A term that is meant to describe equipment operational status is often omitted, misapplied or inaccurately combined with PV performance metrics due to attempts to measure performance and reliability through the lens of traditional power plant language. This paper discusses three areas where current research in standards, contract language and performance modeling is improving the way availability is used with regard to photovoltaic systems and power plants.

  2. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  3. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  4. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress for advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an Industrial Engineer and has been involved in issues related to management of engineering tasks and will use his knowledge from this area to allow extrapolation into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect to have him recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin and some review material has already been given to him.

  5. On Information Metrics for Spatial Coding.

    PubMed

    Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L

    2018-04-01

    The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
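    The Skaggs et al. bits-per-spike metric this record refers to has a compact closed form, sketched directly below (the two-bin example data are illustrative, not from the paper):

```python
import numpy as np

def skaggs_information(occupancy, rates):
    """Spatial information in bits/spike (Skaggs et al., 1993):
    sum over bins of p_i * (r_i / R) * log2(r_i / R),
    where p_i is the occupancy probability of bin i, r_i the firing
    rate in that bin, and R the occupancy-weighted mean rate."""
    p = occupancy / occupancy.sum()
    mean_rate = (p * rates).sum()
    nz = rates > 0  # zero-rate bins contribute nothing (r * log r -> 0)
    ratio = rates[nz] / mean_rate
    return float((p[nz] * ratio * np.log2(ratio)).sum())

# A cell firing only in one of two equally occupied bins carries 1 bit/spike
info = skaggs_information(np.ones(2), np.array([2.0, 0.0]))
```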

  6. The use of player physical and technical skill match activity profiles to predict position in the Australian Football League draft.

    PubMed

    Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam

    2017-02-01

This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated from all participants in the 2014 national under 18 (U18) championships (18 games) drafted into the AFL (n = 65; 17.8 ± 0.5 y); 232 observations. Players were subdivided into draft position (ranked 1-65) and then draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection. Physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces. Juniors with contested skill appear prioritised.

  7. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    PubMed

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
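    The abstract says SIMPATIQCO "learns the range indicating adequate system performance ... using robust statistics" without specifying the estimator; a median plus or minus k scaled MADs is one common robust choice and is sketched here purely as an illustration:

```python
import statistics

def robust_range(values, k=3.0):
    """Acceptance range as median +/- k * scaled MAD.  This is one common
    robust estimator; the abstract does not state SIMPATIQCO's exact rule."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    sigma = 1.4826 * mad  # makes MAD consistent with std dev for normal data
    return med - k * sigma, med + k * sigma

# Hypothetical peptide retention times (min) from successive QC runs
lo, hi = robust_range([10, 11, 9, 10, 12, 10, 8])
```

    Unlike a mean-based range, the median/MAD pair is barely moved by the occasional bad run, which is what makes it suitable for learning limits from unscreened QC history.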

  8. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC–MS Performance Metrics on Orbitrap Instruments

    PubMed Central

    2012-01-01

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC–MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge. PMID:23088386

  9. Use of social media in health promotion: purposes, key performance indicators, and evaluation metrics.

    PubMed

    Neiger, Brad L; Thackeray, Rosemary; Van Wagenen, Sarah A; Hanson, Carl L; West, Joshua H; Barnes, Michael D; Fagen, Michael C

    2012-03-01

    Despite the expanding use of social media, little has been published about its appropriate role in health promotion, and even less has been written about evaluation. The purpose of this article is threefold: (a) outline purposes for social media in health promotion, (b) identify potential key performance indicators associated with these purposes, and (c) propose evaluation metrics for social media related to the key performance indicators. Process evaluation is presented in this article as an overarching evaluation strategy for social media.

  10. 16 CFR § 1407.3 - Providing performance and technical data to purchasers by labeling.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Providing performance and technical data to... TECHNICAL DATA BY LABELING § 1407.3 Providing performance and technical data to purchasers by labeling. (a... technical data related to performance and safety to prospective purchasers of such products at the time of...

  11. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
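    One of the two metrics the paper models, McCabe's cyclomatic complexity, is essentially one plus the number of decision points in a routine. A simplified extractor, using Python's `ast` module as a stand-in for the paper's own tooling (which the abstract does not describe):

```python
import ast

# Node types counted as decision points (a simplification of McCabe's rules)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def mccabe_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def grade(x):
    if x > 90:
        return 'A'
    elif x > 80:
        return 'B'
    return 'C'
"""
c = mccabe_complexity(code)  # two If nodes -> complexity 3
```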

  12. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  13. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, particularly in regions characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distributions, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR and that all methods lead to large improvements in representing all performance metrics. When TOPSIS is applied to the seasonal performance metrics, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the
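    TOPSIS, the ranking step used above, orders alternatives by their relative closeness to an ideal solution. A minimal sketch for benefit-type criteria (the method scores and weights below are illustrative, not the study's data):

    ```python
    import math

    def topsis(matrix, weights):
        """Rank alternatives (rows) over benefit criteria (columns) by
        closeness to the ideal solution. Returns closeness scores in [0, 1]."""
        ncrit = len(weights)
        # 1. Vector-normalize each column, then apply criterion weights.
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
        v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
        # 2. Ideal best / worst per criterion (for benefit criteria, max is best).
        best = [max(col) for col in zip(*v)]
        worst = [min(col) for col in zip(*v)]
        # 3. Euclidean distance of each alternative to best and worst solutions.
        scores = []
        for row in v:
            d_best = math.dist(row, best)
            d_worst = math.dist(row, worst)
            scores.append(d_worst / (d_best + d_worst))
        return scores

    # Three hypothetical downscaling methods scored on two performance metrics:
    methods = [[0.90, 0.85], [0.60, 0.70], [0.50, 0.40]]
    scores = topsis(methods, [0.5, 0.5])
    print(max(range(3), key=scores.__getitem__))  # method 0 ranks first
    ```

    A dominating alternative (best on every criterion) gets closeness 1, and a dominated one gets 0, which is what makes the score usable as a season-by-season ranking of methods.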

  14. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.
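    The two kinds of accuracy figures reported here, RMSE and mean percentage over/underestimation, can be reproduced from paired model/reference series. A sketch with made-up ET values, not the study's measurements:

    ```python
    import math

    def rmse(model, ref):
        """Root-mean-square error between paired model and reference values."""
        return math.sqrt(sum((m - r) ** 2 for m, r in zip(model, ref)) / len(ref))

    def percent_bias(model, ref):
        """Total relative deviation of model from reference, in percent;
        positive values indicate overestimation, negative underestimation."""
        return 100.0 * (sum(model) - sum(ref)) / sum(ref)

    # Hypothetical daily ET (mm/day) from METRIC vs. eddy-covariance tower:
    metric_et = [5.1, 6.0, 4.4, 5.6]
    ec_et = [5.0, 6.2, 4.8, 5.9]
    print(round(rmse(metric_et, ec_et), 3))
    print(round(percent_bias(metric_et, ec_et), 2))  # negative -> underestimation
    ```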

  15. Structural texture similarity metrics for image analysis and retrieval.

    PubMed

    Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L

    2013-07-01

    We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
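    PSNR, the first baseline named above, is a purely point-by-point measure, which is exactly why it penalizes textures that are perceptually identical but pixel-wise different. A minimal implementation for 8-bit images given as flat pixel lists (the example values are illustrative):

    ```python
    import math

    def psnr(img_a, img_b, max_val=255):
        """Peak signal-to-noise ratio in dB between two equal-size images,
        given as flat sequences of pixel intensities."""
        mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
        if mse == 0:
            return float("inf")  # identical images
        return 10 * math.log10(max_val ** 2 / mse)

    # A uniform shift of 5 gray levels: MSE = 25, PSNR = 20*log10(255/5)
    a = [100, 120, 140, 160]
    b = [105, 125, 145, 165]
    print(round(psnr(a, b), 2))  # 34.15
    ```

    A barely visible uniform shift and a structure-destroying rearrangement of the same pixels can yield the same MSE, hence the motivation for statistics-based similarity metrics.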

  16. Is the technical performance of young soccer players influenced by hormonal status, sexual maturity, anthropometric profile, and physical performance?

    PubMed

    Moreira, Alexandre; Massa, Marcelo; Thiengo, Carlos R; Rodrigues Lopes, Rafael Alan; Lima, Marcelo R; Vaeyens, Roel; Barbosa, Wesley P; Aoki, Marcelo S

    2017-12-01

    The aim of this study was to examine the influence of hormonal status, anthropometric profile, sexual maturity level, and physical performance on the technical abilities of 40 young male soccer players during small-sided games (SSGs). Anthropometric profiling, saliva sampling, sexual maturity assessment (Tanner scale), and physical performance tests (Yo-Yo and vertical jumps) were conducted two weeks prior to the SSGs. Salivary testosterone was determined by the enzyme-linked immunosorbent assay method. Technical performance was determined by the frequency of actions during SSGs. Principal component analyses identified four technical actions of importance: total number of passes, effectiveness, goal attempts, and total tackles. A multivariate canonical correlation analysis was then employed to verify the prediction of a set of multiple dependent variables (the four technical actions) from an independent set of variables composed of testosterone concentration, stage of pubic hair and genitalia development, vertical jumps, and Yo-Yo performance. A moderate-to-large relationship between the technical performance set and the independent set was observed. The canonical correlation was 0.75, with a canonical R² of 0.45. The highest structure coefficient in the technical performance set was observed for tackles (0.77), while testosterone presented the highest structure coefficient (0.75) for the variables of the independent set. The current data suggest that the selected independent set of variables might be useful in predicting SSG performance in young soccer players. Coaches should be aware that physical development plays a key role in technical performance to avoid decision-making mistakes during the selection of young players.

  17. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
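    The area under a standard ROC curve, the base quantity the 3D surface metric generalizes, can be computed from detection scores by counting correctly ordered positive/negative pairs, which is equivalent to trapezoidal integration of the curve when no scores are tied. A sketch with hypothetical labels and scores:

    ```python
    def roc_auc(labels, scores):
        """AUC for binary labels (1 = fault, 0 = nominal), assuming no tied
        scores: the fraction of positive/negative pairs ranked correctly."""
        ranked = sorted(zip(scores, labels), reverse=True)
        tp = fp = area = 0
        for _, label in ranked:
            if label == 1:
                tp += 1          # step up the TPR axis
            else:
                fp += 1          # step right: accumulate a column of height tp
                area += tp
        return area / (tp * fp)

    print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0 (perfect detector)
    print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.2]))  # 0.75
    ```

    The 3D extension described in the paper would add classification performance and diagnostic latency as further integration dimensions on top of this detection AUC.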

  18. The impact of nontechnical skills on technical performance in surgery: a systematic review.

    PubMed

    Hull, Louise; Arora, Sonal; Aggarwal, Rajesh; Darzi, Ara; Vincent, Charles; Sevdalis, Nick

    2012-02-01

    Failures in nontechnical and teamwork skills frequently lie at the heart of harm and near-misses in the operating room (OR). The purpose of this systematic review was to assess the impact of nontechnical skills on technical performance in surgery. MEDLINE, EMBASE, PsycINFO databases were searched, and 2,041 articles were identified. After limits were applied, 341 articles were retrieved for evaluation. Of these, 28 articles were accepted for this review. Data were extracted from the articles regarding sample population, study design and setting, measures of nontechnical skills and technical performance, study findings, and limitations. Of the 28 articles that met inclusion criteria, 21 articles assessed the impact of surgeons' nontechnical skills on their technical performance. The evidence suggests that receiving feedback and effectively coping with stressful events in the OR has a beneficial impact on certain aspects of technical performance. Conversely, increased levels of fatigue are associated with detriments to surgical skill. One article assessed the impact of anesthesiologists' nontechnical skills on anesthetic technical performance, finding a strong positive correlation between the 2 skill sets. Finally, 6 articles assessed the impact of multiple nontechnical skills of the entire OR team on surgical performance. A strong relationship between teamwork failure and technical error was empirically demonstrated in these studies. Evidence suggests that certain nontechnical aspects of performance can enhance or, if lacking, contribute to deterioration of surgeons' technical performance. The precise extent of this effect remains to be elucidated. Copyright © 2012 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slide and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
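    The per-frame comparison of an automatic segmentation against manual ground truth reduces to pixel-wise counts. A minimal sketch on flattened binary masks; the Dice score is added here for illustration, since the abstract itself names only true-positive and false-negative style metrics:

    ```python
    def segmentation_metrics(ground_truth, predicted):
        """Pixel-wise rates for binary masks given as flat 0/1 sequences."""
        tp = sum(g and p for g, p in zip(ground_truth, predicted))
        fp = sum((not g) and p for g, p in zip(ground_truth, predicted))
        fn = sum(g and (not p) for g, p in zip(ground_truth, predicted))
        tn = sum((not g) and (not p) for g, p in zip(ground_truth, predicted))
        return {
            "tpr": tp / (tp + fn),    # correctly segmented bone
            "tnr": tn / (tn + fp),    # correctly segmented boneless region
            "dice": 2 * tp / (2 * tp + fp + fn),
        }

    # Hypothetical 6-pixel masks (1 = bone, 0 = boneless):
    gt   = [1, 1, 0, 0, 0, 1]
    pred = [1, 0, 0, 0, 1, 1]
    m = segmentation_metrics(gt, pred)
    print(m["tpr"], m["tnr"], round(m["dice"], 3))
    ```

    Computing these per frame and then averaging (with a standard deviation) along the volume mirrors the framework's per-frame evaluation strategy.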

  20. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  1. Impact of hydrogeological data on measures of uncertainty, site characterization and environmental performance metrics

    NASA Astrophysics Data System (ADS)

    de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram

    2012-02-01

    The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent (e.g., travel times) and does not necessarily indicate uncertainty reduction in an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.

  2. Spatial abilities and technical skills performance in health care: a systematic review.

    PubMed

    Langlois, Jean; Bellemare, Christian; Toulouse, Josée; Wells, George A

    2015-11-01

    The aim of this study was to conduct a systematic review and meta-analysis of the relationship between spatial abilities and technical skills performance in health care in beginners and to compare this relationship with those in intermediate and autonomous learners. Search criteria included 'spatial abilities' and 'technical skills'. Keywords related to these criteria were defined. A literature search was conducted to 20 December, 2013 in Scopus (including MEDLINE) and in several databases on EBSCOhost platforms (CINAHL Plus with Full Text, ERIC, Education Source and PsycINFO). Citations were obtained and reviewed by two independent reviewers. Articles related to retained citations were reviewed and a final list of eligible articles was determined. Articles were assessed for quality using the Scottish Intercollegiate Guidelines Network-50 assessment instrument. Data were extracted from articles in a systematic way. Correlations between spatial abilities test scores and technical skills performance were identified. A series of 8289 citations was obtained. Eighty articles were retained and fully reviewed, yielding 36 eligible articles. The systematic review found a tendency for spatial abilities to be negatively correlated with the duration of technical skills and positively correlated with the quality of technical skills performance in beginners and intermediate learners. Pooled correlations of studies were -0.46 (p = 0.03) and -0.38 (95% confidence interval [CI] -0.53 to -0.21) for duration and 0.33 (95% CI 0.20-0.44) and 0.41 (95% CI 0.26-0.54) for quality of technical skills performance in beginners and intermediate learners, respectively. However, correlations between spatial abilities test scores and technical skills performance were not statistically significant in autonomous learners. Spatial abilities are an important factor to consider in selecting and training individuals in technical skills in health care. © 2015 John Wiley & Sons Ltd.
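    The pooled correlations quoted above are the kind of quantity a meta-analysis produces by averaging study correlations on Fisher's z scale. A simplified fixed-effect sketch (the study r values and sample sizes are made up, not taken from the review):

    ```python
    import math

    def pool_correlations(rs, ns):
        """Fixed-effect pooled correlation: average study correlations on
        Fisher's z scale, weighting each study by n - 3."""
        zs = [math.atanh(r) for r in rs]
        weights = [n - 3 for n in ns]
        z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
        return math.tanh(z_bar)

    # Two hypothetical studies correlating spatial ability with skill quality:
    pooled = pool_correlations([0.33, 0.41], [60, 40])
    print(round(pooled, 3))
    ```

    The z transform stabilizes the variance of r, so larger studies (higher n − 3 weight) pull the pooled estimate toward their value.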

  3. NASA Technical Standards Program

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned-standards integration system. The Program maintains a 'one stop-shop' Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to both users within the nasa.gov Domain and the Public Domain.

  4. NASA Technical Standards Program

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, WIlliam W.

    2003-01-01

    The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned - standards integration system. The Program maintains a "one stop-shop" Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to both users within the nasa.gov Domain and the Public Domain.

  5. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
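    The metric's form, a weighted linear combination of chroma variability, luminance contrast, and mean saturation, can be sketched as below. Treat the details as assumptions: pixels are taken to be already in CIELab, contrast is simplified to the luminance range, per-pixel saturation is simplified to chroma divided by luminance, and the example pixel values are invented; only the linear-combination structure comes from the abstract.

    ```python
    import math

    # Illustrative positive weights for the three terms (assumed, not fitted here):
    C1, C2, C3 = 0.4680, 0.2745, 0.2576

    def uciqe_like(L, a, b):
        """UCIQE-style score from CIELab channels given as flat lists:
        c1 * std(chroma) + c2 * luminance contrast + c3 * mean saturation."""
        chroma = [math.hypot(ai, bi) for ai, bi in zip(a, b)]
        mu_c = sum(chroma) / len(chroma)
        sigma_c = math.sqrt(sum((c - mu_c) ** 2 for c in chroma) / len(chroma))
        con_l = max(L) - min(L)                         # simplified contrast
        mu_s = sum(c / l for c, l in zip(chroma, L) if l > 0) / len(L)
        return C1 * sigma_c + C2 * con_l + C3 * mu_s

    # A flat, low-chroma patch vs. a more varied, colorful one (made-up values):
    dull = uciqe_like([50, 52, 51, 50], [2, 2, 3, 2], [1, 1, 2, 1])
    vivid = uciqe_like([30, 70, 45, 60], [10, 40, 25, 5], [5, 30, 15, 8])
    print(vivid > dull)  # True
    ```

    With positive weights, images suffering the degradations the metric targets (uniform color cast, blurring, low contrast) score lower on every term.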

  6. Enterprise Sustainment Metrics

    DTIC Science & Technology

    2015-06-19


  7. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.

  8. Sharp metric obstructions for quasi-Einstein metrics

    NASA Astrophysics Data System (ADS)

    Case, Jeffrey S.

    2013-02-01

    Using the tractor calculus to study smooth metric measure spaces, we adapt results of Gover and Nurowski to give sharp metric obstructions to the existence of quasi-Einstein metrics on suitably generic manifolds. We do this by introducing an analogue of the Weyl tractor W to the setting of smooth metric measure spaces. The obstructions we obtain can be realized as tensorial invariants which are polynomial in the Riemann curvature tensor and its divergence. By taking suitable limits of their tensorial forms, we then find obstructions to the existence of static potentials, generalizing to higher dimensions a result of Bartnik and Tod, and to the existence of potentials for gradient Ricci solitons.

  9. Metric Madness

    ERIC Educational Resources Information Center

    Kroon, Cindy D.

    2007-01-01

    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  10. Validation of a Quality Management Metric

    DTIC Science & Technology

    2000-09-01

    quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM, applying the QMM scores to provide feedback

  11. Make It Metric.

    ERIC Educational Resources Information Center

    Camilli, Thomas

    Measurement is perhaps the most frequently used form of mathematics. This book presents activities for learning about the metric system designed for upper intermediate and junior high levels. Discussions include: why metrics, history of metrics, changing to a metric world, teaching tips, and formulas. Activities presented are: metrics all around…

  12. Implicit Contractive Mappings in Modular Metric and Fuzzy Metric Spaces

    PubMed Central

    Hussain, N.; Salimi, P.

    2014-01-01

    The notion of modular metric spaces, a natural generalization of classical modulars over linear spaces such as Lebesgue, Orlicz, Musielak-Orlicz, Lorentz, Orlicz-Lorentz, and Calderon-Lozanovskii spaces, was recently introduced. In this paper we investigate the existence of fixed points of generalized α-admissible modular contractive mappings in modular metric spaces. As applications, we derive some new fixed point theorems in partially ordered modular metric spaces, Suzuki-type fixed point theorems in modular metric spaces, and new fixed point theorems for integral contractions. In the last section, we develop an important relation between fuzzy metrics and modular metrics and deduce certain new fixed point results in triangular fuzzy metric spaces. Moreover, some examples are provided to illustrate the usability of the obtained results. PMID:25003157

  13. Automated grading of lumbar disc degeneration via supervised distance metric learning

    NASA Astrophysics Data System (ADS)

    He, Xiaoxu; Landis, Mark; Leung, Stephanie; Warrington, James; Shmuilovich, Olga; Li, Shuo

    2017-03-01

    Lumbar disc degeneration (LDD) is a common age-associated condition related to low back pain, and its consequences are responsible for over 90% of spine surgical procedures. In clinical practice, grading of LDD by inspecting MRI is a necessary step in making a suitable treatment plan. This step relies purely on physicians' manual inspection, making it tedious and inefficient. An automated method for grading of LDD is therefore highly desirable. However, the technical implementation faces a big challenge from class ambiguity, which is typical in medical image classification problems with a large number of classes. This challenge derives from the complexity and diversity of medical images, which lead to serious class overlapping and make discriminating different classes difficult. To solve this problem, we propose an automated grading approach based on supervised distance metric learning that classifies input discs into four class labels (0: normal, 1: slight, 2: marked, 3: severe). By learning distance metrics from labeled instances, an optimal distance metric is modeled with two attractive advantages: (1) it keeps images from the same class close, and (2) it keeps images from different classes far apart. The experiments, performed on 93 subjects, demonstrated the superiority of our method, with an accuracy of 0.9226, sensitivity of 0.9655, specificity of 0.9083, and F-score of 0.8615. With our approach, physicians will be freed from this tediousness and patients will be provided an effective treatment.
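    The four figures reported (accuracy, sensitivity, specificity, F-score) can be obtained from predicted versus true grades; for a four-class problem the last three are commonly computed one-vs-rest against a chosen grade. A sketch with hypothetical grade labels, not the study's data:

    ```python
    def grading_metrics(y_true, y_pred, positive_class):
        """Overall accuracy, plus one-vs-rest sensitivity, specificity,
        and F-score for the chosen grade."""
        tp = fp = fn = tn = 0
        for t, p in zip(y_true, y_pred):
            if t == positive_class and p == positive_class:
                tp += 1
            elif t == positive_class:
                fn += 1
            elif p == positive_class:
                fp += 1
            else:
                tn += 1
        accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        f_score = 2 * tp / (2 * tp + fp + fn)
        return accuracy, sensitivity, specificity, f_score

    # Hypothetical LDD grades: 0 normal, 1 slight, 2 marked, 3 severe
    true_grades = [0, 1, 2, 3, 2, 1, 0, 3]
    pred_grades = [0, 1, 2, 2, 2, 0, 0, 3]
    print(grading_metrics(true_grades, pred_grades, positive_class=2))
    ```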

  14. The use of a checklist improves anaesthesiologists' technical and non-technical performance for simulated malignant hyperthermia management.

    PubMed

    Hardy, Jean-Baptiste; Gouin, Antoine; Damm, Cédric; Compère, Vincent; Veber, Benoît; Dureuil, Bertrand

    2018-02-01

    Anaesthesiologists may occasionally manage life-threatening operating room (OR) emergencies. Managing OR emergencies implies real-time analysis of often complicated situations, prompt medical knowledge retrieval, coordinated teamwork and effective decision making in stressful settings. Checklists are recommended to improve performance and reduce the risk of medical errors. This study aimed to assess the usefulness of the French Society of Anaesthesia and Intensive Care's (SFAR) "Malignant Hyperthermia" (MH) checklist on a simulated episode of MH crisis and management thereof by registered anesthesiologists. Twenty-four anaesthesiologists were allocated to 2 groups (checklist and control). Their technical performance in adherence with the SFAR guidelines was assessed by a 30-point score and their non-technical performance was assessed by the Anaesthetists' Non-Technical Skills (ANTS) score. Every task completion was assessed independently. Data are shown as median (first-third quartiles). Anaesthesiologists in the checklist group had higher technical performance scores (24/30 (21.5-25) vs 18/30 (15.5-19.5), P=0.002) and ANTS scores (56.5/60 (47.5-58) vs 48.5/60 (41-50.5), P=0.024). They administered the complete initial dose of dantrolene (2mg/kg) more quickly (15.7 minutes [13.9-18.3] vs 22.4 minutes [18.6-25]) than the control group (P=0.017). However, anaesthesiologists deemed the usability of the checklist to be perfectible. Registered anaesthesiologists' use of the MH checklist during a simulation session widely improved their adherence to guidelines and non-technical skills. This study strongly suggests the benefit of checklist tools for emergency management. Notwithstanding, better awareness and training for anaesthesiologists could further improve the use of this tool. Copyright © 2017 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  15. Factors that influence the non-technical skills performance of scrub nurses: a prospective study.

    PubMed

    Kang, Evelyn; Massey, Debbie; Gillespie, Brigid M

    2015-12-01

    To identify and describe the factors that influence scrub nurses' non-technical skills performance during the intra-operative phase of surgery. Non-technical skills have been identified as important precursors to errors in the operating room. However, few studies have investigated factors influencing the non-technical skills of scrub nurses. Prospective observational study. Structured observations were performed on a sample of 182 surgical procedures across eight specialities by two trained observers from August 2012-April 2013 at two hospital sites. Participants were purposively selected scrub nurses. Bivariate correlations and a multiple linear regression model were used to identify associations among length of surgery, patient acuity using the American Society of Anesthesiologists classification system, team familiarity, number of occasions scout nurses left the operating room, change of scout nurse and the outcome, the non-technical skills performance of scrub nurses. Patient acuity and team familiarity were the strongest predictors of scrub nurses' non-technical skills performance at hospital site A. There were no correlations between the predictors and the performance of scrub nurses at hospital site B. A dedicated surgical team and patient acuity potentially influence scrub nurses' non-technical skills performance. Familiarity with team members fosters advance planning, thus minimizing the distractions and interruptions that affect scrub nurses' performance. Development of interventions aimed at improving non-technical skills has the potential to make a substantial difference and enhance patient care. © 2015 John Wiley & Sons Ltd.

  16. The Effect of Technical Performance on Patient Outcomes in Surgery: A Systematic Review.

    PubMed

    Fecso, Andras B; Szasz, Peter; Kerezov, Georgi; Grantcharov, Teodor P

    2017-03-01

    Systematic review of the effect of intraoperative technical performance on patient outcomes. The operating room is a high-stakes, high-risk environment. As a result, the quality of surgical interventions affecting patient outcomes has been the subject of discussion and research for years. MEDLINE, EMBASE, PsycINFO, and Cochrane databases were searched. All surgical specialties were eligible for inclusion. Data were reviewed with regard to the methods by which technical performance was measured, what patient outcomes were assessed, and how intraoperative technical performance affected patient outcomes. Quality of evidence was assessed using the Medical Education Research Study Quality Instrument (MERSQI). Of the 12,758 studies initially identified, 24 articles (7775 total participants) were ultimately included in this review. Seventeen studies assessed the performance of the faculty alone, 2 assessed both the faculty and trainees, 1 assessed trainees alone, and in 4 studies, the level of the operating surgeon was not specified. In 18 studies, a performance assessment tool was used. Patient outcomes were evaluated using intraoperative complications, short-term morbidity, long-term morbidity, short-term mortality, and long-term mortality. The average MERSQI score was 11.67 (range 9.5-14.5). Twenty-one studies demonstrated that superior technical performance was related to improved patient outcomes. The results of this systematic review demonstrated that superior technical performance positively affects patient outcomes. Despite this initial evidence, more robust research is needed to directly assess intraoperative technical performance and its effect on postoperative patient outcomes using meaningful assessment instruments and reliable processes.

  17. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-07-08

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Pictures and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. The displays tested were flat-panel liquid crystal displays, ranging from less than 1 year to 10 years of use, built by a wide variety of manufacturers. The mean values for Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray
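    The uniformity figure quoted above can be reproduced from a handful of meter readings. A minimal sketch, assuming the common TG-18-style definition of maximum luminance deviation across measurement points; the authors' exact procedure may differ:

    ```python
    def luminance_uniformity(readings):
        """Percent maximum luminance deviation across measurement points
        (e.g., screen center and four corners) on a uniform test pattern,
        using the common TG-18-style formula:
            200 * (Lmax - Lmin) / (Lmax + Lmin).
        `readings` are luminances in cd/m^2; lower results mean a more
        uniform display."""
        l_min, l_max = min(readings), max(readings)
        return 200.0 * (l_max - l_min) / (l_max + l_min)
    ```

    For example, corner/center readings of 90-110 cd/m2 on a nominally uniform pattern yield a 20% maximum deviation.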

  18. Developing a Common Metric for Evaluating Police Performance in Deadly Force Situations

    DTIC Science & Technology

    2012-08-27

    2005).“Police Inservice Deadly Force Training and Requalification in Washington State.” Law Enforcement Executive Forum, 5(2):67-86. NIJ Metric...OF: EXECUTIVE SUMMARY Background There is a critical lack of scientific evidence about whether deadly force management, accountability and training ...Army Research Office P.O. Box 12211 Research Triangle Park, NC 27709-2211 15. SUBJECT TERMS training metrics development, deadly encounters

  19. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned-based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by engineers from a cross-section of disciplines with decades of experience in problem resolution.

  20. Measuring US Army medical evacuation: Metrics for performance improvement.

    PubMed

    Galvagno, Samuel M; Mabry, Robert L; Maddry, Joseph; Kharod, Chetan U; Walrath, Benjamin D; Powell, Elizabeth; Shackelford, Stacy

    2018-01-01

    The US Army medical evacuation (MEDEVAC) community has maintained a reputation for high levels of success in transporting casualties from the point of injury to definitive care. This work served as a demonstration project to advance a model of quality assurance surveillance and medical direction for prehospital MEDEVAC providers within the Joint Trauma System. A retrospective interrupted time series analysis using prospectively collected data was performed as a process improvement project. Records were reviewed during two distinct periods: 2009 and 2014 to 2015. MEDEVAC records were matched to outcomes data available in the Department of Defense Trauma Registry. Abstracted deidentified data were reviewed for specific outcomes, procedures, and processes of care. Descriptive statistics were applied as appropriate. A total of 1,008 patients were included in this study. Nine quality assurance metrics were assessed. These metrics were: airway management, management of hypoxemia, compliance with a blood transfusion protocol, interventions for hypotensive patients, quality of battlefield analgesia, temperature measurement and interventions, proportion of traumatic brain injury (TBI) patients with hypoxemia and/or hypotension, proportion of traumatic brain injury patients with an appropriate assessment, and proportion of missing data. Overall survival in the subset of patients with outcomes data available in the Department of Defense Trauma Registry was 97.5%. The data analyzed for this study suggest overall high compliance with established tactical combat casualty care guidelines. In the present study, nearly 7% of patients had at least one documented oxygen saturation of less than 90%, and 13% of these patients had no documentation of any intervention for hypoxemia, indicating a need for training focus on airway management for hypoxemia. Advances in battlefield analgesia continued to evolve over the period when data for this study were collected. Given the inherent high

  1. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
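    Several of the motion-related metrics named above (path length, average speed, motion smoothness) have standard definitions in the skills-assessment literature. A minimal sketch in Python, assuming 3-D instrument-tip positions sampled at a fixed interval; the jerk-based smoothness formula is one common choice, not necessarily the TrEndo system's exact computation:

    ```python
    import math

    def motion_metrics(positions, dt):
        """Compute simple motion-analysis metrics from a sequence of 3-D
        instrument-tip positions sampled every `dt` seconds.
        Returns (path_length, average_speed, smoothness)."""
        # Path length: total distance travelled by the instrument tip.
        path_length = sum(
            math.dist(positions[i], positions[i + 1])
            for i in range(len(positions) - 1)
        )
        duration = dt * (len(positions) - 1)
        average_speed = path_length / duration
        # Motion smoothness: RMS jerk (third derivative of position),
        # estimated by forward finite differences; lower is smoother.
        sq_jerks = []
        for i in range(len(positions) - 3):
            j = [
                (positions[i + 3][k] - 3 * positions[i + 2][k]
                 + 3 * positions[i + 1][k] - positions[i][k]) / dt ** 3
                for k in range(3)
            ]
            sq_jerks.append(sum(c * c for c in j))
        smoothness = math.sqrt(sum(sq_jerks) / len(sq_jerks)) if sq_jerks else 0.0
        return path_length, average_speed, smoothness
    ```

    A perfectly straight, constant-speed trajectory yields zero jerk, illustrating why smoothness discriminates hesitant from fluid movements.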

  2. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  3. Performance of technical trading rules: evidence from Southeast Asian stock markets.

    PubMed

    Tharavanij, Piyapas; Siraprapasiri, Vasan; Rajchamaha, Kittichai

    2015-01-01

    This paper examines the profitability of technical trading rules in five Southeast Asian stock markets. The data cover a period of 14 years, from January 2000 to December 2013. The instruments investigated are five Southeast Asian stock market indices: SET index (Thailand), FTSE Bursa Malaysia KLC index (Malaysia), FTSE Straits Times index (Singapore), JSX Composite index (Indonesia), and PSE composite index (the Philippines). Trading strategies investigated include Relative Strength Index, Stochastic oscillator, Moving Average Convergence-Divergence, Directional Movement Indicator and On Balance Volume. Performance is compared to a simple buy-and-hold strategy. Statistical tests are also performed. Our empirical results show a strong performance of technical trading rules in the emerging stock market of Thailand but not in the more mature stock market of Singapore. The technical trading rules also generate statistically significant returns in the Malaysian, Indonesian and Philippine markets. However, after taking transaction costs into account, most technical trading rules do not generate net returns. This fact suggests different levels of market efficiency among Southeast Asian stock markets. This paper finds three new insights. Firstly, technical indicators do not help much in terms of market timing. Basically, traders cannot expect to buy at a relatively low price and sell at a relatively high price by just using technical trading rules. Secondly, technical trading rules can be beneficial to individual investors as they help counter the behavioral bias called the disposition effect, which is the tendency to sell winning stocks too soon and hold on to losing stocks too long. Thirdly, even profitable strategies could not reliably predict subsequent market directions. They make money from having a higher average profit from profitable trades than an average loss from unprofitable ones.
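    Of the indicators tested, the Relative Strength Index has a compact standard definition (Wilder's smoothing). A minimal illustrative sketch, not the authors' code:

    ```python
    def rsi(prices, period=14):
        """Relative Strength Index using Wilder's smoothing.
        Returns the RSI (0-100) for the most recent price in `prices`."""
        gains, losses = [], []
        for prev, curr in zip(prices, prices[1:]):
            change = curr - prev
            gains.append(max(change, 0.0))
            losses.append(max(-change, 0.0))
        # Seed the averages with a simple mean over the first `period`
        # changes, then apply Wilder's exponential smoothing to the rest.
        avg_gain = sum(gains[:period]) / period
        avg_loss = sum(losses[:period]) / period
        for g, l in zip(gains[period:], losses[period:]):
            avg_gain = (avg_gain * (period - 1) + g) / period
            avg_loss = (avg_loss * (period - 1) + l) / period
        if avg_loss == 0:
            return 100.0
        rs = avg_gain / avg_loss
        return 100.0 - 100.0 / (1.0 + rs)
    ```

    A common rule treats RSI below 30 as an oversold (buy) signal and above 70 as overbought (sell); as the abstract notes, such signals rarely survive transaction costs.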

  4. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ...we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess...individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'

  5. Software Quality Metrics Enhancements. Volume 1

    DTIC Science & Technology

    1980-04-01

    the mathematical relationships which relate metrics to ratings of the various quality factors) for factors which were not validated previously were...function, provides a mathematical relationship between the metrics and the quality factors. (3) Validation of these normalization functions was performed by...samples, further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date (3.3.3)

  6. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and multi-rate video generator is established. Based on this platform, a PSNR model, SSIM model and Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR model and SSIM model mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of these quality metrics can be observed.
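    The three performance metrics named above have standard definitions: SROCC is a rank correlation (monotonicity), PLCC a linear correlation (linearity), and RMSE an error magnitude (accuracy). A dependency-free sketch comparing predicted against subjective MOS; `evaluate_qoe_model` is an illustrative name, not the paper's API:

    ```python
    import math

    def _pearson(x, y):
        """Pearson linear correlation coefficient."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def _ranks(x):
        """1-based ranks, averaging ranks over ties."""
        order = sorted(range(len(x)), key=lambda i: x[i])
        ranks = [0.0] * len(x)
        i = 0
        while i < len(x):
            j = i
            while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def evaluate_qoe_model(predicted, subjective):
        """Return (SROCC, PLCC, RMSE) between predicted and subjective MOS."""
        srocc = _pearson(_ranks(predicted), _ranks(subjective))  # monotonicity
        plcc = _pearson(predicted, subjective)                   # linearity
        rmse = math.sqrt(sum((p - s) ** 2
                             for p, s in zip(predicted, subjective))
                         / len(predicted))                       # accuracy
        return srocc, plcc, rmse
    ```

    In practice the same numbers are usually obtained with `scipy.stats.spearmanr` and `pearsonr`; the pure-Python version above just makes the definitions explicit.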

  7. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large
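    Of the stand-alone quality metrics compared, conductance has a particularly compact definition: the weight of edges crossing the cluster boundary divided by the smaller of the two sides' edge volumes. A minimal sketch under that standard definition for an unweighted, undirected graph (illustrative, not the study's code):

    ```python
    def conductance(edges, cluster):
        """Conductance of a node set in an undirected graph given as an
        edge list: cut size / min(vol(S), vol(V \\ S)), where a node's
        volume contribution is its degree. Lower values indicate a
        better-separated cluster."""
        cluster = set(cluster)
        cut = 0
        vol_in = 0   # summed degrees of nodes inside the cluster
        vol_out = 0  # summed degrees of nodes outside the cluster
        for u, v in edges:
            u_in, v_in = u in cluster, v in cluster
            # Each endpoint contributes one degree to its side's volume.
            vol_in += u_in + v_in
            vol_out += (not u_in) + (not v_in)
            if u_in != v_in:
                cut += 1
        denom = min(vol_in, vol_out)
        return cut / denom if denom else 0.0
    ```

    For two triangles joined by a single bridge edge, the conductance of either triangle is 1/7: one cut edge over a volume of seven degree-endpoints.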

  8. Center to Advance Palliative Care palliative care clinical care and customer satisfaction metrics consensus recommendations.

    PubMed

    Weissman, David E; Morrison, R Sean; Meier, Diane E

    2010-02-01

    Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.

  9. Critical insights for a sustainability framework to address integrated community water services: Technical metrics and approaches.

    PubMed

    Xue, Xiaobo; Schoen, Mary E; Ma, Xin Cissy; Hawkins, Troy R; Ashbolt, Nicholas J; Cashdollar, Jennifer; Garland, Jay

    2015-06-15

    Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been applied to elements of these water services (i.e. water resources, drinking water, stormwater or wastewater treatment alone), we argue for the importance of developing and combining the system-based tools and metrics in order to holistically evaluate the complete water service system based on the concept of integrated resource management. We analyzed the strengths and weaknesses of key system-based tools and metrics, and discuss future directions to identify more sustainable municipal water services. Such efforts may include the need for novel metrics that address system adaptability to future changes and infrastructure robustness. Caution is also necessary when coupling fundamentally different tools so as to avoid misunderstanding and, consequently, misleading decision-making. Published by Elsevier Ltd.

  10. The Vehicle Integrated Performance Analysis Experience: Reconnecting With Technical Integration

    NASA Technical Reports Server (NTRS)

    McGhee, D. S.

    2006-01-01

    Very early in the Space Launch Initiative program, a small team of engineers at MSFC proposed a process for performing system-level assessments of a launch vehicle. Aimed primarily at providing insight and making NASA a smart buyer, the Vehicle Integrated Performance Analysis (VIPA) team was created. The difference between the VIPA effort and previous integration attempts is that VIPA is a process that uses experienced people from various disciplines and focuses them on a technically integrated assessment. The foundations of VIPA's process are described. The VIPA team also recognized the need to target early detailed analysis toward identifying significant systems issues. This process is driven by the T-model for technical integration. VIPA's approach to performing system-level technical integration is discussed in detail. The VIPA process significantly enhances the development and monitoring of realizable project requirements. VIPA's assessment validates the concept's stated performance, identifies significant issues either with the concept or the requirements, and then reintegrates these issues to determine impacts. This process is discussed along with a description of how it may be integrated into a program's insight and review process. The VIPA process has gained favor with both engineering and project organizations for being responsive and insightful.

  11. Technical and tactical skills related to performance levels in tennis: A systematic review.

    PubMed

    Kolman, Nikki S; Kramer, Tamara; Elferink-Gemser, Marije T; Huijgen, Barbara C H; Visscher, Chris

    2018-06-11

    The aim of this systematic review is to provide an overview of outcome measures and instruments identified in the literature for examining technical and tactical skills in tennis related to performance levels. Such instruments can be used to identify talent or the specific skill development training needs of particular players. Searches for this review were conducted using the PubMed, Web of Science, and PsycInfo databases. Out of 733 publications identified through these searches, 40 articles were considered relevant and included in this study. They were divided into three categories: (1) technical skills, (2) tactical skills and (3) integrated technical and tactical skills. There was strong evidence that technical skills (ball velocity and to a lesser extent ball accuracy) and tactical skills (decision making, anticipation, tactical knowledge and visual search strategies) differed among players according to their performance levels. However, integrated measurement of these skills is required, because winning a point largely hinges on a tactical decision to perform a particular stroke (i.e., technical execution). Therefore, future research should focus on examining the relationship between these skills and tennis performance and on the development of integrated methods for measuring these skills.

  12. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    DOT National Transportation Integrated Search

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  13. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  14. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  15. Tide or Tsunami? The Impact of Metrics on Scholarly Research

    ERIC Educational Resources Information Center

    Bonnell, Andrew G.

    2016-01-01

    Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…

  16. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  17. Correlation of Admission Metrics with Eventual Success in Mathematics Academic Performance of Freshmen in AMAIUB's Business Curricula

    ERIC Educational Resources Information Center

    Calucag, Lina S.; Talisic, Geraldo C.; Caday, Aileen B.

    2016-01-01

    This is a correlational study research design, which aimed to determine the correlation of admission metrics with eventual success in mathematics academic performance of the admitted 177 first year students of Bachelor of Science in Business Informatics and 59 first year students of Bachelor of Science in International Studies. Using Pearson's…

  18. Performance and Scalability of Discriminative Metrics for Comparative Gene Identification in 12 Drosophila Genomes

    PubMed Central

    Lin, Michael F.; Deoras, Ameya N.; Rasmussen, Matthew D.; Kellis, Manolis

    2008-01-01

    Comparative genomics of multiple related species is a powerful methodology for the discovery of functional genomic elements, and its power should increase with the number of species compared. Here, we use 12 Drosophila genomes to study the power of comparative genomics metrics to distinguish between protein-coding and non-coding regions. First, we study the relative power of different comparative metrics and their relationship to single-species metrics. We find that even relatively simple multi-species metrics robustly outperform advanced single-species metrics, especially for shorter exons (≤240 nt), which are common in animal genomes. Moreover, the two capture largely independent features of protein-coding genes, with different sensitivity/specificity trade-offs, such that their combinations lead to even greater discriminatory power. In addition, we study how discovery power scales with the number and phylogenetic distance of the genomes compared. We find that species at a broad range of distances are comparably effective informants for pairwise comparative gene identification, but that these are surpassed by multi-species comparisons at similar evolutionary divergence. In particular, while pairwise discovery power plateaued at larger distances and never outperformed the most advanced single-species metrics, multi-species comparisons continued to benefit even from the most distant species with no apparent saturation. Last, we find that genes in functional categories typically considered fast-evolving can nonetheless be recovered at very high rates using comparative methods. Our results have implications for comparative genomics analyses in any species, including the human. PMID:18421375

  19. Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.

    2003-01-01

    Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining the control spillover that was observed in a previous experiment.

  20. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  1. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  2. Weather-Corrected Performance Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierauf, T.; Growitz, A.; Kurtz, S.

    Photovoltaic (PV) system performance depends on both the quality of the system and the weather. One simple way to communicate the system performance is to use the performance ratio (PR): the ratio of the electricity generated to the electricity that would have been generated if the plant consistently converted sunlight to electricity at the level expected from the DC nameplate rating. The annual system yield for flat-plate PV systems is estimated by the product of the annual insolation in the plane of the array, the nameplate rating of the system, and the PR, which provides an attractive way to estimate expected annual system yield. Unfortunately, the PR is, again, a function of both the PV system efficiency and the weather. If the PR is measured during the winter or during the summer, substantially different values may be obtained, making this metric insufficient to use as the basis for a performance guarantee when precise confidence intervals are required. This technical report defines a way to modify the PR calculation to neutralize biases that may be introduced by variations in the weather, while still reporting a PR that reflects the annual PR at that site given the project design and the project weather file. This resulting weather-corrected PR gives more consistent results throughout the year, enabling its use as a metric for performance guarantees while still retaining the familiarity this metric brings to the industry and the value of its use in predicting actual annual system yield. A testing protocol is also presented to illustrate the use of this new metric with the intent of providing a reference starting point for contractual content.
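    The two quantities the report contrasts can be sketched in a few lines of Python. The formulas follow the general structure described above, but the variable names, the temperature coefficient, and the sample numbers are illustrative assumptions, not values from the report.

```python
G_STC = 1000.0  # irradiance at standard test conditions, W/m^2

def performance_ratio(energy_kwh, poa_wh_per_m2, p_nameplate_kw):
    """Plain PR: measured energy over the energy expected from the
    nameplate rating and the measured plane-of-array insolation."""
    expected_kwh = p_nameplate_kw * poa_wh_per_m2 / G_STC
    return energy_kwh / expected_kwh

def weather_corrected_pr(samples, p_nameplate_kw, gamma=-0.004, t_cell_avg=45.0):
    """Temperature-corrected PR over hourly (energy_kwh, poa_w_m2, t_cell_c) samples.

    gamma:      module power temperature coefficient (1/degC), assumed value
    t_cell_avg: annual-average cell temperature used as the reference
    """
    measured = sum(e for e, _, _ in samples)
    expected = sum(
        p_nameplate_kw * (g / G_STC) * (1.0 + gamma * (t - t_cell_avg))
        for _, g, t in samples
    )
    return measured / expected

# Two hypothetical hourly samples for a 1-kW array:
samples = [(0.8, 900.0, 55.0), (0.7, 800.0, 50.0)]
print(round(weather_corrected_pr(samples, 1.0), 3))  # 0.91
```

    The correction scales each sample's expected energy by the cell-temperature deviation from the annual average, so summer and winter measurements land closer to the same annual figure.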

  3. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics has been lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
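    Of the stand-alone quality metrics compared above, modularity is the most compact to state. The following is a minimal pure-Python sketch of Newman modularity for an undirected, unweighted graph; it is not the authors' code, and the toy graph is hypothetical.

```python
from collections import Counter

def modularity(edges, partition):
    """Newman modularity Q of a node -> community assignment
    for an undirected, unweighted graph given as an edge list."""
    m = len(edges)
    # observed fraction of edges falling inside a community...
    q = sum(1.0 / m for u, v in edges if partition[u] == partition[v])
    # ...minus the expected fraction under the configuration model
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    comm_deg = Counter()
    for node, d in deg.items():
        comm_deg[partition[node]] += d
    q -= sum((d / (2.0 * m)) ** 2 for d in comm_deg.values())
    return q

# Two triangles joined by a single bridge edge:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, partition), 4))  # 0.3571
```

    A high Q like this says nothing about information recovery against a ground-truth labeling, which is exactly the gap between the two metric families the paper measures.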

  4. Mental Fatigue: Impairment of Technical Performance in Small-Sided Soccer Games.

    PubMed

    Badin, Oliver O; Smith, Mitchell R; Conte, Daniele; Coutts, Aaron J

    2016-11-01

    To assess the effects of mental fatigue on physical and technical performance in small-sided soccer games. Twenty soccer players (age 17.8 ± 1.0 y, height 179 ± 5 cm, body mass 72.4 ± 6.8 kg, playing experience 8.3 ± 1.4 y) from an Australian National Premier League soccer club volunteered to participate in this randomized crossover investigation. Participants played 15-min 5-vs-5 small-sided games (SSGs) without goalkeepers on 2 occasions separated by 1 wk. Before the SSG, 1 team watched a 30-min emotionally neutral documentary (control), while the other performed 30 min of a computer-based Stroop task (mental fatigue). Subjective ratings of mental and physical fatigue were recorded before and after treatment and after the SSG. Motivation was assessed before treatment and SSG; mental effort was assessed after treatment and SSG. Player activity profiles and heart rate (HR) were measured throughout the SSG, whereas ratings of perceived exertion (RPEs) were recorded before the SSG and immediately after each half. Video recordings of the SSG allowed for notational analysis of technical variables. Subjective ratings of mental fatigue and effort were higher after the Stroop task, whereas motivation for the upcoming SSG was similar between conditions. HR during the SSG was possibly higher in the control condition, whereas RPE was likely higher in the mental-fatigue condition. Mental fatigue had an unclear effect on most physical-performance variables but impaired most technical-performance variables. Mental fatigue impairs technical but not physical performance in small-sided soccer games.

  5. The LSST metrics analysis framework (MAF)

    NASA Astrophysics Data System (ADS)

    Jones, R. L.; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Željko; Krughoff, K. S.; Petry, Catherine; Ridgway, Stephen T.

    2014-07-01

    We describe the Metrics Analysis Framework (MAF), an open-source python framework developed to provide a user-friendly, customizable, easily-extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance; however, MAF can be applied to a much wider range of data sets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (subdividing the overall data set into smaller data slices as relevant for each Metric), and Database classes (to access the data set and read data into memory). We describe how these building blocks work together, and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
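    The Metric/Slicer composition can be sketched schematically. The class and column names below are hypothetical stand-ins, not the actual MAF API; they only illustrate how a Metric reduces each data slice that a Slicer produces.

```python
# Schematic pseudo-framework illustrating the Metric/Slicer pattern.
# These are NOT MAF's real class names or signatures.

class MeanAirmassMetric:
    """A 'Metric': reduces one slice of observations to a single number."""
    def run(self, data_slice):
        vals = [row["airmass"] for row in data_slice]
        return sum(vals) / len(vals)

class FieldSlicer:
    """A 'Slicer': subdivides the full data set, here by sky-field ID."""
    def slice(self, data):
        fields = {}
        for row in data:
            fields.setdefault(row["field"], []).append(row)
        return list(fields.values())

def run_metric(data, slicer, metric):
    """Apply the metric to every slice the slicer produces."""
    return [metric.run(s) for s in slicer.slice(data)]

observations = [
    {"field": 1, "airmass": 1.2},
    {"field": 1, "airmass": 1.4},
    {"field": 2, "airmass": 1.0},
]
print([round(v, 2) for v in run_metric(observations, FieldSlicer(), MeanAirmassMetric())])  # [1.3, 1.0]
```

    Because the two roles are decoupled, a user-written Metric runs unchanged over any Slicer, which is the extensibility property the framework advertises.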

  6. Long Term Performance Metrics of the GD SDR on the SCaN Testbed: The First Year on the ISS

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Wilson, Molly C.

    2014-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCaN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SCaN Testbed was installed on the ISS in August of 2012. After installation, the initial checkout and commissioning phases were completed and experimental operations commenced. One goal of the SCaN Testbed is to collect long-term performance metrics for SDRs operating in space in order to demonstrate long-term reliability. These metrics include the time the SDR is powered on, the time the power amplifier (PA) is powered on, temperature trends, error detection and correction (EDAC) behavior, and waveform operational usage time. This paper describes the performance of the GD SDR over the first year of operations on the ISS.

  7. Technical and physical determinants of soccer match-play performance in elite youth soccer players.

    PubMed

    Rowat, Owain; Fenner, Jonathan; Unnithan, Viswanath

    2017-04-01

    The aim of this study was to evaluate whether physical performance characteristics could be a better predictor than technical skills in determining the technical level of county soccer players in a match situation. With institutional ethics approval, 25 male youth soccer players aged 16-18.5 years from a professional soccer academy in South East Asia were selected, and height and body mass were recorded. Players were tested for sexual maturity (pubertal development scale [PDS] self-assessment), aerobic capacity (yo-yo intermittent recovery test level 1 [YYIR1]), repeated sprint ability (7 x 35 m sprints), acceleration (15 m sprint), and four soccer skill tests (dribble with pass, dribbling speed, passing and shooting accuracy). Players' technical ability during match play was assessed in small-sided games of soccer (5 v 5) using a novel game technical scoring chart (completed by coaches to assess technical performance in a match situation) developed from criteria (e.g., first touch, dribbling and two-footedness) used by youth soccer coaches for talent identification. A Spearman's rank correlation showed the YYIR1 test and 15 m sprint test were limited in predicting technical match performance (r=0.03, P=0.88; r=-0.23, P=0.32, respectively). A Pearson product moment correlation showed that the repeated sprint test was also limited in predicting technical match performance (r=-0.34, P=0.14). A dribbling skill with a pass was found to be the best determinant of a player's technical ability in a match (r=-0.57, P=0.00). Talent identification and selection programs in Asian youth soccer should include a dribbling skill performed with a pass.

  8. Analytical performance evaluation of a high-volume hematology laboratory utilizing sigma metrics as standard of excellence.

    PubMed

    Shaikh, M S; Moiz, B

    2016-04-01

    Around two-thirds of important clinical decisions about the management of patients are based on laboratory test results. Clinical laboratories are required to adopt quality control (QC) measures to ensure provision of accurate and precise results. Six sigma is a statistical tool which provides an opportunity to assess performance at the highest level of excellence. The purpose of this study was to assess the performance of our hematological parameters on the sigma scale in order to identify gaps, and hence areas of improvement, in patient care. The twelve analytes included in the study were hemoglobin (Hb), hematocrit (Hct), red blood cell count (RBC), mean corpuscular volume (MCV), red cell distribution width (RDW), total leukocyte count (TLC) with percentages of neutrophils (Neutr%) and lymphocytes (Lymph%), platelet count (Plt), mean platelet volume (MPV), prothrombin time (PT), and fibrinogen (Fbg). Internal quality control data and external quality assurance survey results were utilized for the calculation of sigma metrics for each analyte. An acceptable sigma value of ≥3 was obtained for the majority of the analytes included in the analysis. MCV, Plt, and Fbg achieved values of <3 for the level 1 (low abnormal) control. PT performed poorly on both level 1 and level 2 controls, with sigma values of <3. Despite acceptable conventional QC tools, application of sigma metrics can identify analytical deficits and hence prospects for improvement in clinical laboratories. © 2016 John Wiley & Sons Ltd.
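    The sigma metric itself is a one-line calculation combining the total allowable error (TEa) with the observed bias and imprecision (CV). A minimal sketch, using hypothetical hemoglobin QC figures rather than the study's data:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six-sigma metric: (total allowable error - |bias|) / imprecision.

    tea_pct:  total allowable error (%), e.g. from CLIA limits
    bias_pct: observed bias (%) from external quality assurance
    cv_pct:   observed coefficient of variation (%) from internal QC
    """
    if cv_pct <= 0:
        raise ValueError("CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical hemoglobin figures (not taken from the study):
print(sigma_metric(tea_pct=7.0, bias_pct=1.0, cv_pct=1.5))  # 4.0 -> above the acceptable threshold of 3
```

    In the study's terms, an analyte/control level scoring below 3 on this scale flags an analytical deficit even if conventional QC rules pass.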

  9. Technical Performance as a Predictor of Clinical Outcomes in Laparoscopic Gastric Cancer Surgery.

    PubMed

    Fecso, Andras B; Bhatti, Junaid A; Stotland, Peter K; Quereshy, Fayez A; Grantcharov, Teodor P

    2018-03-23

    The purpose of this study was to evaluate the relationship between technical performance and patient outcomes in laparoscopic gastric cancer surgery. Laparoscopic gastrectomy for cancer is an advanced procedure with a high rate of postoperative morbidity and mortality. Many variables, including patient, disease, and perioperative management factors, have been shown to impact postoperative outcomes; however, the role of surgical performance is insufficiently investigated. A retrospective review was performed for all patients who had undergone laparoscopic gastrectomy for cancer at 3 teaching institutions between 2009 and 2015. Patients with an available, unedited video recording of their procedure were included in the study. Video files were rated for technical performance using the Objective Structured Assessment of Technical Skills (OSATS) and Generic Error Rating Tool instruments. The main outcome variable was major short-term complications. The effect of technical performance on patient outcomes was assessed using logistic regression analysis with a backward selection strategy. Sixty-one patients with available video recordings were included in the study. The overall complication rate was 29.5%. The mean Charlson comorbidity index, type of procedure, and the global OSATS score were included in the final predictive model. A lower performance score (OSATS ≤29) remained an independent predictor of major short-term outcomes (odds ratio 6.49) while adjusting for comorbidities and type of procedure. Intraoperative technical performance predicts major short-term outcomes in laparoscopic gastrectomy for cancer. Ongoing assessment and enhancement of surgical skills using modern, evidence-based strategies might improve short-term patient outcomes. Future work should focus on developing and studying the effectiveness of such interventions in laparoscopic gastric cancer surgery.

  10. Synchronization of multi-agent systems with metric-topological interactions.

    PubMed

    Wang, Lin; Chen, Guanrong

    2016-09-01

    A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all the agents within a circle of a constant radius, and can furthermore interact with some distant agents to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions, imposed only on system parameters and agent initial states, are presented which ensure that the whole group of agents achieves synchronization. The analysis reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed is investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. In practically all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving the highest frequency of synchronization, the fastest convergence rate, and the smallest heading difference.
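    The hybrid neighbor rule can be stated operationally: take every agent within the metric radius, then top up with the nearest more-distant agents until a pre-assigned minimum neighbor count is reached. The following Python sketch is an illustration of that rule, not the authors' implementation; the agent positions are hypothetical.

```python
import math

def hybrid_neighbors(positions, i, radius, k_min):
    """Neighbors of agent i under the hybrid metric-topological rule:
    all agents within `radius` (metric part), topped up with the nearest
    more-distant agents until at least `k_min` neighbors (topological part)."""
    ranked = sorted(
        (math.dist(positions[i], positions[j]), j)
        for j in range(len(positions)) if j != i
    )
    neighbors = [j for d, j in ranked if d <= radius]
    for d, j in ranked:
        if len(neighbors) >= k_min:
            break
        if j not in neighbors:
            neighbors.append(j)  # distant agent recruited to meet the quota
    return neighbors

agents = [(0, 0), (1, 0), (5, 0), (6, 0)]
print(hybrid_neighbors(agents, 0, radius=2.0, k_min=3))  # [1, 2, 3]
print(hybrid_neighbors(agents, 0, radius=2.0, k_min=1))  # [1]
```

    With `k_min=1` the rule degenerates to the metric-only model; with `radius=0` it degenerates to the topological-only model, which is why the hybrid interpolates between the two.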

  11. Limitations of using same-hospital readmission metrics.

    PubMed

    Davies, Sheryl M; Saynina, Olga; McDonald, Kathryn M; Baker, Laurence C

    2013-12-01

    To quantify the limitations associated with restricting readmission metrics to same-hospital only readmission. Using 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file, we identified the proportion of 7-, 15- and 30-day readmissions occurring to the same hospital as the initial admission using All-cause Readmission (ACR) and 3M Corporation Potentially Preventable Readmissions (PPR) Metric. We examined the correlation between performance using same and different hospital readmission, the percent of hospitals remaining in the extreme deciles when utilizing different metrics, agreement in identifying outliers and differences in longitudinal performance. Using logistic regression, we examined the factors associated with admission to the same hospital. 68% of 30-day ACR and 70% of 30-day PPR occurred to the same hospital. Abdominopelvic procedures had higher proportions of same-hospital readmissions (87.4-88.9%), cardiac surgery had lower (72.5-74.9%) and medical DRGs were lower than surgical DRGs (67.1 vs. 71.1%). Correlation and agreement in identifying high- and low-performing hospitals was weak to moderate, except for 7-day metrics where agreement was stronger (r = 0.23-0.80, Kappa = 0.38-0.76). Agreement for within-hospital significant (P < 0.05) longitudinal change was weak (Kappa = 0.05-0.11). Beyond all patient refined-diagnostic related groups, payer was the most predictive factor with Medicare and MediCal patients having a higher likelihood of same-hospital readmission (OR 1.62, 1.73). Same-hospital readmission metrics are limited for all tested applications. Caution should be used when conducting research, quality improvement or comparative applications that do not account for readmissions to other hospitals.

  12. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.

  13. Economic Metrics for Commercial Reusable Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)

    2000-01-01

    The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing

  14. An Evaluation of the IntelliMetric[SM] Essay Scoring System

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine

    2006-01-01

    This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…

  15. A computational imaging target specific detectivity metric

    NASA Astrophysics Data System (ADS)

    Preece, Bradley L.; Nehmetallah, George

    2017-05-01

    Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched-filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. The detectivity metric is therefore designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
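    For independent Gaussian noise, the optimal linear matched-filter SNR reduces to a simple sum; with correlated noise it generalizes to the quadratic form s^T C^-1 s (the Hotelling form mentioned above). A minimal sketch under the independence assumption, with an illustrative toy signature:

```python
from math import sqrt

def matched_filter_snr(signal, noise_var):
    """Optimal linear matched-filter SNR for a known target signature in
    independent Gaussian noise: SNR^2 = sum_i s_i**2 / sigma_i**2.
    (With correlated noise this becomes s^T C^-1 s.)"""
    return sqrt(sum(s * s / v for s, v in zip(signal, noise_var)))

# Toy two-pixel target signature and per-pixel noise variances:
print(matched_filter_snr([3.0, 4.0], [1.0, 1.0]))  # 5.0
```

    Evaluating this SNR in the CI system's computational space, rather than at the detector, is what lets the metric compare otherwise dissimilar CI designs against the same known target.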

  16. Developing a Security Metrics Scorecard for Healthcare Organizations.

    PubMed

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.

  17. Early decision framework for integrating sustainable risk management for complex remediation sites: Drivers, barriers, and performance metrics.

    PubMed

    Harclerode, Melissa A; Macbeth, Tamzen W; Miller, Michael E; Gurr, Christopher J; Myers, Teri S

    2016-12-15

    As the environmental remediation industry matures, remaining sites often have significant underlying technical challenges and financial constraints. More often than not, significant remediation efforts at these "complex" sites have not achieved stringent, promulgated cleanup goals. Decisions then have to be made about whether and how to commit additional resources towards achieving those goals, which are often neither achievable nor required to protect receptors. Guidance on cleanup approaches focused on evaluating and managing site-specific conditions and risks, rather than uniformly meeting contaminant cleanup criteria in all media, is available to aid in this decision. Although these risk-based cleanup approaches, such as alternative endpoints and adaptive management strategies, have been developed, they are under-utilized due to environmental, socio-economic, and risk perception barriers. Also, these approaches are usually implemented late in the project life cycle, after unsuccessful remedial attempts to achieve stringent cleanup criteria. In this article, we address these barriers by developing an early decision framework to identify whether site characteristics support sustainable risk management, and we develop performance metrics and tools to evaluate and implement successful risk-based cleanup approaches. In addition, we address uncertainty and risk perception challenges by aligning risk-based cleanup approaches with the concepts of risk management and sustainable remediation. This approach was developed in the context of lessons learned from implementing remediation at complex sites, but as a framework it can, and should, be applied to all sites undergoing remediation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.

    PubMed

    Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K

    2018-06-15

    The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.

  19. It's A Metric World.

    ERIC Educational Resources Information Center

    Alabama State Dept. of Education, Montgomery. Div. of Instructional Services.

    Topics covered in the first part of this document include eight advantages of the metric system; a summary of metric instruction; the International System of Units (SI) style and usage; metric decimal tables; the metric system; and conversion tables. An alphabetized list of organizations which market metric materials for educators is provided with…

  20. Colonoscopy Quality: Metrics and Implementation

    PubMed Central

    Calderwood, Audrey H.; Jacobson, Brian C.

    2013-01-01

    Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society for Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on "Quality Indicators for Colonoscopy" in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862

  1. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  2. Metric learning for automatic sleep stage classification.

    PubMed

    Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung

    2013-01-01

    We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset under various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, this superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains, without the need for artifact removal as a preprocessing step.
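
    The abstract does not detail the learning algorithm itself; the sketch below (with an invented toy transform standing in for the learned metric, and invented toy features) only illustrates the general idea of k-NN classification under a learned linear metric d(x, y) = ||Lx - Ly||:

```python
import math
from collections import Counter

def apply_transform(L, p):
    """Apply the learned linear transform L (list of rows) to a feature vector."""
    return [sum(a * b for a, b in zip(row, p)) for row in L]

def knn_predict(train, labels, query, L, k=3):
    """k-NN majority vote under the learned metric d(x, y) = ||Lx - Ly||_2."""
    tq = apply_transform(L, query)
    neighbors = sorted(
        (math.dist(apply_transform(L, x), tq), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in neighbors[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D features: feature 0 is noise, feature 1 separates Wake ("W") from Sleep ("S").
train = [[9, 0.00], [1, 0.10], [5, 0.05], [8, 1.00], [2, 0.90], [4, 1.10]]
labels = ["W", "W", "W", "S", "S", "S"]
L_learned = [[0.0, 0.0], [0.0, 1.0]]   # hypothetical learned transform: keep feature 1 only
```

    On this toy data the plain Euclidean metric lets the noisy feature dominate the nearest neighbor, while the learned transform recovers the correct class.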

  3. Intravascular US-Guided Portal Vein Access: Improved Procedural Metrics during TIPS Creation.

    PubMed

    Gipson, Matthew G; Smith, Mitchell T; Durham, Janette D; Brown, Anthony; Johnson, Thor; Ray, Charles E; Gupta, Rajan K; Kondo, Kimi L; Rochon, Paul J; Ryu, Robert K

    2016-08-01

    To evaluate transjugular intrahepatic portosystemic shunt (TIPS) outcomes and procedure metrics with the use of three different image guidance techniques for portal vein (PV) access during TIPS creation. A retrospective review of consecutive patients who underwent TIPS procedures for a range of indications during a 28-month study period identified a population of 68 patients. This was stratified by PV access techniques: fluoroscopic guidance with or without portography (n = 26), PV marker wire guidance (n = 18), or intravascular ultrasound (US) guidance (n = 24). Procedural outcomes and procedural metrics, including radiation exposure, contrast agent volume used, procedure duration, and PV access time, were analyzed. No differences in demographic or procedural characteristics were found among the three groups. Technical success, technical success of the primary planned approach, hemodynamic success, portosystemic gradient, and procedure-related complications were not significantly different among groups. Fluoroscopy time (P = .003), air kerma (P = .01), contrast agent volume (P = .003), and total procedural time (P = .02) were reduced with intravascular US guidance compared with fluoroscopic guidance. Fluoroscopy time (P = .01) and contrast agent volume (P = .02) were reduced with intravascular US guidance compared with marker wire guidance. Intravascular US guidance of PV access during TIPS creation not only facilitates successful TIPS creation in patients with challenging anatomy, as suggested by previous investigations, but also reduces important procedure metrics including radiation exposure, contrast agent volume, and overall procedure duration compared with fluoroscopically guided TIPS creation. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.

  4. Metrics for the NASA Airspace Systems Program

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Neitzke, Kurt W.

    2009-01-01

    This document defines an initial set of metrics for use by the NASA Airspace Systems Program (ASP). ASP consists of the NextGen-Airspace Project and the NextGen-Airportal Project. The work in each project is organized along multiple, discipline-level Research Focus Areas (RFAs). Each RFA is developing future concept elements in support of the Next Generation Air Transportation System (NextGen), as defined by the Joint Planning and Development Office (JPDO). In addition, a single, system-level RFA is responsible for integrating concept elements across RFAs in both projects and for assessing system-wide benefits. The primary purpose of this document is to define a common set of metrics for measuring National Airspace System (NAS) performance before and after the introduction of ASP-developed concepts for NextGen as the system handles increasing traffic. The metrics are directly traceable to NextGen goals and objectives as defined by the JPDO and hence will be used to measure the progress of ASP research toward reaching those goals. The scope of this document is focused on defining a common set of metrics for measuring NAS capacity, efficiency, robustness, and safety at the system-level and at the RFA-level. Use of common metrics will focus ASP research toward achieving system-level performance goals and objectives and enable the discipline-level RFAs to evaluate the impact of their concepts at the system level.

  5. The Round Table on Computer Performance Metrics for Export Control: Discussions and Results

    DTIC Science & Technology

    1997-12-01

    eligibility, use the CTP parameter to the exclusion of other technical parameters for computers classified under ECCN 4A003.a, .b and .c, except of...parameters specified as Missile Technology (MT) concerns or 4A003.e (equipment performing analog-to-digital conversions exceeding the limits in ECCN

  6. The impact of fatigue on the non-technical skills performance of critical care air ambulance clinicians.

    PubMed

    Myers, J A; Powell, D M C; Aldington, S; Sim, D; Psirides, A; Hathaway, K; Haney, M F

    2017-11-01

    The relationship between fatigue-related risk and impaired clinical performance is not entirely clear. Non-technical factors represent an important component of clinical performance and may be sensitive to the effects of fatigue. The hypothesis was that the sum score of overall non-technical performance is degraded by fatigue. Nineteen physicians undertook two different simulated air ambulance missions, once when rested, and once when fatigued (randomised crossover design). Trained assessors blinded to participants' fatigue status performed detailed structured assessments based on expected behaviours in four non-technical skills domains: teamwork, situational awareness, task management, and decision making. Participants also provided self-ratings of their performance. The primary endpoint was the sum score of overall non-technical performance. The main finding, the overall non-technical skills performance rating of the clinicians, was better in rested than fatigued states (mean difference with 95% CI, 2.8 [2.2-3.4]). The findings remained consistent across individual non-technical skills domains; also when controlling for an order effect and examining the impact of a number of possible covariates. There was no difference in self-ratings of clinical performance between rested and fatigued states. Non-technical performance of critical care air transfer clinicians is degraded when they are fatigued. Fatigued clinicians may fail to recognise the degree to which their performance is compromised. These findings represent risk to clinical care quality and patient safety in the dynamic and isolated environment of air ambulance transfer. © 2017 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  8. Metrics for assessing the performance of morphodynamic models of braided rivers at event and reach scales

    NASA Astrophysics Data System (ADS)

    Williams, Richard; Measures, Richard; Hicks, Murray; Brasington, James

    2017-04-01

    Advances in geomatics technologies have transformed the monitoring of reach-scale (100-101 km) river morphodynamics. Hyperscale Digital Elevation Models (DEMs) can now be acquired at temporal intervals that are commensurate with the frequencies of high-flow events that force morphological change. The low vertical errors associated with such DEMs enable DEMs of Difference (DoDs) to be generated to quantify patterns of erosion and deposition, and derive sediment budgets using the morphological approach. In parallel with reach-scale observational advances, high-resolution, two-dimensional, physics-based numerical morphodynamic models are now computationally feasible for unsteady, reach-scale simulations. In light of this observational and predictive progress, there is a need to identify appropriate metrics that can be extracted from DEMs and DoDs to assess model performance. Nowhere is this more pertinent than in braided river environments, where numerous mobile channels that intertwine around mid-channel bars result in complex patterns of erosion and deposition, thus making model assessment particularly challenging. This paper identifies and evaluates a range of morphological and morphological-change metrics that can be used to assess predictions of braided river morphodynamics at the timescale of single storm events. A depth-averaged, mixed-grainsize Delft3D morphodynamic model was used to simulate morphological change during four discrete high-flow events, ranging from 91 to 403 m3s-1, along a 2.5 x 0.7 km reach of the braided, gravel-bed Rees River, New Zealand. Pre- and post-event topographic surveys, using a fusion of Terrestrial Laser Scanning and optical-empirical bathymetric mapping, were used to produce 0.5 m resolution DEMs and DoDs. The pre- and post-event DEMs for a moderate (227m3s-1) high-flow event were used to calibrate the model. DEMs and DoDs from the other three high-flow events were used for model assessment using two approaches. First

  9. A Survey of Health Management User Objectives Related to Diagnostic and Prognostic Metrics

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Kurtoglu, Tolga; Poll, Scott D.

    2010-01-01

    One of the most prominent technical challenges to effective deployment of health management systems is the vast difference in user objectives with respect to engineering development. In this paper, a detailed survey on the objectives of different users of health management systems is presented. These user objectives are then mapped to the metrics typically encountered in the development and testing of two main systems health management functions: diagnosis and prognosis. Using this mapping, the gaps between user goals and the metrics associated with diagnostics and prognostics are identified and presented with a collection of lessons learned from previous studies that include both industrial and military aerospace applications.

  10. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
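
    As a sketch of the idea, two-class Fisher discriminant analysis (a simplification of the multiclass LDA the abstract describes) yields a direction w = Sw^-1 (m_a - m_b); projecting pixels onto w defines a task-specific distance. All data below are invented 2-band toy spectra, not CRISM values:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant direction w = Sw^-1 (m_a - m_b), 2-D features."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    ma, mb = mean(class_a), mean(class_b)
    s = [[0.0, 0.0], [0.0, 0.0]]      # pooled within-class scatter matrix
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

def lda_distance(x, y, w):
    """Distance between two pixels after projection onto the discriminant axis."""
    return abs(w[0] * (x[0] - y[0]) + w[1] * (x[1] - y[1]))

# Band 0 varies randomly within each class; band 1 separates the classes.
class_a = [[0, 0.00], [2, 0.10], [4, -0.10], [6, 0.05]]
class_b = [[1, 1.00], [3, 0.90], [5, 1.10], [7, 1.00]]
```

    Under the learned metric, two same-class pixels that are far apart in Euclidean terms end up closer than a cross-class pair, which is what lets graph-based segmentation respect the classes of interest.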

  11. Building structural similarity database for metric learning

    NASA Astrophysics Data System (ADS)

    Jin, Guoxin; Pappas, Thrasyvoulos N.

    2015-03-01

    We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions, of "highest," "high," and "good" similarity. We design two subjective tests for data collection, the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with the "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests is generated during the MTC encoding process, and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.

  12. Energy retrofit of an office building by substitution of the generation system: performance evaluation via dynamic simulation versus current technical standards

    NASA Astrophysics Data System (ADS)

    Testi, D.; Schito, E.; Menchetti, E.; Grassi, W.

    2014-11-01

    Constructions built in Italy before 1945 (about 30% of the total built stock) feature low energy efficiency. Retrofit actions in this field can lead to valuable energetic and economic savings. In this work, we ran a dynamic simulation of a historical building of the University of Pisa during the heating season. We firstly evaluated the energy requirements of the building and the performance of the existing natural gas boiler, validated with past billings of natural gas. We also verified the energetic savings obtainable by the substitution of the boiler with an air-to-water electrically-driven modulating heat pump, simulated through a cycle-based model, evaluating the main economic metrics. The cycle-based model of the heat pump, validated with manufacturers' data available only at specified temperature and load conditions, can provide more accurate results than the simplified models adopted by current technical standards, thus increasing the effectiveness of energy audits.

  13. Performance Evaluation of Indian Technical Institutions Using PROMETHEE-GAIA Approach

    ERIC Educational Resources Information Center

    Ranjan, Rajeev; Chakraborty, Shankar

    2015-01-01

    It has now become an important issue to evaluate the performance of technical institutions to develop better research and enrich the existing teaching processes. The results of such performance appraisal would serve as a reference point for decisions to choose a particular institution, hire manpower, and provide financial support for the…

  14. Standardised metrics for global surgical surveillance.

    PubMed

    Weiser, Thomas G; Makary, Martin A; Haynes, Alex B; Dziekan, Gerald; Berry, William R; Gawande, Atul A

    2009-09-26

    Public health surveillance relies on standardised metrics to evaluate disease burden and health system performance. Such metrics have not been developed for surgical services despite increasing volume, substantial cost, and high rates of death and disability associated with surgery. The Safe Surgery Saves Lives initiative of WHO's Patient Safety Programme has developed standardised public health metrics for surgical care that are applicable worldwide. We assembled an international panel of experts to develop and define metrics for measuring the magnitude and effect of surgical care in a population, while taking into account economic feasibility and practicability. This panel recommended six measures for assessing surgical services at a national level: number of operating rooms, number of operations, number of accredited surgeons, number of accredited anaesthesia professionals, day-of-surgery death ratio, and postoperative in-hospital death ratio. We assessed the feasibility of gathering such statistics at eight diverse hospitals in eight countries and incorporated them into the WHO Guidelines for Safe Surgery, in which methods for data collection, analysis, and reporting are outlined.

  15. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  16. Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report

    NASA Technical Reports Server (NTRS)

    Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar

    2013-01-01

    This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.

  17. Metric Conversion

    Atmospheric Science Data Center

    2013-03-12

    Metric Weights and Measures The metric system is based on 10s.  For example, 10 millimeters = 1 centimeter, 10 ... Special Publications: NIST Guide to SI Units: Conversion Factors listed ...
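
    Because the system is based on powers of ten, converting between prefixed units is just a decimal shift; a minimal sketch (prefix table abbreviated to common length prefixes):

```python
# Powers of ten for common SI prefixes, relative to the base unit ("" = no prefix).
PREFIX_EXP = {"milli": -3, "centi": -2, "deci": -1, "": 0, "deca": 1, "hecto": 2, "kilo": 3}

def convert(value, src_prefix, dst_prefix):
    """Convert a metric quantity between prefixed units by shifting the decimal point."""
    return value * 10 ** (PREFIX_EXP[src_prefix] - PREFIX_EXP[dst_prefix])
```

    For example, 10 millimeters convert to 1 centimeter, matching the record's example.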

  18. Do PICU patients meet technical criteria for performing indirect calorimetry?

    PubMed

    Beggs, Megan R; Garcia Guerra, Gonzalo; Larsen, Bodil M K

    2016-10-01

    Indirect calorimetry (IC) is considered gold standard for assessing energy needs of critically ill children as predictive equations and clinical status indicators are often unreliable. Accurate assessment of energy requirements in this vulnerable population is essential given the high risk of over or underfeeding and the consequences thereof. The proportion of patients and patient days in pediatric intensive care (PICU) for which energy expenditure (EE) can be measured using IC is currently unknown. In the current study, we aimed to quantify the daily proportion of consecutive PICU patients who met technical criteria to perform indirect calorimetry and describe the technical contraindications when criteria were not met. Prospective, observational, single-centre study conducted in a cardiac and general PICU. All consecutive patients admitted for at least 96 h were included in the study. Variables collected for each patient included age at admission, admission diagnosis, and if technical criteria for indirect calorimetry were met. Technical criteria variables were collected within the same 2 h each morning and include: provision of supplemental oxygen, ventilator settings, endotracheal tube (ETT) leak, diagnosis of chest tube air leak, provision of external gas support (i.e. nitric oxide), and provision of extracorporeal membrane oxygenation (ECMO). 288 patients were included for a total of 3590 patient days between June 2014 and February 2015. The main reasons for admission were: surgery (cardiac and non-cardiac), respiratory distress, trauma, oncology and medicine/other. The median (interquartile range) patient age was 0.7 (0.3-4.6) years. The median length of PICU stay was 7 (5-14) days. Only 34% (95% CI, 32.4-35.5%) of patient days met technical criteria for IC. For patients less than 6 months of age, technical criteria were met on significantly fewer patient days (29%, p < 0.01). Moreover, 27% of patients did not meet technical criteria for IC on any day

  19. Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.

    2013-01-01

    NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based counter-measures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss-of-aircraft state awareness - specifically, the loss of attitude awareness (i.e., Spatial Disorientation, SD) or the loss-of-energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological - are also discussed as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.

  20. Experimental constraints on metric and non-metric theories of gravity

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.

  1. Using Publication Metrics to Highlight Academic Productivity and Research Impact

    PubMed Central

    Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.

    2016-01-01

    This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
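
    The h-index mentioned above has a simple definition: the largest h such that h of an author's papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Walk the citation counts in descending order; position i (1-based) is a
    # candidate h as long as the i-th most-cited paper has >= i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

    For example, an author with papers cited 10, 8, 5, 4, and 3 times has h = 4: four papers with at least four citations each.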

  2. Properties of C-metric spaces

    NASA Astrophysics Data System (ADS)

    Croitoru, Anca; Apreutesei, Gabriela; Mastorakis, Nikos E.

    2017-09-01

    The subject of this paper belongs to the theory of approximate metrics [23]. An approximate metric on X is a real application defined on X × X that satisfies only a part of the metric axioms. In a recent paper [23], we introduced a new type of approximate metric, named C-metric, that is an application which satisfies only two metric axioms: symmetry and triangular inequality. The remarkable fact in a C-metric space is that a topological structure induced by the C-metric can be defined. The innovative idea of this paper is that we obtain some convergence properties of a C-metric space in the absence of a metric. In this paper we investigate C-metric spaces. The paper is divided into four sections. Section 1 is for Introduction. In Section 2 we recall some concepts and preliminary results. In Section 3 we present some properties of C-metric spaces, such as convergence properties, a canonical decomposition and a C-fixed point theorem. Finally, in Section 4 some conclusions are highlighted.
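
    In the notation of the abstract, the definition can be stated compactly as follows (nonnegativity and the identity-of-indiscernibles axiom are deliberately not assumed):

```latex
d \colon X \times X \to \mathbb{R} \ \text{is a C-metric if, for all } x, y, z \in X:
\quad \text{(i)}\ d(x,y) = d(y,x),
\qquad \text{(ii)}\ d(x,z) \le d(x,y) + d(y,z).
```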

  3. Energy-Based Metrics for Arthroscopic Skills Assessment.

    PubMed

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
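
    Leave-one-subject-out cross-validation, as used in the study, holds out every trial of one subject per fold. A schematic sketch with an invented stand-in classifier (nearest class centroid on 1-D features) rather than the paper's SVM/NN models:

```python
from collections import defaultdict

def loso_accuracy(samples, classify):
    """Leave-one-subject-out CV over samples given as (subject_id, feature, label)."""
    subjects = sorted({s for s, _, _ in samples})
    correct = total = 0
    for held_out in subjects:
        # Train on every other subject; test on all trials of the held-out subject.
        train = [(x, y) for s, x, y in samples if s != held_out]
        test = [(x, y) for s, x, y in samples if s == held_out]
        for x, y in test:
            correct += classify(train, x) == y
            total += 1
    return correct / total

def nearest_centroid(train, x):
    """Stand-in learner: predict the label whose class mean is closest to x."""
    sums = defaultdict(lambda: [0.0, 0])
    for v, y in train:
        sums[y][0] += v
        sums[y][1] += 1
    return min(sums, key=lambda y: abs(sums[y][0] / sums[y][1] - x))

# Invented toy data: one normalized-energy feature per trial, two trials max per subject.
samples = [(1, 1.0, "novice"), (1, 1.2, "novice"), (2, 0.9, "novice"),
           (3, 5.0, "expert"), (3, 5.2, "expert"), (4, 4.8, "expert")]
```

    Grouping folds by subject rather than by trial prevents trials from the same person appearing in both training and test sets, which would inflate the reported accuracy.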

  4. About Using the Metric System.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet contains a brief introduction to the use of the metric system. Topics covered include: (1) what is the metric system; (2) how to think metric; (3) some advantages of the metric system; (4) basics of the metric system; (5) how to measure length, area, volume, mass and temperature the metric way; (6) some simple calculations using…

  5. UMAMI: A Recipe for Generating Meaningful Metrics through Holistic I/O Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockwood, Glenn K.; Yoo, Wucherl; Byna, Suren

    I/O efficiency is essential to productivity in scientific computing, especially as many scientific domains become more data-intensive. Many characterization tools have been used to elucidate specific aspects of parallel I/O performance, but analyzing components of complex I/O subsystems in isolation fails to provide insight into critical questions: how do the I/O components interact, what are reasonable expectations for application performance, and what are the underlying causes of I/O performance problems? To address these questions while capitalizing on existing component-level characterization tools, we propose an approach that combines on-demand, modular synthesis of I/O characterization data into a unified monitoring and metrics interface (UMAMI) to provide a normalized, holistic view of I/O behavior. We evaluate the feasibility of this approach by applying it to a month-long benchmarking study on two distinct large-scale computing platforms. We present three case studies that highlight the importance of analyzing application I/O performance in context with both contemporaneous and historical component metrics, and we provide new insights into the factors affecting I/O performance. By demonstrating the generality of our approach, we lay the groundwork for a production-grade framework for holistic I/O analysis.

  6. Metrics for linear kinematic features in sea ice

    NASA Astrophysics Data System (ADS)

    Levy, G.; Coon, M.; Sulsky, D.

    2006-12-01

The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skill are by and large an adaptation of a least-squares "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against the leads interpreted from RGPS observations.

  7. OrbView-3 Technical Performance Evaluation 2005: Modulation Transfer Function

    NASA Technical Reports Server (NTRS)

    Cole, Aaron

    2007-01-01

The technical performance evaluation of OrbView-3 using the Modulation Transfer Function (MTF) is presented. The contents include: 1) MTF Results and Methodology; 2) Radiometric Calibration Methodology; and 3) Relative Radiometric Assessment Results.

  8. Payload Fuel Energy Efficiency as a Metric for Aviation Environmental Performance

    DOT National Transportation Integrated Search

    2008-09-14

    Aviation provides productivity in the form of transporting passengers and cargo long distances in a shorter period of time than is available via land or sea. Given the recent rise in fuel prices and environmental concerns, a consistent metric is need...

  9. Intra-operative disruptions, surgeon's mental workload, and technical performance in a full-scale simulated procedure.

    PubMed

    Weigl, Matthias; Stefan, Philipp; Abhari, Kamyar; Wucherer, Patrick; Fallavollita, Pascal; Lazarovici, Marc; Weidert, Simon; Euler, Ekkehard; Catchpole, Ken

    2016-02-01

Surgical flow disruptions occur frequently and jeopardize perioperative care and surgical performance. So far, insights into the subjective and cognitive implications of intra-operative disruptions for surgeons, and the inherent consequences for performance, are inconsistent. This study aimed to investigate the effect of surgical flow disruptions on surgeons' intra-operative workload and technical performance. In a full-scale OR simulation, 19 surgeons were randomly allocated to one of two disruption scenarios (telephone call vs. patient discomfort). Using a mixed virtual reality simulator with a computerized, high-fidelity mannequin, all surgeons were trained in performing a vertebroplasty procedure and subsequently performed such a procedure under experimental conditions. Standardized measures of subjective workload and technical performance (deviation of trocar positioning from an expert-defined standard; number and duration of X-ray acquisitions) were collected. Intra-operative workload during simulated disruption scenarios was significantly higher compared to training sessions (p < .01). Surgeons in the telephone call scenario experienced significantly more distraction than their colleagues in the patient discomfort scenario (p < .05). However, workload tended to be higher in surgeons who coped with distractions due to patient discomfort. Technical performance was not significantly different between the two disruption scenarios. We found a significant association between surgeons' intra-operative workload and technical performance, such that surgeons with increased mental workload tended to perform worse (β = .55, p = .04). Surgical flow disruptions affect surgeons' intra-operative workload. Increased mental workload was associated with inferior technical performance. Our simulation-based findings emphasize the need to establish smooth surgical flow, characterized by a low level of process deviations and disruptions.

  10. Tracking occupational hearing loss across global industries: A comparative analysis of metrics

    PubMed Central

    Rabinowitz, Peter M.; Galusha, Deron; McTague, Michael F.; Slade, Martin D.; Wesdock, James C.; Dixon-Ernst, Christine

    2013-01-01

Occupational hearing loss is one of the most prevalent occupational conditions; yet, there is no acknowledged international metric to allow comparisons of risk between different industries and regions. In order to make recommendations for an international standard of occupational hearing loss, members of an international industry group (the International Aluminium Association) submitted details of the different hearing loss metrics currently in use by members. We compared the performance of these metrics using an audiometric data set for over 6000 individuals working in 10 locations of one member company. We calculated rates for each metric at each location from 2002 to 2006. For comparison, we calculated the difference of observed–expected (for age) binaural high-frequency hearing loss (in dB/year) for each location over the same time period. We performed linear regression to determine the correlation between each metric and the observed–expected rate of hearing loss. The different metrics produced discrepant results, with annual rates ranging from 0.0% for a less-sensitive metric to more than 10% for a highly sensitive metric. At least two metrics, a 10 dB age-corrected threshold shift from baseline and a 15 dB non-age-corrected shift metric, correlated well with the difference of observed–expected high-frequency hearing loss. This study suggests that it is feasible to develop an international standard for tracking occupational hearing loss in industrial working populations. PMID:22387709

  11. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  13. Sigma metric analysis for performance of creatinine with fresh frozen serum.

    PubMed

    Kang, Fengfeng; Zhang, Chuanbao; Wang, Wei; Wang, Zhiguo

    2016-01-01

Six sigma provides an objective and quantitative methodology for describing laboratory testing performance. In this study, we conducted a national trueness verification scheme with fresh frozen serum (FFS) for serum creatinine to evaluate its performance in China. Two concentration levels of FFS, value-assigned with a reference method, were sent to 98 laboratories in China. Imprecision and bias of the measurement procedure were calculated for each participant to derive the sigma value. Quality goal index (QGI) analysis was used to investigate the reasons for unacceptable performance in laboratories with σ < 3. Our study indicated that the sample with the higher concentration of creatinine yielded preferable sigma values. For the enzymatic method, 7.0% (5/71) to 45.1% (32/71) of the laboratories need to improve their measurement procedures (σ < 3); for the Jaffe method, the percentages ranged from 11.5% (3/26) to 73.1% (19/26). QGI analysis suggested that most of the laboratories (62.5% for the enzymatic method and 68.4% for the Jaffe method) should make an effort to improve trueness (QGI > 1.2). Only 3.1-5.3% of the laboratories should improve both precision and trueness. Sigma metric analysis of the serum creatinine assays was disappointing, mainly because of unacceptable analytical bias according to the QGI analysis. Further effort is needed to enhance the trueness of creatinine measurement.
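The abstract does not spell out its formulas, but the sigma metric and QGI it refers to have standard clinical-laboratory definitions; a minimal sketch under that assumption, with allowable total error (TEa), bias, and CV all expressed as percentages:

```python
def sigma_metric(tea, bias, cv):
    """Sigma = (TEa - |bias|) / CV, with TEa, bias, and CV in percent."""
    return (tea - abs(bias)) / cv

def quality_goal_index(bias, cv):
    """Quality goal index: QGI = |bias| / (1.5 * CV).

    Conventional reading: QGI > 1.2 points to a trueness (bias) problem,
    QGI < 0.8 to an imprecision problem, and values in between to both.
    """
    return abs(bias) / (1.5 * cv)

# Hypothetical laboratory: 3% bias, 2% CV, 12% allowable total error.
sigma = sigma_metric(12.0, 3.0, 2.0)     # (12 - 3) / 2 = 4.5
qgi = quality_goal_index(3.0, 2.0)       # 3 / (1.5 * 2) = 1.0
```

With sigma above 3 this hypothetical laboratory would count as acceptable in the study's terms; the QGI of 1.0 would suggest both precision and trueness contribute when sigma is poor.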

  14. Are Current Physical Match Performance Metrics in Elite Soccer Fit for Purpose or is the Adoption of an Integrated Approach Needed?

    PubMed

    Bradley, Paul S; Ade, Jack D

    2018-01-18

Time-motion analysis is a valuable data-collection technique used to quantify the physical match performance of elite soccer players. For over 40 years researchers have adopted a 'traditional' approach when evaluating match demands by simply reporting the distance covered or time spent along a motion continuum of walking through to sprinting. This methodology quantifies physical metrics in isolation without integrating other factors, which ultimately leads to a one-dimensional insight into match performance. Thus, this commentary proposes a novel 'integrated' approach that focuses on a sensitive physical metric such as high-intensity running but contextualizes it in relation to key tactical activities for each position and collectively for the team. In the example presented, the 'integrated' model clearly unveils the unique high-intensity profile that exists due to distinct tactical roles, rather than the one-dimensional 'blind' distances produced by 'traditional' models. Intuitively, this innovative concept may aid coaches' understanding of physical performance in relation to the tactical roles and instructions given to the players. Additionally, it will enable practitioners to more effectively translate match metrics into training and testing protocols. This innovative model may well aid advances in other team sports that incorporate similar intermittent movements with tactical purpose. Evidence of the merits and application of this new concept is needed before the scientific community accepts the model, as it may well add complexity to an area that conceivably needs simplicity.

  15. Sound quality evaluation of air conditioning sound rating metric

    NASA Astrophysics Data System (ADS)

    Hodgdon, Kathleen K.; Peters, Jonathan A.; Burkhardt, Russell C.; Atchley, Anthony A.; Blood, Ingrid M.

    2003-10-01

A product's success can depend on its acoustic signature as much as on the product's performance. The consumer's perception can strongly influence their satisfaction with and confidence in the product. A metric that can rate the content of the spectrum, and predict its consumer preference, is a valuable tool for manufacturers. The current method of assessing acoustic signatures from residential air conditioning units is defined in the Air Conditioning and Refrigeration Institute (ARI 270) 1995 Standard for Sound Rating of Outdoor Unitary Equipment. The ARI 270 metric, and modified versions of that metric, were implemented in software with the flexibility to modify the features applied. Numerous product signatures were analyzed to generate a set of synthesized spectra that targeted spectral configurations that challenged the metric's abilities. A subjective jury evaluation was conducted to establish the consumer preference for those spectra. Statistical correlations were conducted to assess the degree of relationship between the subjective preferences and the various metric calculations. Recommendations were made for modifications to improve the current metric's ability to predict subjective preference. [Research supported by the Air Conditioning and Refrigeration Institute.]

  16. Reference-free ground truth metric for metal artifact evaluation in CT images.

    PubMed

    Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M

    2011-07-01

In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented that does not require an additionally acquired reference data set. The proposed metric provides an inherent ground truth for evaluating metal artifacts as well as for comparing MAR methods, with no reference information in the form of a second acquisition needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.

  17. The Metric System--An Overview.

    ERIC Educational Resources Information Center

    Hovey, Larry; Hovey, Kathi

    1983-01-01

    Sections look at: (1) Historical Perspective; (2) Naming the New System; (3) The Metric Units; (4) Measuring Larger and Smaller Amounts; (5) Advantage of Using the Metric System; (6) Metric Symbols; (7) Conversion from Metric to Customary System; (8) General Hints for Helping Children Understand; and (9) Current Status of Metric Conversion. (MP)

  18. Evaluation of motion artifact metrics for coronary CT angiography.

    PubMed

    Ma, Hongfeng; Gros, Eric; Szabo, Aniko; Baginski, Scott G; Laste, Zachary R; Kulkarni, Naveen M; Okerlund, Darin; Schmidt, Taly G

    2018-02-01

    This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. The Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifacts metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. 
On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where a higher Kendall's Tau signifies higher agreement.
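The agreement statistic used above is standard and easy to reproduce; a pure-Python sketch of Kendall's tau-a (which ignores ties; `scipy.stats.kendalltau` handles ties and significance more carefully):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.

    +1 means the two rankings are identical, -1 means fully reversed,
    0 means no association. Tied pairs count as neither.
    """
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical metric scores vs. reader ranks in the same order: tau = 1.0
tau = kendall_tau([0.1, 0.4, 0.2, 0.8, 0.6], [1, 3, 2, 5, 4])
```

In the study's setting, `x` would be a metric's values over a set of images and `y` the ground-truth reader scores for the same images.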

  19. Observable traces of non-metricity: New constraints on metric-affine gravity

    NASA Astrophysics Data System (ADS)

    Delhom-Latorre, Adrià; Olmo, Gonzalo J.; Ronco, Michele

    2018-05-01

Relaxing the Riemannian condition to incorporate geometric quantities such as torsion and non-metricity may allow us to explore new physics associated with defects in a hypothetical space-time microstructure. Here we show that non-metricity produces observable effects in quantum fields in the form of 4-fermion contact interactions, thereby allowing us to constrain the scale of non-metricity to be greater than 1 TeV by using results on Bhabha scattering. Our analysis is carried out in the framework of a wide class of theories of gravity in the metric-affine approach. The bound obtained represents an improvement of several orders of magnitude over previous experimental constraints.

  20. Comparison of 3D displays using objective metrics

    NASA Astrophysics Data System (ADS)

    Havig, Paul; McIntire, John; Dixon, Sharon; Moore, Jason; Reis, George

    2008-04-01

Previously, we (Havig, Aleva, Reis, Moore, and McIntire, 2007) presented a taxonomy for the development of three-dimensional (3D) displays. We proposed three levels of metrics: objective (in which physical measurements are made of the display), subjective (Likert-type rating scales to show preferences for the display), and subjective-objective (performance metrics that show how a 3D display may be more or less useful than a 2D display or a different 3D display). We concluded that for each level of metric, drawing practical comparisons among currently disparate 3D displays is difficult. In this paper we attempt to define the objective metrics for 3D displays more clearly. We set out to collect and measure physical attributes of several 3D displays and compare the results. We discuss our findings in terms of the difficulties in making the measurements in the first place, owing to the physical set-up of each display, and the issues in comparing the results to judge how similar (or dissimilar) two 3D displays may be. We conclude by discussing the next steps in creating objective metrics for three-dimensional displays, as well as a proposed way ahead for the other two levels of metrics based on our findings.

  1. Future of the PCI Readmission Metric.

    PubMed

    Wasfy, Jason H; Yeh, Robert W

    2016-03-01

Between 2013 and 2014, the Centers for Medicare and Medicaid Services and the National Cardiovascular Data Registry publicly reported risk-adjusted 30-day readmission rates after percutaneous coronary intervention (PCI) as a pilot project. A key strength of this public reporting effort was risk adjustment with clinical rather than administrative data. Furthermore, because readmission after PCI is common, expensive, and preventable, this metric has substantial potential to improve quality and value in American cardiology care. Despite this, concerns about the metric exist. For example, few PCI readmissions are caused by procedural complications, limiting the extent to which improved procedural technique can reduce readmissions. Also, similar to other readmission measures, PCI readmission is associated with socioeconomic status and race. Accordingly, the metric may unfairly penalize hospitals that care for underserved patients. Perhaps in the context of these limitations, the Centers for Medicare and Medicaid Services has not yet included PCI readmission among metrics that determine Medicare financial penalties. Nevertheless, provider organizations may still wish to focus on this metric to improve value for cardiology patients. PCI readmission is associated with low-risk chest discomfort and patient anxiety. Therefore, patient education, improved triage mechanisms, and improved care coordination offer opportunities to minimize PCI readmissions. Because PCI readmission is common and costly, reducing it offers provider organizations a compelling target to improve the quality of care as well as performance in contracts involving shared financial risk. © 2016 American Heart Association, Inc.

  2. Algal bioassessment metrics for wadeable streams and rivers of Maine, USA

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth

    2011-01-01

Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a land-use gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.

  3. Value-based metrics and Internet-based enterprises

    NASA Astrophysics Data System (ADS)

    Gupta, Krishan M.

    2001-10-01

Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others like EVA are for external use. Value-based metrics are used for performing internal audits as well as for comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.
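Of the metrics named above, EVA has the simplest textbook form: net operating profit after taxes minus a charge for the capital employed. A minimal sketch of that definition (the figures below are hypothetical, not from the paper):

```python
def economic_value_added(nopat, wacc, invested_capital):
    """EVA = NOPAT - (WACC * invested capital).

    Positive EVA means the firm earned more than its cost of capital;
    negative EVA means value was destroyed despite an accounting profit.
    """
    return nopat - wacc * invested_capital

# Hypothetical firm: $120M NOPAT, 10% cost of capital, $1,000M capital.
eva = economic_value_added(120.0, 0.10, 1000.0)  # 120 - 100 = 20 ($M)
```

The difficulty the paper points to is not the arithmetic but the inputs: for an Internet firm, invested capital should arguably include intangibles such as customer base and knowledge, which standard accounting omits.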

  4. NASA metrication activities

    NASA Technical Reports Server (NTRS)

    Vlannes, P. N.

    1978-01-01

NASA's organization and policy for metrication, its history from 1964, NASA participation in Federal agency activities, interaction with nongovernmental metrication organizations, and the proposed metrication assessment study are reviewed.

  5. Foul tip impact attenuation of baseball catcher masks using head impact metrics

    PubMed Central

    White, Terrance R.; Cutcliffe, Hattie C.; Shridharani, Jay K.; Wood, Garrett W.; Bass, Cameron R.

    2018-01-01

Currently, no scientific consensus exists on the relative safety of catcher mask styles and materials. Owing to differences in mass and material properties, the style and material of a catcher mask influence the impact metrics observed during simulated foul ball impacts. The catcher surrogate was a Hybrid III head and neck equipped with a six-degree-of-freedom sensor package to obtain linear accelerations and angular rates. Four mask styles were impacted using an air cannon for six 30 m/s and six 35 m/s impacts to the nasion. To quantify impact severity, the metrics peak linear acceleration, peak angular acceleration, Head Injury Criterion, Head Impact Power, and Gadd Severity Index were used. An analysis of covariance and a Tukey's HSD test were conducted to compare the least-squares means between masks for each head injury metric. For each injury metric, a p-value less than 0.05 was found, indicating a significant difference in mask performance. For each metric, Tukey's HSD test found that the traditional-style titanium mask fell in the lowest performance category while the hockey-style mask was in the highest performance category. Limitations of this study prevented a direct correlation from mask testing performance to mild traumatic brain injury. PMID:29856814
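Of the severity metrics listed, the Head Injury Criterion has a standard closed form: HIC = max over windows [t1, t2] of ((1/(t2-t1)) ∫ a dt)^2.5 · (t2-t1), with acceleration a in g and the window capped (15 ms for the common HIC-15). A brute-force sketch of that definition, assuming a sampled acceleration trace:

```python
def hic(t, a, max_window=0.015):
    """Head Injury Criterion (HIC-15 by default).

    t: sample times in seconds (ascending); a: resultant head
    acceleration in g. Searches all windows up to max_window wide.
    """
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            # trapezoidal integral of a(t) over [t[i], t[j]]
            area = sum((a[k] + a[k + 1]) / 2 * (t[k + 1] - t[k])
                       for k in range(i, j))
            best = max(best, (area / dt) ** 2.5 * dt)
    return best

# Sanity check: a constant 50 g over a full 15 ms window gives
# HIC = 50**2.5 * 0.015.
trace_t = [k * 0.001 for k in range(16)]
trace_a = [50.0] * 16
value = hic(trace_t, trace_a)
```

The O(n^2) window search is fine for short impact traces like these; production crash-test codes use the same definition with more efficient search.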

  6. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits that describe human performance for a time of day, spectrum, and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.

  7. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a D×D matrix variable with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a practical limit of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
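The object being learned here is just a symmetric positive semidefinite matrix M defining d_M(x, y)^2 = (x - y)^T M (x - y); the D(D+1)/2 variable count comes from M's free entries. A minimal sketch of the distance itself (not the paper's dual solver), in pure Python:

```python
def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y).

    x, y: sequences of floats; M: D x D matrix as nested lists,
    assumed symmetric positive semidefinite so the result is a
    valid squared distance.
    """
    d = [xi - yi for xi, yi in zip(x, y)]
    return sum(d[i] * M[i][j] * d[j]
               for i in range(len(d)) for j in range(len(d)))

# With M = identity this recovers the squared Euclidean distance;
# learning methods instead fit M (often as L^T L to enforce PSD).
dist = mahalanobis_sq([1.0, 2.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Metric learning algorithms, including the dual approach described above, choose M so that distances shrink between same-class pairs and grow between different-class pairs.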

  8. Optimized technical and scientific design approach for high performance anticoincidence shields

    NASA Astrophysics Data System (ADS)

    Graue, Roland; Stuffler, Timo; Monzani, Franco; Bastia, Paolo; Gryksa, Werner; Pahl, Germit

    2018-04-01

    This paper, "Optimized technical and scientific design approach for high performance anticoincidence shields," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.

  9. Longitudinal Trend Analysis of Performance Indicators for South Carolina's Technical Colleges

    ERIC Educational Resources Information Center

    Hossain, Mohammad Nurul

    2010-01-01

    This study included an analysis of the trend of performance indicators for the technical college sector of higher education in South Carolina. In response to demands for accountability and transparency in higher education, the state of South Carolina developed sector specific performance indicators to measure various educational outcomes for each…

  10. Mastering Metrics

    ERIC Educational Resources Information Center

    Parrot, Annette M.

    2005-01-01

    By the time students reach a middle school science course, they are expected to make measurements using the metric system. However, most are not practiced in its use, as their experience in metrics is often limited to one unit they were taught in elementary school. This lack of knowledge is not wholly the fault of formal education. Although the…

  11. Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach

    PubMed Central

    de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio

    2015-01-01

    Mixed-methods research is well suited to understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restrictions. The main objective of this article is to use a mixed-method approach to quantify the technical efficiency and the excellence achieved in organ transplant systems, and to demonstrate the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analyses show a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels on the Baldrige indexes are a necessary condition for reaching an increased level of service. PMID:25950653

  12. Mental Fatigue Impairs Soccer-Specific Physical and Technical Performance.

    PubMed

    Smith, Mitchell R; Coutts, Aaron J; Merlini, Michele; Deprez, Dieter; Lenoir, Matthieu; Marcora, Samuele M

    2016-02-01

    To investigate the effects of mental fatigue on soccer-specific physical and technical performance. This investigation consisted of two separate studies. Study 1 assessed the soccer-specific physical performance of 12 moderately trained soccer players using the Yo-Yo Intermittent Recovery Test, Level 1 (Yo-Yo IR1). Study 2 assessed the soccer-specific technical performance of 14 experienced soccer players using the Loughborough Soccer Passing and Shooting Tests (LSPT, LSST). Each test was performed on two occasions and preceded, in a randomized, counterbalanced order, by 30 min of the Stroop task (mentally fatiguing treatment) or 30 min of reading magazines (control treatment). Subjective ratings of mental fatigue were measured before and after treatment, and mental effort and motivation were measured after treatment. Distance run, heart rate, and ratings of perceived exertion were recorded during the Yo-Yo IR1. LSPT performance time was calculated as original time plus penalty time. LSST performance was assessed using shot speed, shot accuracy, and shot sequence time. Subjective ratings of mental fatigue and effort were higher after the Stroop task in both studies (P < 0.001), whereas motivation was similar between conditions. This mental fatigue significantly reduced running distance in the Yo-Yo IR1 (P < 0.001). No difference in heart rate existed between conditions, whereas ratings of perceived exertion were significantly higher at iso-time in the mental fatigue condition (P < 0.01). LSPT original time and performance time were not different between conditions; however, penalty time significantly increased in the mental fatigue condition (P = 0.015). Mental fatigue also impaired shot speed (P = 0.024) and accuracy (P < 0.01), whereas shot sequence time was similar between conditions. Mental fatigue impairs soccer-specific running, passing, and shooting performance.

  13. The LSST Metrics Analysis Framework (MAF)

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Zeljko; Krughoff, K. Simon; Petry, Catherine E.; Ridgway, Stephen T.

    2015-01-01

    Studying potential observing strategies or cadences for the Large Synoptic Survey Telescope (LSST) is a complicated but important problem. To address this, LSST has created an Operations Simulator (OpSim) to create simulated surveys, including realistic weather and sky conditions. Analyzing the results of these simulated surveys for the wide variety of science cases to be considered for LSST is, however, difficult. We have created a Metric Analysis Framework (MAF), an open-source Python framework, to be a user-friendly, customizable, and easily extensible tool to help analyze the outputs of the OpSim. MAF reads the pointing history of the LSST generated by the OpSim, then enables the subdivision of these pointings based on position on the sky (RA/Dec, etc.) or the characteristics of the observations (e.g. airmass or sky brightness), and a calculation of how well these observations meet a specified science objective (or metric). An example of a simple metric could be the mean single-visit limiting magnitude for each position in the sky; a more complex metric might be the expected astrometric precision. The output of these metrics can be generated for a full survey, for specified time intervals, or for regions of the sky, and can be easily visualized using a web interface. An important goal for MAF is to facilitate analysis of the OpSim outputs for a wide variety of science cases. A user can often write a new metric to evaluate OpSim for new science goals in less than a day once they are familiar with the framework. Some of these new metrics are illustrated in the accompanying poster, "Analyzing Simulated LSST Survey Performance With MAF". While MAF has been developed primarily for application to OpSim outputs, it can be applied to any dataset. The most obvious examples are examining pointing histories of other survey projects or telescopes, such as CFHT.
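
    The simple example metric named in the abstract, mean single-visit limiting magnitude per sky position, can be sketched outside of MAF itself; the pointing data and the crude square-cell binning below are hypothetical stand-ins for MAF's actual pointing history and sky slicers:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pointing history: (RA deg, Dec deg, single-visit limiting magnitude m5).
pointings = [
    (10.2, -5.1, 24.5),
    (10.4, -5.3, 24.1),
    (85.0, 30.2, 23.8),
    (85.1, 30.0, 24.0),
]

def mean_limiting_mag(pointings, cell_deg=1.0):
    """Bin pointings into crude square sky cells and evaluate the metric:
    mean single-visit limiting magnitude per position on the sky."""
    cells = defaultdict(list)
    for ra, dec, m5 in pointings:
        key = (round(ra / cell_deg), round(dec / cell_deg))
        cells[key].append(m5)
    return {key: mean(mags) for key, mags in cells.items()}
```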

  14. Temporal Changes in Technical and Physical Performances During a Small-Sided Game in Elite Youth Soccer Players

    PubMed Central

    Moreira, Alexandre; Saldanha Aoki, Marcelo; Carling, Chris; Alan Rodrigues Lopes, Rafael; Felipe Schultz de Arruda, Ademir; Lima, Marcelo; Cesar Correa, Umberto; Bradley, Paul S

    2016-01-01

    Background There have been claims that small-sided games (SSG) may generate an appropriate environment to develop youth players’ technical performance associated with game-related problem solving. However, the temporal change in technical performance parameters of youth players during SSG is still unknown. Objectives The aim of this study was to examine temporal changes in technical and physical performances during a small-sided game (SSG) in elite soccer players. Methods Sixty elite youth players (age 14.8 ± 0.2 yr; stature 177 ± 5 cm; body mass 66.2 ± 4.7 kg) completed a 5 v 5 SSG using two repetitions of 8 minutes interspersed by 3 minutes of passive recovery. To evaluate temporal changes in performance, the data were analysed across 4-minute quarters. Physical performance parameters included the total distance covered (TDC), the frequency of sprints (>18 km·h⁻¹), accelerations and decelerations (>2.0 m·s⁻² and -2.0 m·s⁻²), metabolic power (W·kg⁻¹), training impulse (TRIMP), TDC:TRIMP, number of impacts, and body load. Technical performance parameters included goal attempts, total number of tackles, tackles and interceptions, total number of passes, and pass effectiveness. Results All physical performance parameters decreased from the first to the last quarter, with notable declines in TDC, metabolic power, and the frequency of sprints, accelerations, and decelerations (P < 0.05; moderate to very large ES: 1.08 - 3.30). However, technical performance parameters did not vary across quarters (P > 0.05; trivial ES for 1st v 4th quarters: 0.15 - 0.33). Conclusions The data demonstrate that technical performance is maintained despite substantial declines in physical performance during a SSG in elite youth players. This finding may have implications for designing SSGs for elite youth players to ensure physical, technical and tactical capabilities are optimized. Modifications in player number, pitch dimensions, rules, coach encouragement, for instance

  15. Temporal Changes in Technical and Physical Performances During a Small-Sided Game in Elite Youth Soccer Players.

    PubMed

    Moreira, Alexandre; Saldanha Aoki, Marcelo; Carling, Chris; Alan Rodrigues Lopes, Rafael; Felipe Schultz de Arruda, Ademir; Lima, Marcelo; Cesar Correa, Umberto; Bradley, Paul S

    2016-12-01

    There have been claims that small-sided games (SSG) may generate an appropriate environment to develop youth players' technical performance associated with game-related problem solving. However, the temporal change in technical performance parameters of youth players during SSG is still unknown. The aim of this study was to examine temporal changes in technical and physical performances during a small-sided game (SSG) in elite soccer players. Sixty elite youth players (age 14.8 ± 0.2 yr; stature 177 ± 5 cm; body mass 66.2 ± 4.7 kg) completed a 5 v 5 SSG using two repetitions of 8 minutes interspersed by 3 minutes of passive recovery. To evaluate temporal changes in performance, the data were analysed across 4-minute quarters. Physical performance parameters included the total distance covered (TDC), the frequency of sprints (>18 km·h⁻¹), accelerations and decelerations (>2.0 m·s⁻² and -2.0 m·s⁻²), metabolic power (W·kg⁻¹), training impulse (TRIMP), TDC:TRIMP, number of impacts, and body load. Technical performance parameters included goal attempts, total number of tackles, tackles and interceptions, total number of passes, and pass effectiveness. All physical performance parameters decreased from the first to the last quarter, with notable declines in TDC, metabolic power, and the frequency of sprints, accelerations, and decelerations (P < 0.05; moderate to very large ES: 1.08 - 3.30). However, technical performance parameters did not vary across quarters (P > 0.05; trivial ES for 1st v 4th quarters: 0.15 - 0.33). The data demonstrate that technical performance is maintained despite substantial declines in physical performance during a SSG in elite youth players. This finding may have implications for designing SSGs for elite youth players to ensure physical, technical and tactical capabilities are optimized. Modifications in player number, pitch dimensions, rules, coach encouragement, for instance, should be included taking into account the

  16. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
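
    The probabilistic roll-up described above can be illustrated with a toy calculation; the failure modes, probabilities, and the independence assumption below are hypothetical, not taken from the paper:

```python
# Hypothetical per-failure-mode estimates: probability of the failure and the
# effectiveness of each stage of the fault-management control loop, all in [0, 1].
failure_modes = {
    "sensor_drift":   dict(p=0.5, detect=0.95, isolate=0.90, respond=0.85),
    "valve_stuck":    dict(p=0.3, detect=0.80, isolate=0.70, respond=0.90),
    "power_brownout": dict(p=0.2, detect=0.99, isolate=0.95, respond=0.60),
}

def loop_effectiveness(fm):
    """The loop preserves the goal only if state estimation (detection,
    isolation) and state control (response) all succeed; this sketch
    assumes the stages are independent."""
    return fm["detect"] * fm["isolate"] * fm["respond"]

def system_effectiveness(modes):
    """Probability-weighted sum across failure modes, a rough analogue of the
    probabilistic summation over fault management control loops."""
    return sum(fm["p"] * loop_effectiveness(fm) for fm in modes.values())
```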

  17. Rank Order Entropy: why one metric is not enough

    PubMed Central

    McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.

    2011-01-01

    The use of Quantitative Structure-Activity Relationship (QSAR) models to address problems in drug discovery has a mixed history, generally resulting from the misapplication of QSAR models that were either poorly constructed or used outside their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r², PRESS r², F-tests, etc.) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measures the stability of rank-order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank-order models created from representative data sets, descriptor sets, and modeling methods, which were then assessed using Kendall Tau as a rank-order metric, upon which the Shannon entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also
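
    The two ingredients named above, Kendall Tau as a rank-order metric and Shannon entropy as a stability measure over a collection of tau values, can be sketched in a few lines (a simplified stand-alone illustration, not the authors' pipeline):

```python
import math
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation (tau-a, no tie correction) between two
    score vectors over the same items."""
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) / 2
    return (concordant - discordant) / n_pairs

def shannon_entropy(taus, n_bins=5):
    """Entropy (bits) of the empirical distribution of tau values in [-1, 1];
    a wide spread of taus (high entropy) signals unstable rank-order models."""
    counts = [0] * n_bins
    for v in taus:
        idx = min(int((v + 1.0) / 2.0 * n_bins), n_bins - 1)
        counts[idx] += 1
    total = len(taus)
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```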

  18. Sigma Routing Metric for RPL Protocol.

    PubMed

    Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian

    2018-04-21

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by the IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are performed with the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
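
    The core idea, ranking candidate routes by the standard deviation of per-hop ETX rather than its mean along the route, can be sketched as follows (the route data are hypothetical):

```python
from statistics import pstdev

# Candidate routes as lists of per-hop ETX values (hypothetical numbers).
routes = {
    "A": [1.1, 1.2, 1.1, 1.3],   # many short, uniform hops
    "B": [1.0, 3.5],             # fewer hops, but one long unreliable link
}

def sigma_etx(etx_values):
    """SIGMA-ETX-style score: the standard deviation of per-hop ETX values,
    so routes with uniform link quality are preferred over routes whose
    average is dragged around by a single long hop."""
    return pstdev(etx_values)

# Pick the route with the most uniform link quality.
best = min(routes, key=lambda r: sigma_etx(routes[r]))
```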

  19. Sigma Routing Metric for RPL Protocol

    PubMed Central

    Rojas, Aldo; Fernandez, Luis

    2018-01-01

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by the IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are performed with the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption. PMID:29690524

  20. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C., E-mail: chholland@ucsd.edu

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and to the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure and limitations of commonly used global transport model metrics are reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
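
    One common way to fold uncertainty quantification into a validation metric, in the spirit of the comparisons described above, is to normalize the simulation-experiment discrepancy by the combined uncertainties (an illustrative form, not the paper's exact definition):

```python
import math

def normalized_discrepancy(sim, sim_err, exp, exp_err):
    """Distance between a simulated and a measured quantity, expressed in
    units of the combined (quadrature-summed) uncertainty. Values below 1
    indicate agreement within error bars."""
    return abs(sim - exp) / math.sqrt(sim_err**2 + exp_err**2)

# Hypothetical flux comparison: prediction 2.4 +/- 0.3 vs measurement 2.1 +/- 0.2.
d = normalized_discrepancy(sim=2.4, sim_err=0.3, exp=2.1, exp_err=0.2)
```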

  1. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution; hence a good distance metric can improve the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple-kernel representation. With this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  2. First results from a combined analysis of CERN computing infrastructure metrics

    NASA Astrophysics Data System (ADS)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long-term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types, and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions, also in the more constrained environment of public cloud deployments.

  3. Not just trust: factors influencing learners' attempts to perform technical skills on real patients.

    PubMed

    Bannister, Susan L; Dolson, Mark S; Lingard, Lorelei; Keegan, David A

    2018-06-01

    As part of their training, physicians are required to learn how to perform technical skills on patients. The previous literature reveals that this learning is complex and that many opportunities to perform these skills are not converted into attempts to do so by learners. This study sought to explore and understand this phenomenon better. A multi-phased qualitative study including ethnographic observations, interviews and focus groups was conducted to explore the factors that influence technical skill learning. In a tertiary paediatric emergency department, staff physician preceptors, residents, nurses and respiratory therapists were observed in the delivery and teaching of technical skills over a 3-month period. A constant comparison methodology was used to analyse the data and to develop a constructivist grounded theory. We conducted 419 hours of observation, 18 interviews and four focus groups. We observed 287 instances of technical skills, of which 27.5% were attempted by residents. Thematic analysis identified 14 factors, grouped into three categories, which influenced whether residents attempted technical skills on real patients. Learner factors included resident initiative, perceived need for skill acquisition and competing priorities. Teacher factors consisted of competing priorities, interest in teaching, perceived need for residents to acquire skills, attributions about learners, assessments of competency, and trust. Environmental factors were competition from other learners, judgement that the patient was appropriate, buy-in from team members, consent from patient or caregivers, and physical environment constraints. Our findings suggest that neither the presence of a learner in a clinical environment nor the trust of the supervisor is sufficient to ensure the learner will attempt a technical skill. We characterise this phenomenon as representing a pool of opportunities to conduct technical skills on live patients that shrinks to a much smaller pool of

  4. Double metric, generalized metric, and α' -deformed double field theory

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-03-01

    We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.

  5. Resilient Control Systems Practical Metrics Basis for Defining Mission Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig G. Rieger

    "Resilience" describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability in mitigating these risks. Terminology and a metrics basis are provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish "proper operation" and "impact." A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and

  6. Evaluation of eye metrics as a detector of fatigue.

    PubMed

    McKinley, R Andy; McIntire, Lindsey K; Schmidt, Regina; Repperger, Daniel W; Caldwell, John A

    2011-08-01

    This study evaluated oculometrics as a detector of fatigue in Air Force-relevant tasks after sleep deprivation. Using the metrics of percentage of eye closure (PERCLOS) and approximate entropy (ApEn), the relation between these eye metrics and fatigue-induced performance decrements was investigated. One damaging effect on the successful outcome of operational military missions is that attributed to sleep-deprivation-induced fatigue. Consequently, there is interest in the development of reliable monitoring devices that can assess when an operator is overly fatigued. Ten civilian participants volunteered to serve in this study. Each was trained on three performance tasks: target identification, unmanned aerial vehicle landing, and the psychomotor vigilance task (PVT). Experimental testing began after 14 hr awake and continued every 2 hr until 28 hr of sleep deprivation was reached. Performance on the PVT and target identification tasks declined significantly as the level of sleep deprivation increased. These performance declines were paralleled more closely by changes in ApEn than by the PERCLOS measure. The results provide evidence that the ApEn eye metric can be used to detect fatigue in relevant military aviation tasks. Military and commercial operators could benefit from an alertness monitoring device.
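
    Of the two oculometrics, PERCLOS is the simpler to sketch: the proportion of samples in a window during which the eyelid is closed beyond a threshold (a schematic illustration; the 80% threshold and per-frame closure fractions are assumptions, not the study's exact processing):

```python
def perclos(closure_fractions, threshold=0.8):
    """PERCLOS sketch: fraction of frames in which the eyelid closure
    fraction meets or exceeds the threshold (commonly 80% closed)."""
    if not closure_fractions:
        return 0.0
    closed = sum(1 for c in closure_fractions if c >= threshold)
    return closed / len(closure_fractions)
```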

  7. PSQM-based RR and NR video quality metrics

    NASA Astrophysics Data System (ADS)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It exploits the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association with other media (e.g., speech or audio). A PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show that the scheme can improve the performance of current image/video distortion metrics.
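
    One way a significance map can enhance a PSNR-based metric, as suggested above, is to weight the per-pixel squared error by the map before forming the PSNR (a schematic sketch with hypothetical 1-D signals; the paper's map is built from motion, texture, luminance, skin-color and face cues):

```python
import math

def pqsm_weighted_psnr(ref, dist, pqsm, peak=255.0):
    """PSNR over a significance-weighted MSE: regions with higher perceptual
    significance contribute more to the measured distortion."""
    num = sum(w * (r - d) ** 2 for r, d, w in zip(ref, dist, pqsm))
    wmse = num / sum(pqsm)
    return float("inf") if wmse == 0 else 10 * math.log10(peak**2 / wmse)
```

    With a uniform map this reduces to ordinary PSNR; raising the weight on an undistorted region raises the score, and raising it on a distorted region lowers the score.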

  8. Metric Development for Continuous Process Improvement

    DTIC Science & Technology

    2011-03-01

    observable (Cropley, 1998). All measurement is done within a context (Morse, 2003), which is shaped by a purpose, existing knowledge, capabilities, and...performance: metrics for entrepreneurship and strategic management research. Cheltenham, UK: Edward Elgar, 2006. Print. 9. Cropley, D. H., "Towards

  9. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named the Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, including colour-difference-based, gamut-based, and memory-based metrics, as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for conditions where correlated colour temperatures differed.

  10. Writing Performance Goals: Strategy and Prototypes. A Manual for Vocational and Technical Educators.

    ERIC Educational Resources Information Center

    McGraw-Hill Book Co., New York, NY. Gregg Div.

    The result of a cooperative project of the Center for Vocational and Technical Education at the Ohio State University and the McGraw-Hill Book Company, this manual was prepared to develop prototypes of performance goals for use by curriculum specialists and developers of instructional materials in vocational and technical education and to provide…

  11. Evaluation Metrics for Biostatistical and Epidemiological Collaborations

    PubMed Central

    Rubio, Doris McGartland; del Junco, Deborah J.; Bhore, Rafia; Lindsell, Christopher J.; Oster, Robert A.; Wittkowski, Knut M.; Welty, Leah J.; Li, Yi-Ju; DeMets, Dave

    2011-01-01

    Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. PMID:21284015

  12. Evaluation metrics for biostatistical and epidemiological collaborations.

    PubMed

    Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave

    2011-10-15

    Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Metrication study for large space telescope

    NASA Technical Reports Server (NTRS)

    Creswick, F. A.; Weller, A. E.

    1973-01-01

    Various approaches which could be taken in developing a metric-system design for the Large Space Telescope were investigated, considering potential penalties on development cost and time, commonality with other satellite programs, and contribution to national goals for conversion to the metric system of units. Information on the problems, potential approaches, and impacts of metrication was collected from published reports on previous aerospace-industry metrication-impact studies and through numerous telephone interviews. The recommended approach to LST metrication formulated in this study calls for new components and subsystems to be designed in metric-module dimensions, but U.S. customary practice is allowed where U.S. metric standards and metric components are not available or would be unsuitable. Electrical/electronic-system design, which is presently largely metric, is considered exempt from further metrication. An important guideline is that metric design and fabrication should in no way compromise the effectiveness of the LST equipment.

  14. Application of Climate Impact Metrics to Rotorcraft Design

    NASA Technical Reports Server (NTRS)

    Russell, Carl; Johnson, Wayne

    2013-01-01

    Multiple metrics are applied to the design of large civil rotorcraft, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.

  15. Changing to the Metric System.

    ERIC Educational Resources Information Center

    Chambers, Donald L.; Dowling, Kenneth W.

    This report examines educational aspects of the conversion to the metric system of measurement in the United States. Statements of positions on metrication and basic mathematical skills are given from various groups. Base units, symbols, prefixes, and style of the metric system are outlined. Guidelines for teaching metric concepts are given,…

  16. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  17. Implementing the Metric System in Agricultural Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Gilmore, Hal M.; And Others

    Addressed to the agricultural education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach made up…

  18. Implementing the Metric System in Health Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Banks, Wilson P.; And Others

    Addressed to the health occupations education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach…

  19. The Relationship of Aptitudes to the Performance of Skilled Technical Jobs in Engine Manufacturing. Technical Report 1982-5 [and Supplement].

    ERIC Educational Resources Information Center

    Daniel, Mark; And Others

    A study examined the relationship of aptitudes to the performance of skilled technical jobs in engine manufacturing. During the study, several approaches were utilized, including criterion-referenced validation, taxonomic validation, construct validation, and detailed analysis of the behaviors involved in performing the jobs. The study sample…

  20. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
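
    One of the simplest objective metrics of the kind benchmarked against subjective scores in this record is a point-to-point geometry error. The sketch below is an illustrative assumption (function name, brute-force nearest-neighbour search, and RMS pooling are not taken from the paper):

```python
import math

def p2p_rmse(reference, degraded):
    """Symmetric point-to-point geometry error (sketch): for each point,
    take the distance to its nearest neighbour in the other cloud, then
    pool both directions into a single RMS value."""
    def one_way(a, b):
        return [min(math.dist(p, q) for q in b) for p in a]
    d = one_way(reference, degraded) + one_way(degraded, reference)
    return math.sqrt(sum(x * x for x in d) / len(d))
```

Identical clouds score 0; larger values indicate larger geometric impairment. Real implementations replace the O(n^2) search with a k-d tree.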

  1. Metrics in Career Education.

    ERIC Educational Resources Information Center

    Lindbeck, John R.

    The United States is rapidly becoming a metric nation. Industry, education, business, and government are all studying the issue of metrication to learn how they can prepare for it. The book is designed to help teachers and students in career education programs learn something about metrics. Presented in an easily understood manner, the textbook's…

  2. Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills.

    PubMed

    Choudhury, Nusrat; Gélinas-Phaneuf, Nicholas; Delorme, Sébastien; Del Maestro, Rolando

    2013-11-01

    Technical skills training in neurosurgery is mostly done in the operating room. New educational paradigms are encouraging the development of novel training methods for surgical skills. Simulation could answer some of these needs. This article presents the development of a conceptual training framework for use on a virtual reality neurosurgical simulator. Appropriate tasks were identified by reviewing neurosurgical oncology curricula requirements and performing cognitive task analyses of basic techniques and representative surgeries. The tasks were then elaborated into training modules by including learning objectives, instructions, levels of difficulty, and performance metrics. Surveys and interviews were iteratively conducted with subject matter experts to delimitate, review, discuss, and approve each of the development stages. Five tasks were selected as representative of basic and advanced neurosurgical skill. These tasks were: 1) ventriculostomy, 2) endoscopic nasal navigation, 3) tumor debulking, 4) hemostasis, and 5) microdissection. The complete training modules were structured into easy, intermediate, and advanced settings. Performance metrics were also integrated to provide feedback on outcome, efficiency, and errors. The subject matter experts deemed the proposed modules as pertinent and useful for neurosurgical skills training. The conceptual framework presented here, the Fundamentals of Neurosurgery, represents a first attempt to develop standardized training modules for technical skills acquisition in neurosurgical oncology. The National Research Council Canada is currently developing NeuroTouch, a virtual reality simulator for cranial microneurosurgery. The simulator presently includes the five Fundamentals of Neurosurgery modules at varying stages of completion. A first pilot study has shown that neurosurgical residents obtained higher performance scores on the simulator than medical students. Further work will validate its components and use in a

  3. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The principal NASA metrication activities for FY 1989 were a revision of NASA metric policy and evaluation of the impact of using the metric system of measurement for the design and construction of the Space Station Freedom. Additional studies provided a basis for focusing follow-on activity. In FY 1990, emphasis will shift to implementation of metric policy and development of a long-range metrication plan. The report which follows addresses Policy Development, Planning and Program Evaluation, and Supporting Activities for the past and coming year.

  4. Implementing the Metric System in Business Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Retzer, Kenneth A.; And Others

    Addressed to the business education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach made up of…

  5. Implementing the Metric System in Industrial Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Retzer, Kenneth A.

    Addressed to the industrial education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach made up of…

  6. The metric system: An introduction

    NASA Astrophysics Data System (ADS)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost-effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770, which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first the reasons behind the nation's conversion to the metric system are examined. The second part of this report is on applying the metric system.

  7. Bimanual Psychomotor Performance in Neurosurgical Resident Applicants Assessed Using NeuroTouch, a Virtual Reality Simulator.

    PubMed

    Winkler-Schwartz, Alexander; Bajunaid, Khalid; Mullah, Muhammad A S; Marwa, Ibrahim; Alotaibi, Fahad E; Fares, Jawad; Baggiani, Marta; Azarnoush, Hamed; Zharni, Gmaan Al; Christie, Sommer; Sabbagh, Abdulrahman J; Werthner, Penny; Del Maestro, Rolando F

    Current selection methods for neurosurgical residents fail to include objective measurements of bimanual psychomotor performance. Advancements in computer-based simulation provide opportunities to assess cognitive and psychomotor skills in surgically naive populations during complex simulated neurosurgical tasks in risk-free environments. This pilot study was designed to answer 3 questions: (1) What are the differences in bimanual psychomotor performance among neurosurgical residency applicants using NeuroTouch? (2) Are there exceptionally skilled medical students in the applicant cohort? and (3) Is there an influence of previous surgical exposure on surgical performance? Participants were instructed to remove 3 simulated brain tumors with identical visual appearance, stiffness, and random bleeding points. Validated tier 1, tier 2, and advanced tier 2 metrics were used to assess bimanual psychomotor performance. Demographic data included weeks of neurosurgical elective and prior operative exposure. This pilot study was carried out at the McGill Neurosurgical Simulation Research and Training Center immediately following neurosurgical residency interviews at McGill University, Montreal, Canada. All 17 medical students interviewed were asked to participate, of which 16 agreed. Performances were clustered in definable top, middle, and bottom groups with significant differences for all metrics. Increased time spent playing music, increased applicant self-evaluated technical skills, high self-ratings of confidence, and increased skin closures statistically influenced performance on univariate analysis. In multivariate analysis, a trend linked both increased self-rated operating-room confidence and increased weeks of neurosurgical exposure to increased blood loss. Simulation technology identifies neurosurgical residency applicants with differing levels of technical ability.
These results provide information for studies being developed for longitudinal studies on the

  8. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    PubMed

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in the virtual reality simulator showed correlation with experience or with the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
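
    Motion metrics like the instrument path length and economy of motion named in this record can be computed from sampled 3-D tip positions. A minimal sketch under stated assumptions (function names and the start-to-end formulation of economy of motion are illustrative, not the simulators' actual definitions):

```python
import math

def path_length(points):
    """Total instrument path length: sum of Euclidean distances
    between successive 3-D instrument-tip positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def economy_of_motion(points):
    """One common formulation (assumed here): straight-line
    start-to-end distance divided by the distance actually
    travelled; 1.0 means no wasted motion."""
    travelled = path_length(points)
    return math.dist(points[0], points[-1]) / travelled if travelled else 1.0
```

For example, a tip that traverses three sides of a unit square travels a path of length 3 but ends only 1 unit from its start, giving an economy of motion of 1/3.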

  9. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    PubMed

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical performance specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. The Sigma-metric was then estimated for each assay and plotted on a method decision chart, using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and normalized method decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma, and none performed below three Sigma. Sigma-metrics plotted on normalized method decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5, so laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients.
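
    The abstract's equation can be computed directly. The sample numbers below are illustrative only, not values from the study:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (%TEa - |%bias|) / %CV, as given in the abstract.
    All three inputs are percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative only: a total allowable error of 10%, an observed
# bias of 1.5%, and a CV of 1.2% give roughly 7.1 sigma,
# i.e. comfortably above the six-Sigma threshold.
sigma = sigma_metric(10.0, 1.5, 1.2)
```

A larger bias or CV eats into the allowable-error budget and lowers the sigma value, which is why the metric summarizes accuracy and precision in one number.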

  10. Attack-Resistant Trust Metrics

    NASA Astrophysics Data System (ADS)

    Levien, Raph

    The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, which is a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.

  11. Bootstrapping Process Improvement Metrics: CMMI Level 4 Process Improvement Metrics in a Level 3 World

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Lewicki, Scott; Morgan, Scott

    2011-01-01

    The measurement techniques for organizations which have achieved the Software Engineering Institute's CMMI Maturity Levels 4 and 5 are well documented. On the other hand, how to effectively measure when an organization is at Maturity Level 3 is less well understood, especially when there is no consistency in tool use and there is extensive tailoring of the organizational software processes. Most organizations fail in their attempts to generate, collect, and analyze standard process improvement metrics under these conditions. But at JPL, NASA's prime center for deep space robotic exploration, we have a long history of proving there is always a solution: It just may not be what you expected. In this paper we describe the wide variety of qualitative and quantitative techniques we have been implementing over the last few years, including the various approaches used to communicate the results to both software technical managers and senior managers.

  12. Quantitative application of sigma metrics in medical biochemistry.

    PubMed

    Nanda, Sunil Kumar; Ray, Lopamudra

    2013-12-01

    Laboratory errors are the result of a poorly designed quality system in the laboratory. Six Sigma is an error-reduction methodology that has been successfully applied at Motorola and General Electric. Sigma (σ) is the mathematical symbol for standard deviation (SD). Sigma methodology can be applied wherever an outcome of a process has to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). A Six Sigma process is one in which 99.999666% of the products manufactured are statistically expected to be free of defects. Six Sigma concentrates on regulating a process to 6 SDs, which represents 3.4 DPM opportunities. It can be inferred that as sigma increases, the consistency and steadiness of the test improve, thereby reducing operating costs. We aimed to gauge the performance of our laboratory parameters by sigma metrics, evaluating sigma metrics in the interpretation of parameter performance in clinical biochemistry. Six months of internal QC data (October 2012 to March 2013) and EQAS (external quality assurance scheme) results were extracted for the parameters glucose, urea, creatinine, total bilirubin, total protein, albumin, uric acid, total cholesterol, triglycerides, chloride, SGOT, SGPT, and ALP. Coefficients of variation (CV) were calculated from the internal QC for these parameters. Percentage bias for these parameters was calculated from the EQAS. Total allowable errors were followed as per Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from the CV, percentage bias, and total allowable error for the above-mentioned parameters. For total bilirubin, uric acid, SGOT, SGPT, and ALP, the sigma values were found to be more than 6. For glucose, creatinine, triglycerides, and urea, the sigma values were found to be between 3 and 6. For total protein, albumin, cholesterol, and chloride, the sigma values were found to be less than 3.
ALP was the best

  13. Language Games: University Responses to Ranking Metrics

    ERIC Educational Resources Information Center

    Heffernan, Troy A.; Heffernan, Amanda

    2018-01-01

    League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…

  14. Metric Issues for Small Business.

    DTIC Science & Technology

    1981-08-01

    [OCR-garbled scanned record. Recoverable content: "Metric Issues for Small Business," Executive Summary, United States Metric Board, Arlington, VA, August 1981 (DTIC accession AD-A107 861); a legible fragment suggests that small business was meeting the problems of metric conversion within its own resources.]

  15. Uncertainty quantification of environmental performance metrics in heterogeneous aquifers with long-range correlations

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; de Barros, Felipe P. J.

    2017-01-01

    We investigate how the uncertainty stemming from disordered porous media that display long-range correlation in the hydraulic conductivity (K) field propagates to predictions of environmental performance metrics (EPMs). In this study, the EPMs are quantities that are of relevance to risk analysis and remediation, such as peak flux-averaged concentration and early and late arrival times, among others. By using stochastic simulations, we quantify the uncertainty associated with the EPMs for a given disordered spatial structure of the K-field and identify the probability distribution function (PDF) model that best captures the statistics of the EPMs of interest. Results indicate that the probabilistic distribution of the EPMs considered in this study follows a lognormal PDF. Finally, through the use of information theory, we reveal how the persistent/anti-persistent correlation structure of the K-field influences the EPMs and corresponding uncertainties.
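
    A lognormal fit of the kind reported here can be sketched with the closed-form maximum-likelihood estimator: the mu and sigma of a lognormal are the mean and SD of the log-transformed samples. The ensemble below is synthetic and purely illustrative, not the study's data:

```python
import math
import random

def fit_lognormal(samples):
    """Lognormal MLE: mu and sigma of the underlying normal are the
    mean and (population) SD of the log-transformed samples."""
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

# Hypothetical EPM ensemble, e.g. peak flux-averaged concentrations
# from Monte Carlo runs over random K-fields (illustrative numbers).
random.seed(0)
epm = [math.exp(random.gauss(1.0, 0.4)) for _ in range(5000)]
mu, sigma = fit_lognormal(epm)  # recovers approximately (1.0, 0.4)
```

In practice one would fit several candidate PDFs and compare them with a goodness-of-fit test before settling on the lognormal.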

  16. Multidisciplinary life cycle metrics and tools for green buildings.

    PubMed

    Helgeson, Jennifer F; Lippiatt, Barbara C

    2009-07-01

    Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense and 2) measures of cost effectiveness often are not based on standard methods for measuring economic worth. Yet, the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on the energy efficiency of different heating, ventilation, and air conditioning technologies. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.

  17. Comparison of the Performance of Noise Metrics as Predictions of the Annoyance of Stage 2 and Stage 3 Aircraft Overflights

    NASA Technical Reports Server (NTRS)

    Pearsons, Karl S.; Howe, Richard R.; Sneddon, Matthew D.; Fidell, Sanford

    1996-01-01

    Thirty audiometrically screened test participants judged the relative annoyance of two comparison (variable level) and thirty-four standard (fixed level) signals in an adaptive paired comparison psychoacoustic study. The signal ensemble included both FAR Part 36 Stage 2 and 3 aircraft overflights, as well as synthesized aircraft noise signatures and other non-aircraft signals. All test signals were presented for judgment as heard indoors, in the presence of continuous background noise, under free-field listening conditions in an anechoic chamber. Analyses of the performance of 30 noise metrics as predictors of these annoyance judgments confirmed that the more complex metrics were generally more accurate and precise predictors than the simpler methods. EPNL was somewhat less accurate and precise as a predictor of the annoyance judgments than a duration-adjusted variant of Zwicker's Loudness Level.

  18. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing, and rate of change) play a critical role in water supply and flood control, environmental processes, and biodiversity and life history patterns in the aquatic ecosystem. The traditional flow-magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics representing the major characteristics of flow regimes were selected as optimization objectives. Model performance was compared with that of single-objective calibration. Results showed that most metrics were simulated better by the multi-objective approach than by single-objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing, and rate of change. The performance for middle flow magnitude was not significantly improved, however, because this metric is usually well captured by single-objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single-objective calibrations due to uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information through hydrological simulation for integrated water resources management and to improve the simulation of overall flow regimes.
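
    The equally weighted multi-objective idea can be sketched as a single scalar that aggregates normalized errors across flow-regime metrics. The function name and the relative-error normalization are assumptions for illustration, not the paper's exact formulation:

```python
def multi_metric_objective(simulated, observed):
    """Equally weighted aggregate: mean absolute relative error over
    all flow-regime metrics (magnitudes, frequencies, durations, ...).
    Lower is better; 0 means every metric is matched exactly."""
    errors = [abs(s - o) / abs(o) for s, o in zip(simulated, observed)]
    return sum(errors) / len(errors)

# Two metrics: a simulated high-flow magnitude of 10 vs an observed 8
# (25% off) and a low-flow duration of 2 vs 2 (exact) -> objective 0.125.
score = multi_metric_objective([10.0, 2.0], [8.0, 2.0])
```

An optimizer minimizing this scalar trades off all sixteen metrics at once instead of fitting flow magnitude alone, which is the contrast the abstract draws with single-objective calibration.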

  19. On the new metrics for IMRT QA verification.

    PubMed

    Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo

    2016-11-01

    This is due to the fact that the dose constraint is often far from the dose that has an actual impact on the radiobiological model; therefore, biomathematical treatment outcome models are insensitive to large dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models, which provide a better correlation, is proposed. In any case, it is better to choose robust plans from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and presents a good performance in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers regarding dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate if a plan is likely to be rejected. Conventional verification should be replaced by the new metrics, which are clinically more relevant.

  20. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
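
The ensemble idea (bagged regression trees mapping site variables to perceived quality) can be sketched in miniature. The one-split stump learner, the two site variables, and the toy expert scores below are stand-ins for the paper's 30 bagged trees over 13 site variables:

```python
import random

def fit_stump(X, y):
    """Fit a one-split regression stump minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    if best is None:                        # degenerate bootstrap sample
        mean_y = sum(y) / len(y)
        return lambda row: mean_y
    _, j, t, ml, mr = best
    return lambda row, j=j, t=t, ml=ml, mr=mr: ml if row[j] <= t else mr

def bagged_metric(X, y, n_models=30, seed=1):
    """Bootstrap-aggregate stumps; the metric is the mean member prediction."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        members.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: sum(m(row) for m in members) / len(members)

# toy expert scores: quality rises with native forb richness and shrub cover
X = [[forbs, shrubs] for forbs in range(10) for shrubs in range(5)]
y = [5 * forbs + 2 * shrubs for forbs, shrubs in X]
metric = bagged_metric(X, y)
print(round(metric([9, 4]), 1), round(metric([0, 0]), 1))
```

As in the paper, the trained ensemble can then score any site from its measured variables, giving a repeatable consensus in place of a single evaluator.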

  1. Assessment of six dissimilarity metrics for climate analogues

    NASA Astrophysics Data System (ADS)

    Grenier, Patrick; Parent, Annie-Claude; Huard, David; Anctil, François; Chaumont, Diane

    2013-04-01

    Spatial analogue techniques consist in identifying locations whose recent-past climate is similar in some aspects to the future climate anticipated at a reference location. When identifying analogues, one key step is the quantification of the dissimilarity between two climates separated in time and space, which involves the choice of a metric. In this communication, spatial analogues and their usefulness are briefly discussed. Next, six metrics are presented (the standardized Euclidean distance, the Kolmogorov-Smirnov statistic, the nearest-neighbor distance, the Zech-Aslan energy statistic, the Friedman-Rafsky runs statistic and the Kullback-Leibler divergence), along with a set of criteria used for their assessment. The related case study involves the use of numerical simulations performed with the Canadian Regional Climate Model (CRCM-v4.2.3), from which three annual indicators (total precipitation, heating degree-days and cooling degree-days) are calculated over 30-year periods (1971-2000 and 2041-2070). Results indicate that the six metrics identify comparable analogue regions at a relatively large scale, but the best analogues may differ substantially. For the best analogues, it is also shown that the uncertainty stemming from the choice of metric generally does not exceed that stemming from the choice of simulation or model. A synthesis of the advantages and drawbacks of each metric is finally presented, in which the Zech-Aslan energy statistic stands out as the most recommended metric for analogue studies, whereas the Friedman-Rafsky runs statistic is the least recommended, based on this case study.
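
Of the six metrics, the standardized Euclidean distance is the simplest to state: the distance between the mean climates of two samples after each indicator is scaled by its variability. A sketch over synthetic 30-year samples of the three annual indicators (all values illustrative, not CRCM output):

```python
import numpy as np

def standardized_euclidean(ref, cand, scale):
    """Distance between mean climates, each indicator scaled by `scale`
    so that indicators with different units contribute comparably."""
    d = (np.mean(ref, axis=0) - np.mean(cand, axis=0)) / scale
    return float(np.sqrt(np.sum(d ** 2)))

# synthetic 30-year samples of (total precip, heating DD, cooling DD)
rng = np.random.default_rng(0)
ref = rng.normal([900.0, 4500.0, 200.0], [80.0, 300.0, 40.0], size=(30, 3))
cand = ref + [50.0, -200.0, 30.0]        # candidate analogue, shifted climate
scale = ref.std(axis=0)                  # standardize by reference variability
print(round(standardized_euclidean(ref, cand, scale), 3))
```

Scanning this distance over a grid of candidate locations and keeping the minima is the basic analogue-identification step; the other five metrics replace the mean-difference core with distribution-level comparisons.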

  2. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, they have notable shortcomings. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density, overcoming these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
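
The general idea of weighting a similarity score by local fixation density can be illustrated with a density-weighted Pearson correlation. This is a sketch of the weighting principle only, not the authors' exact formulation:

```python
import numpy as np

def weighted_cc(saliency, density):
    """Pearson correlation between a saliency map and a fixation-density map,
    with each pixel weighted by the (normalized) local fixation density, so
    errors in heavily fixated regions count more than errors elsewhere."""
    w = (density / density.sum()).ravel()
    s, f = saliency.ravel().astype(float), density.ravel().astype(float)
    ms, mf = np.sum(w * s), np.sum(w * f)
    cov = np.sum(w * (s - ms) * (f - mf))
    return float(cov / np.sqrt(np.sum(w * (s - ms) ** 2)
                               * np.sum(w * (f - mf) ** 2)))

# toy fixation-density map with a central hump (small offset keeps it positive)
density = np.outer(np.hanning(32), np.hanning(32)) + 1e-6
print(round(weighted_cc(density, density), 3))
```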

  3. Visible contrast energy metrics for detection and discrimination

    NASA Astrophysics Data System (ADS)

    Ahumada, Albert J.; Watson, Andrew B.

    2013-03-01

    Contrast energy was proposed by Watson, Barlow, and Robson (Science, 1983) as a useful metric for representing luminance contrast target stimuli because it represents the detectability of the stimulus in photon noise for an ideal observer. We propose here the use of visible contrast energy metrics for detection and discrimination among static luminance patterns. The visibility is approximated with spatial frequency sensitivity weighting and eccentricity sensitivity weighting. The suggested weighting functions revise the Standard Spatial Observer (Watson and Ahumada, J. Vision, 2005) for luminance contrast detection, extend it into the near periphery, and provide compensation for duration. Under the assumption that detection is limited only by internal noise, both detection and discrimination performance can be predicted by metrics based on the visible energy of the difference images.
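
A visible contrast energy of this kind can be sketched as contrast energy computed after weighting each spatial-frequency component by a contrast sensitivity function (CSF). The weighting function below is a crude band-pass stand-in, not the revised Standard Spatial Observer, and the eccentricity and duration terms are omitted:

```python
import numpy as np

def visible_contrast_energy(contrast, deg_per_px, csf):
    """CSF-weighted ("visible") contrast energy of a static contrast image.

    With csf == 1 everywhere this reduces, by Parseval's theorem, to plain
    contrast energy: the sum of squared contrast times pixel area (deg^2).
    """
    n = contrast.size
    F = np.fft.fft2(contrast) / n                      # DFT amplitudes
    fy = np.fft.fftfreq(contrast.shape[0], d=deg_per_px)
    fx = np.fft.fftfreq(contrast.shape[1], d=deg_per_px)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial freq, c/deg
    weighted = np.abs(F) * csf(f)                      # attenuate per CSF
    return float(np.sum(weighted ** 2) * n * deg_per_px ** 2)

csf = lambda f: f * np.exp(-f / 8.0)        # crude band-pass stand-in CSF
rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64)) * 0.1   # random contrast image
print(visible_contrast_energy(img, 1 / 60, csf))
```

Applying the same function to the difference of two images gives the discrimination metric described in the abstract.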

  4. 7 CFR 1794.4 - Metric units.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... environmental documents using non-metric equivalents with one of the following two options: metric units in parentheses immediately following the non-metric equivalents or a metric conversion table as an appendix...

  5. 7 CFR 1794.4 - Metric units.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... environmental documents using non-metric equivalents with one of the following two options: metric units in parentheses immediately following the non-metric equivalents or a metric conversion table as an appendix...

  6. 7 CFR 1794.4 - Metric units.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... environmental documents using non-metric equivalents with one of the following two options: metric units in parentheses immediately following the non-metric equivalents or a metric conversion table as an appendix...

  7. 7 CFR 1794.4 - Metric units.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... environmental documents using non-metric equivalents with one of the following two options: metric units in parentheses immediately following the non-metric equivalents or a metric conversion table as an appendix...

  8. 7 CFR 1794.4 - Metric units.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... environmental documents using non-metric equivalents with one of the following two options: metric units in parentheses immediately following the non-metric equivalents or a metric conversion table as an appendix...

  9. Dynamic allocation of attention to metrical and grouping accents in rhythmic sequences.

    PubMed

    Kung, Shu-Jen; Tzeng, Ovid J L; Hung, Daisy L; Wu, Denise H

    2011-04-01

    Most people find it easy to perform rhythmic movements in synchrony with music, which reflects their ability to perceive the temporal periodicity and to allocate attention in time accordingly. Musicians and non-musicians were tested in a click localization paradigm in order to investigate how grouping and metrical accents in metrical rhythms influence attention allocation, and to reveal the effect of musical expertise on such processing. We performed two experiments in which the participants were required to listen to isochronous metrical rhythms containing superimposed clicks and then to localize the click on graphical and ruler-like representations with and without grouping structure information, respectively. Both experiments revealed metrical and grouping influences on click localization. Musical expertise improved the precision of click localization, especially when the click coincided with a metrically strong beat. Critically, although all participants located the click accurately at the beginning of an intensity group, only musicians located it precisely when it coincided with a strong beat at the end of the group. Removal of the visual cue of grouping structures enhanced these effects in musicians and reduced them in non-musicians. These results indicate that musical expertise not only enhances attention to metrical accents but also heightens sensitivity to perceptual grouping.

  10. Information-theoretic model comparison unifies saliency metrics

    PubMed Central

    Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias

    2015-01-01

    Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
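
The information-gain idea (score a model by the log-likelihood it assigns to observed fixations, relative to a baseline) can be sketched directly. The maps and fixation list below are toy data, and the joint optimization of scale, center bias, and blur described in the abstract is omitted:

```python
import numpy as np

def information_gain(model, baseline, fixations):
    """Mean advantage, in bits per fixation, of a saliency model over a
    baseline; both maps are normalized to probability distributions first."""
    m = model / model.sum()
    b = baseline / baseline.sum()
    rows, cols = zip(*fixations)
    return float(np.mean(np.log2(m[rows, cols]) - np.log2(b[rows, cols])))

uniform = np.ones((20, 20))                       # image-independent baseline
model = np.ones((20, 20))
model[8:12, 8:12] = 25.0                          # model predicts a hotspot
fixations = [(9, 9), (10, 10), (11, 8), (2, 17)]  # mostly inside the hotspot
print(round(information_gain(model, uniform, fixations), 3))
```

A positive value means the model explains the fixations better than the baseline; zero means no advantage, which is exactly what a model compared against itself yields.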

  11. Performance Metrics as Formal Structures and through the Lens of Social Mechanisms: When Do They Work and How Do They Influence?

    ERIC Educational Resources Information Center

    Colyvas, Jeannette A.

    2012-01-01

    Our current educational environment is subject to persistent calls for accountability, evidence-based practice, and data use for improvement, which largely take the form of performance metrics (PMs). This rapid proliferation of PMs has profoundly influenced the ways in which scholars and practitioners think about their own practices and the larger…

  12. Do-It-Yourself Metrics

    ERIC Educational Resources Information Center

    Klubeck, Martin; Langthorne, Michael; Padgett, Don

    2006-01-01

    Something new is on the horizon, and depending on one's role on campus, it might be storm clouds or a cleansing shower. Either way, no matter how hard one tries to avoid it, sooner rather than later he/she will have to deal with metrics. Metrics do not have to cause fear and resistance. Metrics can, and should, be a powerful tool for improvement.…

  13. Feasibility of and Rationale for the Collection of Orthopaedic Trauma Surgery Quality of Care Metrics.

    PubMed

    Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip

    2017-06-01

    Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate for variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was a large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and cutoff values for each metric.

  14. Key metrics for HFIR HEU and LEU models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Betzler, Benjamin R.; Chandler, David

    This report compares key metrics for two fuel design models of the High Flux Isotope Reactor (HFIR). The first model represents the highly enriched uranium (HEU) fuel currently in use at HFIR, and the second model considers a low-enriched uranium (LEU) interim design fuel. Except for the fuel region, the two models are consistent, and both include an experiment loading that is representative of HFIR's current operation. The considered key metrics are the neutron flux at the cold source moderator vessel, the mass of 252Cf produced in the flux trap target region as a function of cycle time, the fast neutron flux at locations of interest for material irradiation experiments, and the reactor cycle length. These key metrics are a small subset of the overall HFIR performance and safety metrics. They were defined as a means of capturing data essential for HFIR's primary missions, for use in optimization studies assessing the impact of HFIR's conversion from HEU fuel to different types of LEU fuel designs.

  15. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    PubMed

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts; chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. 
Despite this, three metrics were successfully developed.

  16. Hyperkähler metrics on focus-focus fibrations

    NASA Astrophysics Data System (ADS)

    Zhao, Jie

    In this thesis, we focus on the study of hyperkähler metrics in the four-dimensional case, and apply the GMN construction of a hyperkähler metric on focus-focus fibrations. We explicitly compute the action-angle coordinates on the local model of a focus-focus fibration, and show that its semi-global invariant must be harmonic to admit a compatible holomorphic 2-form. Then we study the canonical semi-flat metric on it. After the instanton correction inspired by physics, we obtain a family of generalized Ooguri-Vafa metrics on focus-focus fibrations, providing further local examples of explicit hyperkähler metrics in four dimensions. In addition, we also explore the Ooguri-Vafa metric itself. We study the potential function of the Ooguri-Vafa metric and prove that its nodal set is a cylinder of bounded radius. As a result, we find that the Ooguri-Vafa metric is hyperkähler only on a finite neighborhood of the singular fibre. Finally, we give some estimates for the diameter of the fibration under the Ooguri-Vafa metric, which confirm that the Ooguri-Vafa metric is not complete. The new family of metrics constructed in this thesis will, we believe, provide more examples for the further study of Lagrangian fibrations and mirror symmetry.

  17. Holographic Spherically Symmetric Metrics

    NASA Astrophysics Data System (ADS)

    Petri, Michael

    The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  18. Think Metric

    USGS Publications Warehouse

    ,

    1978-01-01

    The International System of Units, as the metric system is officially called, provides for a single "language" to describe weights and measures over the world. We in the United States together with the people of Brunei, Burma, and Yemen are the only ones who have not put this convenient system into effect. In the passage of the Metric Conversion Act of 1975, Congress determined that we also will adopt it, but the transition will be voluntary.

  19. Framework for performance evaluation of face, text, and vehicle detection and tracking in video: data, metrics, and protocol.

    PubMed

    Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing

    2009-02-01

    Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
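
The spatial core of such detection and tracking metrics is an overlap score between ground-truth and detected objects. A minimal intersection-over-union sketch follows; the framework's actual frame- and sequence-level measures are built on top of per-object overlaps like this:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# a detection shifted half a box-width from the ground truth
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 4))
```

Matching detections to annotations by maximizing such overlaps, then counting matches, misses, and false alarms per frame, yields the kind of objective, reproducible scoring the framework standardizes.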

  20. Metrics for Operator Situation Awareness, Workload, and Performance in Automated Separation Assurance Systems

    NASA Technical Reports Server (NTRS)

    Strybel, Thomas Z.; Vu, Kim-Phuong L.; Battiste, Vernol; Dao, Arik-Quang; Dwyer, John P.; Landry, Steven; Johnson, Walter; Ho, Nhut

    2011-01-01

    A research consortium of scientists and engineers from California State University Long Beach (CSULB), San Jose State University Foundation (SJSUF), California State University Northridge (CSUN), Purdue University, and The Boeing Company was assembled to evaluate the impact of changes in roles and responsibilities and new automated technologies, being introduced in the Next Generation Air Transportation System (NextGen), on operator situation awareness (SA) and workload. To meet these goals, consortium members performed systems analyses of NextGen concepts and airspace scenarios, and concurrently evaluated SA, workload, and performance measures to assess their appropriateness for evaluations of NextGen concepts and tools. The following activities and accomplishments were supported by the NRA: a distributed simulation, metric development, systems analysis, part-task simulations, and large-scale simulations. As a result of this NRA, we have gained a greater understanding of situation awareness and its measurement, and have shared our knowledge with the scientific community. This network provides a mechanism for consortium members, colleagues, and students to pursue research on other topics in air traffic management and aviation, thus enabling them to make greater contributions to the field.

  1. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J.

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. 
Most image quality models are designed for static imagery.

  2. Semantic Metrics for Analysis of Software

    NASA Technical Reports Server (NTRS)

    Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara

    2005-01-01

    A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated from the output of a knowledge-based analysis of the program, and are substantially more representative of software quality, and more readily comprehensible from a human perspective, than are syntactic metrics.

  3. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
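
Two of the Chidamber-Kemerer metrics assessed in such studies, Depth of Inheritance Tree (DIT) and Number of Children (NOC), are simple to compute by class introspection. The study itself measured C++ systems; the Python sketch below only illustrates the definitions on a toy hierarchy:

```python
def dit(cls):
    """Depth of Inheritance Tree: length of the longest base-class chain
    from cls down to the root of the hierarchy."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

def noc(cls):
    """Number of Children: count of immediate subclasses."""
    return len(cls.__subclasses__())

# toy hierarchy: Triangle -> Polygon -> Shape; Circle -> Shape
class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Circle(Shape): pass

print(dit(Triangle), noc(Shape))
```

Fault-proneness models of the kind validated in the paper then regress defect counts against per-class vectors of such metric values.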

  4. Handbook of aircraft noise metrics

    NASA Technical Reports Server (NTRS)

    Bennett, R. L.; Pearsons, K. S.

    1981-01-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
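
One of the multiple-event metrics named, day-night average sound level (DNL), can be computed directly from 24 hourly equivalent levels using the standard 10 dB penalty for nighttime hours (22:00 to 07:00):

```python
import math

def day_night_average_level(hourly_leq_db):
    """DNL from 24 hourly Leq values; night hours (22:00-07:00) get +10 dB.
    Levels are energy-averaged, not arithmetically averaged."""
    assert len(hourly_leq_db) == 24
    total = 0.0
    for hour, leq in enumerate(hourly_leq_db):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total += 10 ** ((leq + penalty) / 10.0)   # sum on an energy basis
    return 10.0 * math.log10(total / 24.0)

# a constant 65 dB Leq all day yields a DNL above 65 due to the night penalty
print(round(day_night_average_level([65.0] * 24), 1))
```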

  5. Handbook of aircraft noise metrics

    NASA Astrophysics Data System (ADS)

    Bennett, R. L.; Pearsons, K. S.

    1981-03-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.

  6. Clean Cities 2010 Annual Metrics Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.

    2012-10-01

    This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2010. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.

  7. Clean Cities 2011 Annual Metrics Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.

    2012-12-01

    This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2011. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.

  8. INDOT Technical Training Plan: Technical Summary

    DOT National Transportation Integrated Search

    2012-01-01

    A wide range of job classifications, increasing technical performance expectations, licensing and certification requirements, budget restrictions and frequent department reorganization has made technical training of employees more difficult, ...

  9. A metric for success

    NASA Astrophysics Data System (ADS)

    Carver, Gary P.

    1994-05-01

    The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.

  10. New VHP-Female v. 2.0 full-body computational phantom and its performance metrics using FEM simulator ANSYS HFSS.

    PubMed

    Yanamadala, Janakinadh; Noetscher, Gregory M; Rathi, Vishal K; Maliye, Saili; Win, Htay A; Tran, Anh L; Jackson, Xavier J; Htet, Aung T; Kozlov, Mikhail; Nazarian, Ara; Louie, Sara; Makarov, Sergey N

    2015-01-01

    Simulation of the electromagnetic response of the human body relies heavily upon efficient computational models or phantoms. The first objective of this paper is to present a new platform-independent full-body electromagnetic computational model (computational phantom), the Visible Human Project® (VHP)-Female v. 2.0, and to describe its distinct features. The second objective is to report phantom simulation performance metrics using the commercial FEM electromagnetic solver ANSYS HFSS.

  11. Calabi-Yau metrics for quotients and complete intersections

    DOE PAGES

    Braun, Volker; Brelidze, Tamaz; Douglas, Michael R.; ...

    2008-05-22

    We extend previous computations of Calabi-Yau metrics on projective hypersurfaces to free quotients, complete intersections, and free quotients of complete intersections. In particular, we construct these metrics on generic quintics, four-generation quotients of the quintic, Schoen Calabi-Yau complete intersections and the quotient of a Schoen manifold with Z₃ × Z₃ fundamental group that was previously used to construct a heterotic standard model. Various numerical investigations into the dependence of Donaldson's algorithm on the integration scheme, as well as on the Kähler and complex structure moduli, are also performed.

  12. Questionable validity of the catheter-associated urinary tract infection metric used for value-based purchasing.

    PubMed

    Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K

    2015-10-01

    Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  13. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis

  14. Scholarly Metrics Baseline: A Survey of Faculty Knowledge, Use, and Opinion about Scholarly Metrics

    ERIC Educational Resources Information Center

    DeSanto, Dan; Nichols, Aaron

    2017-01-01

    This article presents the results of a faculty survey conducted at the University of Vermont during academic year 2014-2015. The survey asked faculty about: familiarity with scholarly metrics, metric-seeking habits, help-seeking habits, and the role of metrics in their department's tenure and promotion process. The survey also gathered faculty…

  15. Diagram of the Saturn V Launch Vehicle in Metric

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is a good cutaway diagram of the Saturn V launch vehicle showing the three stages, the instrument unit, and the Apollo spacecraft. The chart on the right presents the basic technical data in clear metric detail. The Saturn V was the largest and most powerful launch vehicle in the United States. The towering, 111-meter Saturn V was a multistage, multiengine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams. Development of the Saturn V was the responsibility of the Marshall Space Flight Center at Huntsville, Alabama, directed by Dr. Wernher von Braun.

  16. Say "Yes" to Metric Measure.

    ERIC Educational Resources Information Center

    Monroe, Eula Ewing; Nelson, Marvin N.

    2000-01-01

    Provides a brief history of the metric system. Discusses the infrequent use of the metric measurement system in the United States, why conversion from the customary system to the metric system is difficult, and the need for change. (Contains 14 resources.) (ASK)

  17. Asset sustainability index : quick guide : proposed metrics for the long-term financial sustainability of highway networks.

    DOT National Transportation Integrated Search

    2013-04-01

    "This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. It examines how such metrics are used in Australia, Britain...

  18. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The major NASA metrication activity of 1988 concerned the Space Station. Although the metric system was the baseline measurement system for preliminary design studies, solicitations for final design and development of the Space Station Freedom requested use of the inch-pound system because of concerns with cost impact and potential safety hazards. Under that policy, however, use of the metric system would be permitted through waivers where its use was appropriate. Late in 1987, several Department of Defense decisions were made to increase commitment to the metric system, thereby broadening the potential base of metric involvement in U.S. industry. A re-evaluation of Space Station Freedom units of measure policy was, therefore, initiated in January 1988.

  19. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

    Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and a visually and PSNR-optimized JPEG 2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
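As a concrete example of one simple objective metric in this family, PSNR can be sketched as follows. This is illustrative only: the function name and flat pixel-sequence input are assumptions, and real codec evaluations operate on 2-D images:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return math.inf  # identical signals: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Unlike SSIM, PSNR is a purely pointwise distortion measure, which is precisely why perceptually tuned metrics such as multiscale SSIM are studied alongside it.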

  20. Introduction to Metrics.

    ERIC Educational Resources Information Center

    Edgecomb, Philip L.; Shapiro, Marion

    Addressed to vocational, or academic middle or high school students, this book reviews mathematics fundamentals using metric units of measurement. It utilizes a common-sense approach to the degree of accuracy needed in solving actual trade and everyday problems. Stress is placed on reading off metric measurements from a ruler or tape, and on…

  1. Metric System Unit.

    ERIC Educational Resources Information Center

    Maggi, Gayle J. B.

    Thirty-six lessons for introducing the metric system are outlined. Appropriate grade level is not specified. The metric lessons suggested include 13 lessons on length, 7 lessons on mass, 11 lessons on capacity, and 5 lessons on temperature. Each lesson includes a list of needed materials, a statement of the lesson purpose, and suggested…

  2. Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.

    PubMed

    Parsinejad, Payam; Sipahi, Rifat

    2017-12-01

    In this article, we report results on an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable with the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pnn50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
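The physiological baseline metric pnn50 cited above is the fraction of successive inter-beat (RR) interval differences exceeding 50 ms. A minimal stand-alone sketch follows; the function name and millisecond input convention are assumptions, not taken from the article:

```python
def pnn50(rr_intervals_ms):
    """pNN50: fraction of successive RR-interval differences exceeding 50 ms,
    a common heart-rate-variability proxy for mental workload."""
    diffs = [abs(b - a) for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    if not diffs:
        return 0.0  # fewer than two intervals: metric undefined, report 0
    return sum(d > 50 for d in diffs) / len(diffs)
```

For example, the interval series [800, 860, 870, 940, 945] ms has successive differences 60, 10, 70, 5 ms, of which two of four exceed 50 ms, giving pnn50 = 0.5.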

  3. Application of Climate Impact Metrics to Civil Tiltrotor Design

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.; Johnson, Wayne

    2013-01-01

    Multiple metrics are applied to the design of a large civil tiltrotor, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.

  4. Implementing assessments of robot-assisted technical skill in urological education: a systematic review and synthesis of the validity evidence.

    PubMed

    Goldenberg, Mitchell G; Lee, Jason Y; Kwong, Jethro C C; Grantcharov, Teodor P; Costello, Anthony

    2018-03-31

    To systematically review and synthesise the validity evidence supporting intraoperative and simulation-based assessments of technical skill in urological robot-assisted surgery (RAS), and make evidence-based recommendations for the implementation of these assessments in urological training. A literature search of the Medline, PsycINFO and Embase databases was performed. Articles using technical skill and simulation-based assessments in RAS were abstracted. Only studies involving urology trainees or faculty were included in the final analysis. Multiple tools for the assessment of technical robotic skill have been published, with mixed sources of validity evidence to support their use. These evaluations have been used in both the ex vivo and in vivo settings. Performance evaluations range from global rating scales to psychometrics, and assessments are carried out through automation, expert analysts, and crowdsourcing. There have been rapid expansions in approaches to RAS technical skills assessment, both in simulated and clinical settings. Alternative approaches to assessment in RAS, such as crowdsourcing and psychometrics, remain under investigation. Evidence to support the use of these metrics in high-stakes decisions is likely insufficient at present. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  5. Some References on Metric Information.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC.

    This resource work lists metric information published by the U.S. Government and the American National Standards Institute. Also organizations marketing metric materials for education are given. A short table of conversions is included as is a listing of basic metric facts for everyday living. (LS)

  6. Person Re-Identification via Distance Metric Learning With Latent Variables.

    PubMed

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2017-01-01

    In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to latent variables, and then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning the effective metric matrix, which can be solved via an iterative manner: once latent information is specified, the metric matrix can be obtained based on some typical metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
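Evaluating a learned metric matrix reduces to a Mahalanobis-style quadratic form. The sketch below shows only that evaluation step under an already-computed metric matrix M; the function name and plain-list representation are assumptions, and the paper's latent-variable minimization is not reproduced here:

```python
def metric_distance(x, y, M):
    """Squared Mahalanobis-form distance (x - y)^T M (x - y) between two
    feature vectors under a learned metric matrix M."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return sum(d[i] * M[i][j] * d[j] for i in range(n) for j in range(n))
```

With M set to the identity matrix this reduces to the squared Euclidean distance; metric learning replaces M with a matrix fitted so that same-person pairs score small distances.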

  7. Surveillance and Datalink Communication Performance Analysis for Distributed Separation Assurance System Architectures

    NASA Technical Reports Server (NTRS)

    Chung, William W.; Linse, Dennis J.; Alaverdi, Omeed; Ifarraguerri, Carlos; Seifert, Scott C.; Salvano, Dan; Calender, Dale

    2012-01-01

    This study investigates the effects of two technical enablers of the Federal Aviation Administration's Next Generation Air Transportation System (NextGen), Automatic Dependent Surveillance-Broadcast (ADS-B) and digital datalink communication, on overall separation assurance (SA) performance under two SA system architectures: ground-based SA and airborne SA. Datalink performance, such as successful reception probability in both surveillance and communication messages, and surveillance accuracy are examined in various operational conditions. Required SA performance is evaluated as a function of subsystem performance, using availability, continuity, and integrity metrics to establish overall required separation assurance performance under normal and off-nominal conditions.

  8. Going Metric: Looking Ahead. Report of the Metrication Board for 1971.

    ERIC Educational Resources Information Center

    Metrication Board, London (England).

    Great Britain began changing to the metric system in 1965, in order to improve industrial efficiency and to increase its competitive strength in international trade. Despite internal and external pressures calling for acceleration of the rate of change, a loss of momentum in expanding use of metric standards was noted in 1971. In order to…

  9. Metrics. A Basic Core Curriculum for Teaching Metrics to Vocational Students.

    ERIC Educational Resources Information Center

    Albracht, James; Simmons, A. D.

    This core curriculum contains five units for use in teaching metrics to vocational students. Included in the first unit are a series of learning activities to familiarize students with the terminology of metrics, including the prefixes and their values. Measures of distance and speed are covered. Discussed next are measures of volume used with…

  10. Clinical Outcome Metrics for Optimization of Robust Training

    NASA Technical Reports Server (NTRS)

    Ebert, Doug; Byrne, Vicky; Cole, Richard; Dulchavsky, Scott; Foy, Millennia; Garcia, Kathleen; Gibson, Robert; Ham, David; Hurst, Victor; Kerstman, Eric

    2015-01-01

    The objective of this research is to develop and use clinical outcome metrics and training tools to quantify the differences in performance of a physician vs non-physician crew medical officer (CMO) analogues during simulations.

  11. Metrication in a global environment

    NASA Technical Reports Server (NTRS)

    Aberg, J.

    1994-01-01

    A brief history about the development of the metric system of measurement is given. The need for the U.S. to implement the 'SI' metric system in the international markets, especially in the aerospace and general trade, is discussed. Development of metric implementation and experiences locally, nationally, and internationally are included.

  12. Metrication, American Style. Fastback 41.

    ERIC Educational Resources Information Center

    Izzi, John

    The purpose of this pamphlet is to provide a starting point of information on the metric system for any concerned or interested reader. The material is organized into five brief chapters: Man and Measurement; Learning the Metric System; Progress Report: Education; Recommended Sources; and Metrication, American Style. Appendixes include an…

  13. Metrics for covariate balance in cohort studies of causal effects.

    PubMed

    Franklin, Jessica M; Rassen, Jeremy A; Ackermann, Diana; Bartels, Dorothee B; Schneeweiss, Sebastian

    2014-05-10

    Inferring causation from non-randomized studies of exposure requires that exposure groups can be balanced with respect to prognostic factors for the outcome. Although there is broad agreement in the literature that balance should be checked, there is confusion regarding the appropriate metric. We present a simulation study that compares several balance metrics with respect to the strength of their association with bias in estimation of the effect of a binary exposure on a binary, count, or continuous outcome. The simulations utilize matching on the propensity score with successively decreasing calipers to produce datasets with varying covariate balance. We propose the post-matching C-statistic as a balance metric and found that it had consistently strong associations with estimation bias, even when the propensity score model was misspecified, as long as the propensity score was estimated with sufficient study size. This metric, along with the average standardized difference and the general weighted difference, outperformed all other metrics considered in association with bias, including the unstandardized absolute difference, Kolmogorov-Smirnov and Lévy distances, overlapping coefficient, Mahalanobis balance, and L1 metrics. Of the best-performing metrics, the C-statistic and general weighted difference also have the advantage that they automatically evaluate balance on all covariates simultaneously and can easily incorporate balance on interactions among covariates. Therefore, when combined with the usual practice of comparing individual covariate means and standard deviations across exposure groups, these metrics may provide useful summaries of the observed covariate imbalance. Copyright © 2013 John Wiley & Sons, Ltd.
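The average standardized difference highlighted above admits a small stand-alone computation. This sketch assumes the function name, sample standard deviations, and the pooled-SD convention sqrt((s1² + s0²)/2); the paper's simulation machinery is not reproduced:

```python
from statistics import mean, stdev

def abs_standardized_difference(treated, control):
    """Absolute standardized mean difference for one covariate across exposure
    groups, scaled by the pooled standard deviation sqrt((s1^2 + s0^2) / 2)."""
    pooled_sd = ((stdev(treated) ** 2 + stdev(control) ** 2) / 2) ** 0.5
    return abs(mean(treated) - mean(control)) / pooled_sd
```

Because the difference is scaled by the pooled SD rather than left in raw units, values for different covariates are comparable, which is what makes averaging across covariates meaningful as a balance summary.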

  14. Important LiDAR metrics for discriminating forest tree species in Central Europe

    NASA Astrophysics Data System (ADS)

    Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco

    2018-03-01

    Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We firstly assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated to each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5% as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
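The correlation screening step described above (flagging metric pairs with |r| > 0.7) can be sketched as follows. Function names and the list-of-columns input layout are assumptions, not the authors' implementation:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length metric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def redundant_pairs(metric_columns, threshold=0.7):
    """Index pairs of metric columns whose absolute correlation exceeds the
    threshold, i.e. candidates for removal before classification."""
    n = len(metric_columns)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(pearson_r(metric_columns[i], metric_columns[j])) > threshold]
```

Dropping one member of each highly correlated pair reduces redundancy before feeding the metrics to a classifier such as Random Forest.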

  15. Health and Well-Being Metrics in Business: The Value of Integrated Reporting.

    PubMed

    Pronk, Nicolaas P; Malan, Daniel; Christie, Gillian; Hajat, Cother; Yach, Derek

    2018-01-01

    Health and well-being (HWB) are material to sustainable business performance. Yet, corporate reporting largely lacks the intentional inclusion of HWB metrics. This brief report presents an argument for inclusion of HWB metrics into existing standards for corporate reporting. A Core Scorecard and a Comprehensive Scorecard, designed by a team of subject matter experts, based on available evidence of effectiveness, and organized around the categories of Governance, Management, and Evidence of Success, may be integrated into corporate reporting efforts. Pursuit of corporate integrated reporting requires corporate governance and ethical leadership and values that ultimately align with environmental, social, and economic performance. Agreement on metrics that intentionally include HWB may allow for integrated reporting that has the potential to yield significant value for business and society alike.

  16. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  17. Cloud-based Computing and Applications of New Snow Metrics for Societal Benefit

    NASA Astrophysics Data System (ADS)

    Nolin, A. W.; Sproles, E. A.; Crumley, R. L.; Wilson, A.; Mar, E.; van de Kerk, M.; Prugh, L.

    2017-12-01

    Seasonal and interannual variability in snow cover affects socio-environmental systems including water resources, forest ecology, freshwater and terrestrial habitat, and winter recreation. We have developed two new seasonal snow metrics: snow cover frequency (SCF) and snow disappearance date (SDD). These metrics are calculated at 500-m resolution using NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data (MOD10A1). SCF is the number of times snow is observed in a pixel over the user-defined observation period. SDD is the last date of observed snow in a water year. These pixel-level metrics are calculated rapidly and globally in the Google Earth Engine cloud-based environment. SCF and SDD can be interactively visualized in a map-based interface, allowing users to explore spatial and temporal snowcover patterns from 2000-present. These metrics are especially valuable in regions where snow data are sparse or non-existent. We have used these metrics in several ongoing projects. When SCF was linked with a simple hydrologic model in the La Laguna watershed in northern Chile, it successfully predicted summer low flows with a Nash-Sutcliffe value of 0.86. SCF has also been used to help explain changes in Dall sheep populations in Alaska where sheep populations are negatively impacted by late snow cover and low snowline elevation during the spring lambing season. In forest management, SCF and SDD appear to be valuable predictors of post-wildfire vegetation growth. We see a positive relationship between winter SCF and subsequent summer greening for several years post-fire. For western US winter recreation, we are exploring trends in SDD and SCF for regions where snow sports are economically important. In a world with declining snowpacks and increasing uncertainty, these metrics extend across elevations and fill data gaps to provide valuable information for decision-making. 
SCF and SDD are being produced so that anyone with Internet access and a Google
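The two per-pixel metrics defined above can be sketched for a single pixel's daily observation record. The function name and boolean input convention are assumptions; the actual metrics are computed from MOD10A1 retrievals in Google Earth Engine rather than from in-memory lists:

```python
def snow_cover_metrics(snow_observed):
    """Per-pixel snow cover frequency (SCF) and snow disappearance date (SDD).
    snow_observed is a day-of-water-year-ordered sequence of snow/no-snow flags:
    SCF counts snow observations; SDD is the last day snow was observed."""
    scf = sum(snow_observed)
    snow_days = [day for day, snow in enumerate(snow_observed, start=1) if snow]
    sdd = snow_days[-1] if snow_days else None  # None: no snow all water year
    return scf, sdd
```

Computing these counts per pixel is trivially parallel, which is why the cloud-based Earth Engine environment can evaluate them rapidly and globally at 500-m resolution.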

  18. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    PubMed

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. For the study, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001). Gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for prescribed dose advocated by the RTOG lung SBRT protocol were met in ≥94% of cases in all categories. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were a mean of 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our study analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI-Paddick) and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI-Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
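The recommended conformity and gradient surrogates have simple closed forms; a sketch follows, with function and parameter names assumed (TV: target volume, PIV: prescription isodose volume, TV_PIV: their intersection; conventions for the gradient ratio vary between protocols):

```python
def ci_paddick(tv, piv, tv_piv):
    """Paddick conformity index (equivalently the conformation number CN):
    TV_PIV^2 / (TV * PIV). Equals 1.0 only for perfect conformity."""
    return tv_piv ** 2 / (tv * piv)

def r50(v_half_rx, target_volume):
    """RTOG-style gradient measure R50%: volume enclosed by the 50% prescription
    isodose divided by the target volume; smaller means a steeper dose fall-off."""
    return v_half_rx / target_volume
```

For example, a 30 cm³ target with a 33 cm³ prescription isodose volume and 29 cm³ of overlap yields CI ≈ 0.85, penalizing both under-coverage and spill into normal tissue.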

  19. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    PubMed

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified the five most relevant APMs adopted by the algorithm during prediction. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had a significant association with the "ground truth" in surgery time (p < 0.001, r = 0.73), LOS (p = 0.05, r = 0.52), and Foley duration (p < 0.001, r = 0.45). The five most relevant APMs adopted by the RF-50 algorithm in prediction were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and
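    The APM-to-LOS pipeline described above can be sketched as follows, assuming scikit-learn is available. The synthetic features are hypothetical stand-ins for the study's APMs (e.g. camera-manipulation statistics), and the 50-tree forest mirrors the "RF-50" naming; none of the numbers below reproduce the study's results.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_cases = 78                                 # cohort size from the abstract
    apms = rng.normal(size=(n_cases, 5))         # hypothetical APM features per case
    # Training label: 0 = expected LOS (<=2 days), 1 = extended LOS (>2 days)
    los_label = (apms[:, 0] + 0.5 * rng.normal(size=n_cases) > 1.0).astype(int)

    # "RF-50": a random forest with 50 trees
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(apms[:60], los_label[:60])         # train on the first 60 cases
    pred = model.predict(apms[60:])              # classify held-out cases
    accuracy = float((pred == los_label[60:]).mean())
    ```

    The same `feature_importances_` attribute of the fitted forest is the natural way to recover the "most relevant APMs" step the study describes.
    
    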

  20. A Complexity Metric for Automated Separation

    NASA Technical Reports Server (NTRS)

    Aweiss, Arwa

    2009-01-01

    A metric is proposed to characterize airspace complexity with respect to an automated separation assurance function. The Maneuver Option metric is a function of the number of conflict-free trajectory change options the automated separation assurance function is able to identify for each aircraft in the airspace at a given time. By aggregating the metric for all aircraft in a region of airspace, a measure of the instantaneous complexity of the airspace is produced. A six-hour simulation of Fort Worth Center air traffic was conducted to assess the metric. Results showed aircraft were twice as likely to be constrained in the vertical dimension as in the horizontal one. By application of this metric, the situations found to be most complex were those where level overflights and descending arrivals passed through or merged into an arrival stream. The metric identified high-complexity regions that correlate well with current air traffic control operations. The Maneuver Option metric did not correlate with traffic count alone, a result consistent with complexity metrics for human-controlled airspace.
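    One way the per-aircraft option counts might be aggregated into an instantaneous airspace complexity value is sketched below. The `1 / (1 + options)` aggregation is an illustrative assumption, not the paper's formula; it simply encodes the idea that fewer conflict-free maneuver options means higher complexity.

    ```python
    def maneuver_option_complexity(options_per_aircraft):
        """Aggregate instantaneous complexity for a region of airspace from
        per-aircraft counts of conflict-free trajectory-change options.
        Assumed aggregation (illustrative): mean of 1 / (1 + options), so an
        aircraft with zero options contributes 1.0 (fully constrained) and an
        aircraft with many options contributes close to 0."""
        if not options_per_aircraft:
            return 0.0
        return sum(1.0 / (1 + n) for n in options_per_aircraft) / len(options_per_aircraft)
    ```
    
    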

  1. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org; University of Texas Houston Graduate School of Biomedical Science, Houston, Texas; Yang, Jinzhong

    Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. The metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with a Jacobian value of 2.1 and a length of 98.6 mm as the metric values for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced

  2. Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. Research Report. ETS RR-17-20

    ERIC Educational Resources Information Center

    Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David

    2017-01-01

    Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among other metrics, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…

  3. Increasing Army Supply Chain Performance: Using an Integrated End to End Metrics System

    DTIC Science & Technology

    2017-01-01

    [Slide-text residue: Sched Deliver; Sched Delinquent Contracts; Current Metrics; PQDR/SDRs; Forecasting Accuracy; Reliability; Demand Management; Asset Mgmt Strategies; Pipeline.] ...are identified and characterized by statistical analysis. The study proposed a framework and tool for inventory management based on factors such as

  4. What are the Ingredients of a Scientifically and Policy-Relevant Hydrologic Connectivity Metric?

    NASA Astrophysics Data System (ADS)

    Ali, G.; English, C.; McCullough, G.; Stainton, M.

    2014-12-01

    While the concept of hydrologic connectivity is of significant importance to both researchers and policy makers, there is no consensus on how to express it in quantitative terms. This lack of consensus was further exacerbated by recent rulings of the U.S. Supreme Court that rely on the idea of "significant nexuses": critical degrees of landscape connectivity now have to be demonstrated to warrant environmental protection under the Clean Water Act. Several indicators of connectivity have been suggested in the literature, but they are often computationally intensive and require soil water content information, a requirement that makes them inapplicable over large, data-poor areas for which management decisions are needed. Here our objective was to assess the extent to which the concept of connectivity could become more operational by: 1) drafting a list of potential, watershed-scale connectivity metrics; 2) establishing a list of criteria for ranking the performance of those metrics; and 3) testing them in various landscapes. Our focus was on a dozen agricultural Prairie watersheds where the interaction between near-level topography, perennial and intermittent streams, pothole wetlands, and man-made drains renders the estimation of connectivity difficult. A simple procedure was used to convert RADARSAT images, collected between 1997 and 2011, into binary maps of saturated versus non-saturated areas. Several pattern-based and graph-theoretic metrics were then computed for a dynamic assessment of connectivity. The metrics' performance was compared with regard to their sensitivity to antecedent precipitation, their correlation with watershed discharge, and their ability to portray aggregation effects. Results show that no single connectivity metric could satisfy all our performance criteria. Graph-theoretic metrics, however, seemed to perform better in pothole-dominated watersheds, highlighting the need for region-specific connectivity assessment frameworks.
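    A minimal sketch of one pattern-based connectivity metric of the kind compared above, computed directly from a binary saturation map: the fraction of saturated cells belonging to the largest 4-connected cluster. This is an illustrative choice, not necessarily one of the study's metrics.

    ```python
    from collections import deque

    def largest_cluster_fraction(grid):
        """Fraction of saturated cells (value 1) that lie in the largest
        4-connected cluster of a binary saturation map. Returns 0.0 for a
        map with no saturated cells; 1.0 means all wet cells are connected."""
        rows, cols = len(grid), len(grid[0])
        seen = [[False] * cols for _ in range(rows)]
        total = sum(sum(row) for row in grid)
        if total == 0:
            return 0.0
        best = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] and not seen[r][c]:
                    # Breadth-first search over this cluster
                    size, queue = 0, deque([(r, c)])
                    seen[r][c] = True
                    while queue:
                        y, x = queue.popleft()
                        size += 1
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < rows and 0 <= nx < cols \
                                    and grid[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                    best = max(best, size)
        return best / total
    ```

    Graph-theoretic variants would instead build a node/edge graph of wet patches and streams and compute properties such as component counts or shortest paths.
    
    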

  5. Elementary Metric Curriculum - Project T.I.M.E. (Timely Implementation of Metric Education). Part I.

    ERIC Educational Resources Information Center

    Community School District 18, Brooklyn, NY.

    This is a teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. The topics covered include: (1) linear measurement; (2) metric-decimal relationships; (3) metric conversions; (4) geometry; (5) scale drawings; and (6) capacity. This is the first of a two-part…

  6. Learning curves for transapical transcatheter aortic valve replacement in the PARTNER-I trial: Technical performance, success, and safety.

    PubMed

    Suri, Rakesh M; Minha, Sa'ar; Alli, Oluseun; Waksman, Ron; Rihal, Charanjit S; Satler, Lowell P; Greason, Kevin L; Torguson, Rebecca; Pichard, Augusto D; Mack, Michael; Svensson, Lars G; Rajeswaran, Jeevanantham; Lowry, Ashley M; Ehrlinger, John; Mick, Stephanie L; Tuzcu, E Murat; Thourani, Vinod H; Makkar, Raj; Holmes, David; Leon, Martin B; Blackstone, Eugene H

    2016-09-01

    Introduction of hybrid techniques, such as transapical transcatheter aortic valve replacement (TA-TAVR), requires skills that a heart team must master to achieve technical efficiency: the technical performance learning curve. To date, the learning curve for TA-TAVR remains unknown. We therefore evaluated the rate at which technical performance improved, assessed change in occurrence of adverse events in relation to technical performance, and determined whether adverse events after TA-TAVR were linked to acquiring technical performance efficiency (the learning curve). From April 2007 to February 2012, 1100 patients, average age 85.0 ± 6.4 years, underwent TA-TAVR in the PARTNER-I trial. Learning curves were defined by institution-specific patient sequence number using nonlinear mixed modeling. Mean procedure time decreased from 131 to 116 minutes within 30 cases (P = .06) and device success increased to 90% by case 45 (P = .0007). Within 30 days, 354 patients experienced a major adverse event (stroke in 29, death in 96), with possibly decreased complications over time (P ∼ .08). Although longer procedure time was associated with more adverse events (P < .0001), these events were associated with change in patient risk profile, not the technical performance learning curve (P = .8). The learning curve for TA-TAVR was 30 to 45 procedures performed, and technical efficiency was achieved without compromising patient safety. Although fewer patients are now undergoing TAVR via nontransfemoral access, understanding TA-TAVR learning curves and their relationship with outcomes is important as the field moves toward next-generation devices, such as those to replace the mitral valve, delivered via the left ventricular apex. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  7. 14 CFR 1274.206 - Metric Conversion Act.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... metric measurement system is stated in NPD 8010.2, Use of the Metric System of Measurement in NASA... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Metric Conversion Act. 1274.206 Section... WITH COMMERCIAL FIRMS Pre-Award Requirements § 1274.206 Metric Conversion Act. The Metric Conversion...

  8. 14 CFR 1274.206 - Metric Conversion Act.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... metric measurement system is stated in NPD 8010.2, Use of the Metric System of Measurement in NASA... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Metric Conversion Act. 1274.206 Section... WITH COMMERCIAL FIRMS Pre-Award Requirements § 1274.206 Metric Conversion Act. The Metric Conversion...

  9. Coaching Non-technical Skills Improves Surgical Residents' Performance in a Simulated Operating Room.

    PubMed

    Yule, Steven; Parker, Sarah Henrickson; Wilkinson, Jill; McKinley, Aileen; MacDonald, Jamie; Neill, Adrian; McAdam, Tim

    2015-01-01

    To investigate the effect of coaching on non-technical skills and performance during laparoscopic cholecystectomy in a simulated operating room (OR). Non-technical skills (situation awareness, decision making, teamwork, and leadership) underpin technical ability and are critical to the success of operations and the safety of patients in the OR. The rate of developing assessment tools in this area has outpaced development of workable interventions to improve non-technical skills in surgical training and beyond. A randomized trial was conducted with senior surgical residents (n = 16). Participants were randomized to receive either non-technical skills coaching (intervention) or to self-reflect (control) after each of 5 simulated operations. Coaching was based on the Non-Technical Skills For Surgeons (NOTSS) behavior observation system. Surgeon-coaches trained in this method coached participants in the intervention group for 10 minutes after each simulation. Primary outcome measure was non-technical skills, assessed from video by a surgeon using the NOTSS system. Secondary outcomes were time to call for help during bleeding, operative time, and path length of laparoscopic instruments. Non-technical skills improved in the intervention group from scenario 1 to scenario 5 compared with those in the control group (p = 0.04). The intervention group was faster to call for help when faced with unstoppable bleeding in the final scenario (no. 5; p = 0.03). Coaching improved residents' non-technical skills in the simulated OR compared with those in the control group. Important next steps are to implement non-technical skills coaching in the real OR and assess effect on clinically important process measures and patient outcomes. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  10. Ozone (O3) Standards - Other Technical Documents from the Review Completed in 2015

    EPA Pesticide Factsheets

    These memoranda were each sent in to the Ozone NAAQS Review Docket, EPA-HQ-OAR-2008-0699, after the proposed rule was published. They present technical data on the methods, monitoring stations, and metrics used to estimate ozone concentrations.

  11. Evaluation of image quality metrics for the prediction of subjective best focus.

    PubMed

    Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S

    2010-03-01

    Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles and Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths rather than calculation method may be the limiting factor in determining objective best focus from near infrared WA measurements.

  12. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed on the basis of one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method is applicable to the detection of relatively static targets (speckle jitter frequency below the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, can estimate the spot size itself. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications, helping the system optimize its focusing performance.
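    The one-dimensional level-crossing count that underlies a line-by-line extraction can be sketched as follows; the published SPSE metric's exact normalization and aggregation over lines are not reproduced here.

    ```python
    def line_crossings(intensity, level=None):
        """Count level crossings along one extracted speckle line.

        intensity -- sequence of intensity samples along the line
        level     -- crossing threshold; defaults to the line's mean intensity
        Larger speckle grains (i.e. a smaller focused spot) produce fewer
        crossings per unit length, which is the physical basis of the metric."""
        if level is None:
            level = sum(intensity) / len(intensity)
        return sum(
            1
            for a, b in zip(intensity, intensity[1:])
            if (a - level) * (b - level) < 0
        )
    ```
    
    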

  13. Intraoperative adverse events can be compensated by technical performance in neonates and infants after cardiac surgery: a prospective study.

    PubMed

    Nathan, Meena; Karamichalis, John M; Liu, Hua; del Nido, Pedro; Pigula, Frank; Thiagarajan, Ravi; Bacha, Emile A

    2011-11-01

    Our objective was to define the relationship between surgical technical performance score, intraoperative adverse events, and major postoperative adverse events in complex pediatric cardiac repairs. Infants younger than 6 months were prospectively followed up until discharge from the hospital. Technical performance scores were graded as optimal, adequate, or inadequate based on discharge echocardiograms and need for reintervention after initial surgery. Case complexity was determined by Risk Adjustment in Congenital Heart Surgery (RACHS-1) category, and preoperative illness severity was assessed by Pediatric Risk of Mortality (PRISM) III score. Intraoperative adverse events were prospectively monitored. Outcomes were analyzed using nonparametric methods and a logistic regression model. A total of 166 patients (RACHS-1 category 4-6, 49%; neonates, 50%) were observed. Sixty-one (37%) had at least 1 intraoperative adverse event, and 47 (28.3%) had at least 1 major postoperative adverse event. There was no correlation between intraoperative adverse events and RACHS-1 category, preoperative PRISM III score, technical performance score, or postoperative adverse events on multivariate analysis. For the entire cohort, better technical performance scores resulted in fewer postoperative adverse events, lower postoperative PRISM scores, and shorter length of stay and ventilation time (P < .001). Patients requiring intraoperative revisions fared as well as patients without, provided the technical score was at least adequate. In neonatal and infant open heart repairs, technical performance score is one of the main predictors of postoperative morbidity. Outcomes are not affected by intraoperative adverse events, including surgical revisions, provided technical performance score is at least adequate. Copyright © 2011 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  14. 42 CFR 493.1409 - Condition: Laboratories performing moderate complexity testing; technical consultant.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Condition: Laboratories performing moderate complexity testing; technical consultant. 493.1409 Section 493.1409 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION...

  15. 42 CFR 493.1409 - Condition: Laboratories performing moderate complexity testing; technical consultant.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Condition: Laboratories performing moderate complexity testing; technical consultant. 493.1409 Section 493.1409 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION...

  16. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
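    The surrogate-based selection step described above can be sketched as ranking atlases by an image-similarity surrogate and keeping the k most relevant for label fusion. Pearson correlation of intensities is used here as an illustrative stand-in for the surrogate metric; the paper's actual surrogate and inference model are not reproduced.

    ```python
    from statistics import mean

    def pearson(a, b):
        """Pearson correlation between two equal-length intensity vectors."""
        ma, mb = mean(a), mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)

    def select_fusion_set(target, atlases, k):
        """Rank (name, intensities) atlas pairs by surrogate similarity to the
        target image and return the names of the k most relevant atlases."""
        ranked = sorted(atlases, key=lambda na: pearson(target, na[1]), reverse=True)
        return [name for name, _ in ranked[:k]]
    ```

    The open question the paper addresses, how well such a surrogate ordering tracks the inaccessible "oracle" geometric-agreement ordering, is exactly what this ranking step leaves unanswered.
    
    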

  17. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes

    PubMed Central

    Hockberger, Philip E.; Meyn, Susan M.; Nicklin, Connie; Tabarini, Diane; Auger, Julie A.

    2016-01-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across 8 of the following categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel. PMID:26848284

  18. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    PubMed

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across 8 of the following categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel.

  19. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  20. Technical match characteristics and influence of body anthropometry on playing performance in male elite team handball.

    PubMed

    Michalsik, Lars Bojsen; Madsen, Klavs; Aagaard, Per

    2015-02-01

    Modern team handball match-play imposes substantial physical and technical demands on elite players. However, only limited knowledge exists about the specific working requirements in elite team handball. Thus, the purpose of this study was to examine the physical demands imposed on male elite team handball players in relation to playing position and body anthropometry. Based on continuous video recording of individual players during elite team handball match-play (62 tournament games, ∼4 players per game), computerized technical match analysis was performed in male elite team handball players, along with anthropometric measurements, over a span of 6 seasons. Technical match activities were distributed into 6 major types of playing actions (shots, breakthroughs, fast breaks, tackles, technical errors, and defense errors) and further divided into various subcategories (e.g., hard or light tackles, type of shot, claspings, screenings, and blockings). Players showed 36.9 ± 13.1 (group mean ± SD) high-intense technical playing actions per match, with a mean total effective playing time of 53.85 ± 5.87 minutes. In offense, each player performed 6.0 ± 5.2 fast breaks and received 34.5 ± 21.3 tackles in total; in defense, each player performed 3.7 ± 3.5 blockings, 3.9 ± 3.0 claspings, and 5.8 ± 3.6 hard tackles. Wing players (84.5 ± 5.8 kg, 184.9 ± 5.7 cm) were lighter and smaller (p < 0.001) than backcourt players (94.7 ± 7.1 kg, 191.9 ± 5.4 cm) and pivots (99.4 ± 6.2 kg, 194.8 ± 3.6 cm). In conclusion, modern male elite team handball match-play is characterized by a high number of short-term, high-intense intermittent technical playing actions. Indications of technical fatigue were observed. Physical demands differed between playing positions, with wing players performing more fast breaks and fewer physical confrontations with opponent players than backcourt players and pivots. Body anthropometry seemed to have an important influence on playing performance.

  1. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    This article presents the design, creation, and testing of a metric developed for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based on stereoscopic video content analysis; its core functionality is to serve as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be used as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. The designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  2. Biomechanical metrics of aesthetic perception in dance.

    PubMed

    Bronner, Shaw; Shippen, James

    2015-12-01

    The brain may be tuned to evaluate aesthetic perception through perceptual chunking when we observe the grace of the dancer. We modelled biomechanical metrics to explain biological determinants of aesthetic perception in dance. Eighteen expert (EXP) and intermediate (INT) dancers performed développé arabesque in three conditions: (1) slow tempo, (2) slow tempo with relevé, and (3) fast tempo. To compare biomechanical metrics of kinematic data, we calculated intra-excursion variability, principal component analysis (PCA), and dimensionless jerk for the gesture limb. Observers, all trained dancers, viewed motion capture stick figures of the trials and ranked each for aesthetic (1) proficiency and (2) movement smoothness. Statistical analyses included group by condition repeated-measures ANOVA for metric data; Mann-Whitney U rank and Friedman's rank tests for nonparametric rank data; Spearman's rho correlations to compare aesthetic rankings and metrics; and linear regression to examine which metric best quantified observers' aesthetic rankings, p < 0.05. The goodness of fit of the proposed models was determined using Akaike information criteria. Aesthetic proficiency and smoothness rankings of the dance movements revealed differences between groups and condition, p < 0.0001. EXP dancers were rated more aesthetically proficient than INT dancers. The slow and fast conditions were judged more aesthetically proficient than slow with relevé (p < 0.0001). Of the metrics, PCA best captured the differences due to group and condition. PCA also provided the most parsimonious model to explain aesthetic proficiency and smoothness rankings. By permitting organization of large data sets into simpler groupings, PCA may mirror the phenomenon of chunking in which the brain combines sensory motor elements into integrated units of behaviour. In this representation, the chunk of information which is remembered, and to which the observer reacts, is the elemental mode shape of
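    A PCA-derived metric of the kind the study found most informative can be sketched as the fraction of total kinematic variance captured by the first principal component, a crude proxy for how much a movement reduces to a single coordinated "chunk". This is an illustrative reduction assuming NumPy, not the study's exact metric.

    ```python
    import numpy as np

    def first_pc_variance_ratio(X):
        """Fraction of total variance explained by the first principal
        component of a kinematic data matrix X of shape (samples, features),
        e.g. joint-angle time series of the gesture limb. A value near 1.0
        means the movement is dominated by a single mode shape."""
        Xc = X - X.mean(axis=0)                 # center each feature
        cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
        evals = np.linalg.eigvalsh(cov)         # eigenvalues, ascending
        return float(evals[-1] / evals.sum())
    ```
    
    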

  3. Converting Residential Drawing Courses to Metric.

    ERIC Educational Resources Information Center

    Goetsch, David L.

    1980-01-01

    Describes the process of metric conversion in residential drafting courses. Areas of concern are metric paper sizes; metric scale; plot, foundation, floor and electric plans; wall sections; elevations; and heat loss/heat gain calculations. (SK)

  4. Evaluation of Vehicle-Based Crash Severity Metrics.

    PubMed

    Tsoi, Ada H; Gabler, Hampton C

    2015-01-01

    Vehicle change in velocity (delta-v) is a widely used crash severity metric for estimating occupant injury risk. Despite its widespread use, delta-v has several limitations. Of most concern, delta-v is a vehicle-based metric which does not consider the crash pulse or the performance of occupant restraints, e.g., seatbelts and airbags. Such criticisms have prompted the search for alternative impact severity metrics based upon vehicle kinematics. The purpose of this study was to assess the ability of the occupant impact velocity (OIV), acceleration severity index (ASI), vehicle pulse index (VPI), and maximum delta-v (delta-v) to predict serious injury in real world crashes. The study was based on the analysis of event data recorders (EDRs) downloaded from National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) cases from 2000 to 2013. All vehicles in the sample were GM passenger cars and light trucks involved in a frontal collision. Rollover crashes were excluded. Vehicles were restricted to single-event crashes that caused an airbag deployment. All EDR data were checked for a successful, complete recording of the event, including a complete crash pulse. The maximum abbreviated injury scale (MAIS) was used to describe occupant injury outcome. Drivers were categorized into either a non-seriously injured group (MAIS2-) or a seriously injured group (MAIS3+), based on the severity of any injuries to the thorax, abdomen, and spine. ASI and OIV were calculated according to the Manual for Assessing Safety Hardware. VPI was calculated according to ISO/TR 12353-3, with vehicle-specific parameters determined from U.S. New Car Assessment Program crash tests. Using binary logistic regression, the cumulative probability of injury risk was determined for each metric and assessed for statistical significance, goodness-of-fit, and prediction accuracy. The dataset included 102,744 vehicles. A Wald chi-square test showed each vehicle-based crash severity metric
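    Delta-v itself is straightforward to recover from an EDR crash pulse: it is the time integral of the recorded acceleration, reported at its maximum magnitude. A minimal sketch (sampling interval, units, and sign convention are illustrative assumptions, not the EDR's actual format):

```python
import numpy as np

def delta_v(accel_g, dt=0.001):
    """Maximum delta-v from a crash pulse: cumulative trapezoidal integral of
    longitudinal acceleration, reported at its largest magnitude.
    accel_g: deceleration in g's, sampled every dt seconds. Returns m/s."""
    a = np.asarray(accel_g, dtype=float) * 9.81                 # g -> m/s^2
    dv = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2.0 * dt)))
    return float(np.max(np.abs(dv)))
```

    The alternative metrics in the study (OIV, ASI, VPI) build on the same pulse but add occupant free-flight or weighting assumptions per the cited standards.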

  5. Image characterization metrics for muon tomography

    NASA Astrophysics Data System (ADS)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions in the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
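    The uniformity, contrast, and SNR definitions above are simple ratio statistics; a minimal sketch (function and array names are our own):

```python
import numpy as np

def uniformity(voi):
    """Uniformity: std of voxel values in the VOI divided by their mean."""
    voi = np.asarray(voi, dtype=float)
    return voi.std() / voi.mean()

def contrast(peak_signals, background):
    """Contrast: mean peak signal of the cubes over mean background voxel value."""
    return np.mean(peak_signals) / np.mean(background)

def snr(peak_signals, background):
    """SNR: mean peak signal over the standard deviation (noise) of the background."""
    return np.mean(peak_signals) / np.std(background)
```

    For a well-behaved reconstruction, the uniformity of a water VOI should be close to zero, while contrast and SNR for the tungsten cubes should be large.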

  6. Applying graphs and complex networks to football metric interpretation.

    PubMed

    Arriaza-Ardiles, E; Martín-González, J M; Zuniga, M D; Sánchez-Flores, J; de Saa, Y; García-Manso, J M

    2018-02-01

    This work presents a methodology for analysing the interactions between players in a football team, from the point of view of graph theory and complex networks. We model the complex network of passing interactions between players of the same team in 32 official matches of the Liga de Fútbol Profesional (Spain), using a passing/reception graph. This methodology allows us to understand the play structure of the team, by analysing the offensive phases of game-play. We utilise two different strategies for characterising the contribution of the players to the team: the clustering coefficient, and centrality metrics (closeness and betweenness). We show the application of this methodology by analysing the performance of a professional Spanish team according to these metrics and the distribution of passing/reception in the field. Keeping in mind the dynamic nature of collective sports, in the future we will incorporate metrics which allow us to analyse the performance of the team also according to the circumstances of game-play and to different contextual variables such as the utilisation of field space, time, and the ball, according to specific tactical situations. Copyright © 2017 Elsevier B.V. All rights reserved.
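    The graph measures named above have compact definitions on an unweighted graph. A simplified sketch on an undirected toy network (the paper's graphs are directed passing/reception graphs, and the player labels here are hypothetical):

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path lengths (in hops) from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness(adj, v):
    """Closeness centrality: (n - 1) / sum of distances to the other nodes."""
    dist = bfs_distances(adj, v)
    total = sum(d for u, d in dist.items() if u != v)
    return (len(adj) - 1) / total if total else 0.0

def clustering(adj, v):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Hypothetical passing network: players A..E, one edge per passing pair.
graph = {"A": {"B", "C"}, "B": {"A", "C", "D"},
         "C": {"A", "B"}, "D": {"B", "E"}, "E": {"D"}}
```

    In this toy network, player B sits between the A-B-C triangle and the D-E wing, so B scores highest on closeness while C keeps a high clustering coefficient.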

  7. 42 CFR 493.1447 - Condition: Laboratories performing high complexity testing; technical supervisor.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Condition: Laboratories performing high complexity testing; technical supervisor. 493.1447 Section 493.1447 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY...

  8. 42 CFR 493.1447 - Condition: Laboratories performing high complexity testing; technical supervisor.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Condition: Laboratories performing high complexity testing; technical supervisor. 493.1447 Section 493.1447 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY...

  9. The Value of Metrics for Science Data Center Management

    NASA Astrophysics Data System (ADS)

    Moses, J.; Behnke, J.; Watts, T. H.; Lu, Y.

    2005-12-01

    The Earth Observing System Data and Information System (EOSDIS) has been collecting and analyzing records of science data archive, processing and product distribution for more than 10 years. The types of information collected and the analysis performed have matured and progressed to become an integral and necessary part of the system management and planning functions. Science data center managers are realizing the importance that metrics can play in influencing and validating their business model. New efforts focus on better understanding of users and their methods. Examples include tracking user web site interactions and conducting user surveys such as the government authorized American Customer Satisfaction Index survey. This paper discusses the metrics methodology, processes and applications that are growing in EOSDIS, the driving requirements and compelling events, and the future envisioned for metrics as an integral part of earth science data systems.

  10. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... information or comparison. Hard metric means the use of only standard metric (SI) measurements in specifications, standards, supplies and services. Hybrid system means the use of both traditional and hard metric... possible. Alternatives to hard metric are soft, dual and hybrid metric terms. The Metric Handbook for...

  11. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
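    The still-image core of such a metric can be sketched in a few lines: take a blockwise DCT, scale coefficient errors by per-frequency sensitivity weights, and pool with a Minkowski sum. The weights and pooling exponent below are placeholders, not Watson's calibrated values:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the transform used by most codecs)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2)
    return C

def dct_quality_distance(ref, test, weights=None, beta=4.0):
    """Perceptual-style distance between two same-size blocks: DCT coefficient
    errors, optionally scaled by per-frequency sensitivity weights, pooled
    with a Minkowski sum of exponent beta."""
    C = dct_matrix(ref.shape[0])
    err = np.abs(C @ ref @ C.T - C @ test @ C.T)
    if weights is not None:
        err = err * weights        # placeholder visual-sensitivity weighting
    return (err ** beta).sum() ** (1.0 / beta)
```

    A full video metric would add temporal filtering and masking on top of this spatial core, which is what the extension described in the abstract addresses.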

  12. A Sensor-Independent Gust Hazard Metric

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2001-01-01

    A procedure for calculating an intuitive hazard metric for gust effects on airplanes is described. The hazard metric is for use by pilots and is intended to replace subjective pilot reports (PIREPs) of the turbulence level. The hazard metric is composed of three numbers: the first describes the average airplane response to the turbulence, the second describes the positive peak airplane response to the gusts, and the third describes the negative peak airplane response to the gusts. The hazard metric is derived from any time history of vertical gust measurements and is thus independent of the sensor making the gust measurements. The metric is demonstrated for one simulated airplane encountering different types of gusts including those derived from flight data recorder measurements of actual accidents. The simulated airplane responses to the gusts compare favorably with the hazard metric.
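    The three-number metric is easy to state in code. A sketch, assuming the airplane response time history has already been computed from the gust measurements; the "average" component below is a mean absolute value, since the abstract does not pin down the exact statistic:

```python
import numpy as np

def gust_hazard_metric(response):
    """Three-number hazard metric from a time history of airplane response:
    (average response magnitude, positive peak, negative peak)."""
    r = np.asarray(response, dtype=float)
    return float(np.mean(np.abs(r))), float(r.max()), float(r.min())
```

    Because the metric is computed from the response time history alone, any sensor that yields a vertical gust record can feed it, which is the sensor-independence the title refers to.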

  13. Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.

    PubMed

    Luo, Ping; Lin, Liang; Liu, Xiaobai

    2016-07-01

    This paper presents a novel compositional contour-based shape model by incorporating multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary with each other to capture various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely, ϵ-ball, in the sense that it represents local shape variance under a given distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several challenging public data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.

  14. Aluminum-Mediated Formation of Cyclic Carbonates: Benchmarking Catalytic Performance Metrics.

    PubMed

    Rintjema, Jeroen; Kleij, Arjan W

    2017-03-22

    We report a comparative study on the activity of a series of fifteen binary catalysts derived from various reported aluminum-based complexes. A benchmarking of their initial rates in the coupling of various terminal and internal epoxides in the presence of three different nucleophilic additives was carried out, providing for the first time a useful comparison of activity metrics in the area of cyclic organic carbonate formation. These investigations provide a useful framework for how to realistically valorize relative reactivities and which features are important when considering the ideal operational window of each binary catalyst system. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Distance Metric Tracking

    DTIC Science & Technology

    2016-03-02

    some closeness constant and dissimilar pairs be more distant than some larger constant. Online and non-linear extensions to the ITML methodology are...is obtained, instead of solving an objective function formed from the entire dataset. Many online learning methods have regret guarantees, that is... function Metric learning seeks to learn a metric that encourages data points marked as similar to be close and data points marked as different to be far

  16. Study of the Ernst metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esteban, E.P.

    In this thesis some properties of the Ernst metric are studied. This metric could provide a model for a Schwarzschild black hole immersed in a magnetic field. In chapter I, some standard properties of the Ernst metric such as the affine connections, the Riemann, the Ricci, and the Weyl conformal tensor are calculated. In chapter II, the geodesics described by test particles in the Ernst space-time are studied. As an application a formula for the perihelion shift is derived. In the last chapter a null tetrad analysis of the Ernst metric is carried out and the resulting formalism applied to the study of three problems. First, the algebraic classification of the Ernst metric is determined to be of type I in the Petrov scheme. Secondly, an explicit formula for the Gaussian curvature for the event horizon is derived. Finally, the form of the electromagnetic field is evaluated.

  17. How Soon Will We Measure in Metric?

    ERIC Educational Resources Information Center

    Weaver, Kenneth F.

    1977-01-01

    A brief history of measurement systems beginning with the Egyptians and Babylonians is given, ending with a discussion of the metric system and its adoption by the United States. Tables of metric prefixes, metric units, and common metric conversions are included. (MN)

  18. Use of diagnostic accuracy as a metric for evaluating laboratory proficiency with microarray assays using mixed-tissue RNA reference samples.

    PubMed

    Pine, P S; Boedigheimer, M; Rosenzweig, B A; Turpaz, Y; He, Y D; Delenstarr, G; Ganter, B; Jarnagin, K; Jones, W D; Reid, L H; Thompson, K L

    2008-11-01

    Effective use of microarray technology in clinical and regulatory settings is contingent on the adoption of standard methods for assessing performance. The MicroArray Quality Control project evaluated the repeatability and comparability of microarray data on the major commercial platforms and laid the groundwork for the application of microarray technology to regulatory assessments. However, methods for assessing performance that are commonly applied to diagnostic assays used in laboratory medicine remain to be developed for microarray assays. A reference system for microarray performance evaluation and process improvement was developed that includes reference samples, metrics and reference datasets. The reference material is composed of two mixes of four different rat tissue RNAs that allow defined target ratios to be assayed using a set of tissue-selective analytes that are distributed along the dynamic range of measurement. The diagnostic accuracy of detected changes in expression ratios, measured as the area under the curve from receiver operating characteristic plots, provides a single commutable value for comparing assay specificity and sensitivity. The utility of this system for assessing overall performance was evaluated for relevant applications like multi-laboratory proficiency testing programs and single-laboratory process drift monitoring. The diagnostic accuracy of detection of a 1.5-fold change in signal level was found to be a sensitive metric for comparing overall performance. This test approaches the technical limit for reliable discrimination of differences between two samples using this technology. We describe a reference system that provides a mechanism for internal and external assessment of laboratory proficiency with microarray technology and is translatable to performance assessments on other whole-genome expression arrays used for basic and clinical research.
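    The AUC metric used here has a convenient rank interpretation: it equals the probability that a randomly chosen changed (e.g., 1.5-fold) analyte scores higher than a randomly chosen unchanged one (the Mann-Whitney U statistic). A minimal sketch:

```python
def auc(neg_scores, pos_scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of (positive, negative) pairs where the positive scores
    higher, counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))
```

    An AUC of 1.0 means the assay separates changed from unchanged analytes perfectly; 0.5 means it performs no better than chance, which is the scale on which laboratory proficiency is compared here.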

  19. Performance Evaluation of Technical Institutions: An Application of Data Envelopment Analysis

    ERIC Educational Resources Information Center

    Debnath, Roma Mitra; Shankar, Ravi; Kumar, Surender

    2008-01-01

    Technical institutions (TIs) are playing an important role in making India a knowledge hub of this century. There is still great diversity in their relative performance, which is a matter of concern to the education planner. This article employs the method of data envelopment analysis (DEA) to compare the relative efficiency of TIs in India. The…

  20. MDA Establishes Effective Metrics for Energy Reduction and Other Environmental Performance Improvements

    DTIC Science & Technology

    2009-05-06

    More Efficient Fuel, Electricity & Water Use (Cont’d.) Energy and resource conservation campaign: beginning to implement an energy and resource...articles about energy conservation awareness and soliciting employee ideas. Reducing water temperature at MDIOC came from someone reporting the...issue after reading about conservation tips in the newsletter. Fuel, Electricity & Water Use Metrics: MDA’s objective is energy use reduction of 3

  1. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements and assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.

  2. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora

    PubMed Central

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2017-01-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes. PMID:28210517

  3. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    PubMed

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.
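    Graph-based label propagation, the inference step named above, can be sketched generically. This is a standard Zhou-style iteration on a row-normalized affinity matrix, not FacetGist's exact joint optimization; all variable names are our own:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.8, iters=50):
    """Generic graph label propagation: iterate F <- alpha*S*F + (1-alpha)*Y,
    where S is the row-normalized affinity matrix W and Y holds seed label
    scores (one row per node, one column per label). Returns the predicted
    label index for every node."""
    W = np.asarray(W, dtype=float)
    S = W / W.sum(axis=1, keepdims=True)     # assumes no isolated nodes
    Y = np.asarray(Y, dtype=float)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)
```

    On a chain of four concept mentions with the two endpoints seeded with different facets, the two unlabeled middle nodes inherit the facet of their nearer seed.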

  4. Fighter agility metrics, research, and test

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.

    1990-01-01

    Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.

  5. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    NASA Astrophysics Data System (ADS)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy could be found, using performance measures based on return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate if there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in sub-series, which have exactly the same spanning period as that of CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules, which appears when the market is less efficient.
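    The performance measures and one of the simplest of the thousands of rule variants tested in such studies can be sketched as follows (parameters and the zero risk-free rate are illustrative assumptions):

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a return series (risk-free rate taken as 0)."""
    r = np.asarray(returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

def moving_average_rule(prices, fast=5, slow=20):
    """Classic moving-average crossover rule: hold a long position (1) when
    the fast moving average exceeds the slow one, stay flat (0) otherwise.
    Returns the rule's daily log returns."""
    p = np.asarray(prices, dtype=float)
    pos = np.zeros(len(p))
    for t in range(slow, len(p)):
        if p[t - fast:t].mean() > p[t - slow:t].mean():
            pos[t] = 1.0
    rets = np.diff(np.log(p))
    return pos[:-1] * rets
```

    A data-snooping correction such as the Superior Predictive Ability test is then applied across the full universe of rules, since with thousands of variants some will look profitable by chance alone.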

  6. Neural decoding with kernel-based metric learning.

    PubMed

    Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C

    2014-06-01

    In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus: exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
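    Centered alignment, the dependence measure named above, has a compact closed form for two kernel (Gram) matrices. A minimal sketch of evaluating it (the paper optimizes metric parameters against this quantity; here we only compute it):

```python
import numpy as np

def centered_alignment(K, L):
    """Centered kernel alignment between two Gram matrices:
    <Kc, Lc>_F / (||Kc||_F * ||Lc||_F), where Kc = H K H with the
    centering matrix H = I - 11^T/n. Equals 1 for perfectly aligned kernels."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))
```

    Note the measure is invariant to rescaling either kernel, which is why it suits comparing metrics with different units across neurons.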

  7. Effects of Performance-Based Financial Incentives on Work Performance: A Study of Technical-Level Employees in the Private Sector in Sri Lanka

    ERIC Educational Resources Information Center

    Wickramasinghe, Vathsala; Dabere, Sampath

    2012-01-01

    The objective of the study is to investigate the effect of performance-based financial incentives on work performance. The study hypothesized that the design features of performance-based financial incentive schemes themselves may influence individuals' work performance. For the study, survey methodology was used and 93 technical-level employees…

  8. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias insensitive metrics which further allow for a comparison of variables which are related but may differ in units. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
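    The three SPAEF components combine in an Euclidean-distance form analogous to the Kling-Gupta efficiency. A minimal sketch of our reading of the metric (bin count and implementation details are our own assumptions; consult the paper for the authoritative formulation):

```python
import numpy as np

def spaef(obs, sim, bins=20):
    """SPAtial EFficiency: 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2), with
    A = Pearson correlation, B = ratio of coefficients of variation, and
    C = overlap of the z-scored value histograms. Ideal value is 1."""
    obs, sim = np.ravel(obs).astype(float), np.ravel(sim).astype(float)
    A = np.corrcoef(obs, sim)[0, 1]
    B = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    zo = (obs - obs.mean()) / obs.std()
    zs = (sim - sim.mean()) / sim.std()
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    C = np.minimum(ho, hs).sum() / ho.sum()     # histogram intersection
    return 1.0 - np.sqrt((A - 1)**2 + (B - 1)**2 + (C - 1)**2)
```

    Because the histogram component works on z-scored fields, the metric stays insensitive to bias and unit differences, which is the property highlighted for remote sensing observations.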

  9. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the use of an inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is threefold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.
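    The object being learned here, a Mahalanobis distance metric, is just a distance parameterized by a positive semi-definite matrix M. A minimal sketch of evaluating one (JLLDM learns M jointly with the labels; we only show what M defines):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis-type distance parameterized by a PSD matrix M:
    d(x, y) = sqrt((x - y)^T M (x - y)). M = I recovers Euclidean distance;
    a learned M stretches or shrinks directions of the feature space."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))
```

    The "scale of the metric" point 3) refers to the fact that the overall magnitude of M is determined by the optimization itself, so no separate neighborhood-radius parameter is required.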

  10. Simulator training and non-technical factors improve laparoscopic performance among OBGYN trainees.

    PubMed

    Ahlborg, Liv; Hedman, Leif; Nisell, Henry; Felländer-Tsai, Li; Enochsson, Lars

    2013-10-01

    To investigate how simulator training and non-technical factors affect laparoscopic performance among residents in obstetrics and gynecology. In this prospective study, trainees were randomized into three groups. The first group was allocated to proficiency-based training in the LapSimGyn(®) virtual reality simulator. The second group received additional structured mentorship during subsequent laparoscopies. The third group served as control group. At baseline an operation was performed and visuospatial ability, flow and self-efficacy were assessed. All groups subsequently performed three tubal occlusions. Self-efficacy and flow were assessed before and/or after each operation. Simulator training was conducted at the Center for Advanced Medical Simulation and Training, Karolinska University Hospital. Sterilizations were performed at each trainee's home clinic. Twenty-eight trainees/residents from 21 hospitals in Sweden were included. Visuospatial ability was tested by the Mental Rotation Test-A. Flow and self-efficacy were assessed by validated scales and questionnaires. Laparoscopic performance was measured as the duration of surgery. Visuospatial ability, self-efficacy and flow were correlated to the laparoscopic performance using Spearman's correlations. Differences between groups were analyzed by the Mann-Whitney U-test. No differences across groups were detected at baseline. Self-efficacy scores before and flow scores after the third operation were significantly higher in the trained groups. Duration of surgery was significantly shorter in the trained groups. Flow and self-efficacy correlate positively with laparoscopic performance. Simulator training and non-technical factors appear to improve the laparoscopic performance among trainees/residents in obstetrics and gynecology. © 2013 Nordic Federation of Societies of Obstetrics and Gynecology.

  11. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is in various policy domains an increasing call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the
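    The abstract contrasts the two metric families without giving formulas. A sketch of one common instance of each, under our own conventions (higher performance is better; rows are plans, columns are plausible futures); Starr's domain criterion and minimax regret are standard examples, not necessarily the exact metrics the paper evaluates:

    ```python
    import numpy as np

    def domain_criterion(performance, threshold):
        """Satisficing: the fraction of plausible futures in which a plan meets
        the performance threshold (Starr's domain criterion). `performance` is
        a 1-D array with one score per future."""
        performance = np.asarray(performance, dtype=float)
        return float(np.mean(performance >= threshold))

    def max_regret(performance):
        """Regret-based: for each plan (row), the worst-case shortfall relative
        to the best-performing plan in every future (column). Minimax regret
        prefers the plan with the smallest value."""
        performance = np.asarray(performance, dtype=float)
        regret = performance.max(axis=0) - performance  # shortfall vs. per-future best
        return regret.max(axis=1)
    ```

    The two families can rank the same candidate plans differently, which is exactly the sensitivity the paper sets out to investigate.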

  12. Analysis of Trinity Power Metrics for Automated Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalenko, Ashley Christine

    This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for analysis, tools used, the methodology, work performed during the summer, and future work planned.

  13. Metrics for Food Distribution.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in food distribution, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  14. Arbitrary Metrics in Psychology

    ERIC Educational Resources Information Center

    Blanton, Hart; Jaccard, James

    2006-01-01

    Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2…

  15. Metrics for Recreation & Tourism.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of recreation and tourism students, this instructional package is one of three for the hospitality and recreation occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  16. Metrics for Food Services.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of food services students, this instructional package is one of three for the hospitality and recreation occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  17. Metric. Career Education Program.

    ERIC Educational Resources Information Center

    Salem City Schools, NJ.

    This is a compilation of instructional materials to assist teachers and students in learning about the metric system. Contents are organized into four color-coded sections containing the following: (1) background and reference materials for the teacher, including a list of available media and a conversion chart; (2) metric activities for primary…

  18. Implementing the Metric System in Personal and Public Service Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Banks, Wilson P.; And Others

    Addressed to the personal and public service occupations teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric"…

  19. Vehicle Integrated Performance Analysis, the VIPA Experience: Reconnecting with Technical Integration

    NASA Technical Reports Server (NTRS)

    McGhee, David S.

    2005-01-01

    Today's NASA is facing significant challenges and changes. The Exploration initiative indicates a large increase in projects with limited increase in budget. The Columbia report has criticized NASA for its lack of insight and technical integration impacting its ability to provide safety. The Aldridge report is advocating NASA find new ways of doing business. Very early in the Space Launch Initiative (SLI) program a small team of engineers at MSFC was asked to propose a process for performing a system-level assessment of a launch vehicle. The request was aimed primarily at providing insight and making NASA a "smart buyer." Out of this effort the VIPA team was created. The difference between the VIPA effort and many integration attempts is that VIPA focuses on using experienced people from various disciplines and a process which focuses them on a technically integrated assessment. Most previous attempts have focused on developing an all-encompassing software tool. In addition, VIPA anchored its process formulation in the experience of its members and in early developmental Space Shuttle experience. The primary reference for this is NASA-TP-2001-210092, "Launch Vehicle Design Process: Characterization, Technical Integration, and Lessons Learned," and discussions with its authors. The foundations of VIPA's process are described. The VIPA team also recognized the need to drive detailed analysis earlier in the design process. Analyses and techniques typically done in later design phases are brought forward using improved computing technology. The intent is to allow the identification of significant sensitivities, trades, and design issues much earlier in the program. This process is driven by the T-model for Technical Integration described in the aforementioned reference. VIPA's approach to performing system-level technical integration is discussed in detail. Proposed definitions are offered to clarify this discussion and the general systems integration dialog. VIPA

  20. Metric Education. Interpretive Report No. 1.

    ERIC Educational Resources Information Center

    George Washington Univ., Washington, DC. Inst. for Educational Leadership.

    This report reviews the findings of two projects funded by the National Institute of Education (NIE) and conducted by the American Institutes for Research (AIR). The project reports, "Going Metric" and "Metric Inservice Teacher Training," document the impact of metric conversion on the educational systems of Great Britain, New…

  1. Metric anisotropies and emergent anisotropic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dash, Ashutosh; Jaiswal, Amaresh

    2018-05-01

    Expansion of a locally equilibrated fluid is considered in an anisotropic space-time given by the Bianchi type-I metric. Starting from the isotropic equilibrium phase-space distribution function in the local rest frame, we obtain expressions for components of the energy-momentum tensor and conserved current, such as number density, energy density, and pressure components. In the case of an axisymmetric Bianchi type-I metric, we show that they are identical to those obtained within the setup of anisotropic hydrodynamics. We further consider the case in which the Bianchi type-I metric is a vacuum solution of the Einstein equation: the Kasner metric. For the axisymmetric Kasner metric, we discuss the implications of our results in the context of anisotropic hydrodynamics.

  2. An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN

    PubMed Central

    Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network of considerable research value. The routing strategy determines the performance of the network and the quality of transmission: a good routing algorithm will use the whole bandwidth of the network and assure the quality of service of its traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, an opportunistic routing mechanism for WMN that combines long-term and short-term metrics, based on OLSR (optimized link state routing) and ETX, is proposed in this paper to improve routing performance. This mechanism always chooses the highest-throughput links to improve routing performance over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX. PMID:25250379

  3. An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.

    PubMed

    Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network of considerable research value. The routing strategy determines the performance of the network and the quality of transmission: a good routing algorithm will use the whole bandwidth of the network and assure the quality of service of its traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, an opportunistic routing mechanism for WMN that combines long-term and short-term metrics, based on OLSR (optimized link state routing) and ETX, is proposed in this paper to improve routing performance. This mechanism always chooses the highest-throughput links to improve routing performance over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.
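    Both records above lean on ETX without defining it; the standard definition (De Couto et al.) can be sketched as follows, with `links` a hypothetical list of (forward, reverse) delivery-ratio pairs rather than anything from the paper:

    ```python
    def link_etx(d_f, d_r):
        """Expected transmission count of one link (De Couto et al.):
        ETX = 1 / (d_f * d_r), where d_f and d_r are the forward and reverse
        packet delivery ratios, typically measured with probe broadcasts."""
        return 1.0 / (d_f * d_r)

    def path_etx(links):
        """Route ETX is the sum of its link ETX values; OLSR with the ETX
        metric prefers the path minimizing this sum."""
        return sum(link_etx(d_f, d_r) for d_f, d_r in links)
    ```

    Because ETX counts expected retransmissions rather than measuring throughput directly, a low-ETX path can still be a low-quality one, which is the shortcoming the proposed mechanism addresses.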

  4. Numerical Calabi-Yau metrics

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René

    2008-03-01

    We develop numerical methods for approximating Ricci-flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one-parameter family of quintics. We also suggest several ways to extend our results.

  5. Metrics for Nurses Aides.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in becoming nurses aides, this instructional package is one of five for the health occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  6. Evaluation Metrics for the Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Traversat, Bernard; McNab, David; Nitzberg, Bill; Fineberg, Sam; Blaylock, Bruce T. (Technical Monitor)

    1993-01-01

    On February 17th 1993, the Numerical Aerodynamic Simulation (NAS) facility located at the NASA Ames Research Center installed a 224-node Intel Paragon XP/S-15 system. After its installation, the Paragon was found to be in a very immature state and was unable to support the NAS users' workload, composed of a wide range of development and production activities. As a first step towards addressing this problem, we implemented a set of metrics to objectively monitor the system as operating system and hardware upgrades were installed. The metrics were designed to measure four aspects of the system that we consider essential to support our workload: availability, utilization, functionality, and performance. This report presents the metrics collected from February 1993 to August 1993. Since its installation, the Paragon's availability has improved from a low of 15% uptime to a high of 80%, while its utilization has remained low. Functionality and performance have improved from merely running one of the NAS Parallel Benchmarks to running all of them faster (between 1 and 2 times) than the iPSC/860. In spite of the progress accomplished, fundamental limitations of the Paragon operating system are restricting the Paragon from supporting the NAS workload. The maximum operating system message passing (NORMA IPC) bandwidth was measured at 11 Mbytes/s, well below the peak hardware bandwidth (175 Mbytes/s), limiting overall virtual memory and Unix services (i.e., disk and HiPPI I/O) performance. The high NX application message passing latency (184 microseconds), three times that of the iPSC/860, was found to significantly degrade the performance of applications relying on small message sizes. The amount of memory available for an application was found to be approximately 10 Mbytes per node, indicating that the OS is taking more space than anticipated (6 Mbytes per node).

  7. That's How We Roll: The NASA K2 Mission Science Products and Their Performance Metrics

    NASA Astrophysics Data System (ADS)

    Van Cleve, Jeffrey E.; Howell, Steve B.; Smith, Jeffrey C.; Clarke, Bruce D.; Thompson, Susan E.; Bryson, Stephen T.; Lund, Mikkel N.; Handberg, Rasmus; Chaplin, William J.

    2016-07-01

    NASA's exoplanet Discovery mission Kepler was reconstituted as the K2 mission a year after the failure of the second of Kepler's four reaction wheels in 2013 May. Fine control of the spacecraft pointing is now accomplished through the use of the two remaining well-functioning reaction wheels and balancing the pressure of sunlight on the solar panels, which constrains K2 observations to fields in the ecliptic for up to approximately 80 days each. This pseudo-stable mechanism gives typical roll motion in the focal plane of 1.0 pixels peak-to-peak over 6 hr at the edges of the field, two orders of magnitude greater than typical 6 hr pointing errors in the Kepler primary mission. Despite these roll errors, the joint performance of the flight system and its modified science data processing pipeline restores much of the photometric precision of the primary mission while viewing a wide variety of targets, thus turning adversity into diversity. We define K2 performance metrics for data compression and pixel budget available in each campaign; the photometric noise on exoplanet transit and stellar activity timescales; residual correlations in corrected long-cadence light curves; and the protection of test sinusoidal signals from overfitting in the systematic error removal process. We find that data compression and noise both increase linearly with radial distance from the center of the field of view, with the data compression proportional to star count as well. At the center, where roll motion is nearly negligible, the limiting 6 hr photometric precision for a quiet 12th magnitude star can be as low as 30 ppm, only 25% higher than that of Kepler. This noise performance is achieved without sacrificing signal fidelity; test sinusoids injected into the data are attenuated by less than 10% for signals with periods up to 15 days, so that a wide range of stellar rotation and variability signatures are preserved by the K2 pipeline.
At timescales relevant to asteroseismology, light

  8. Development of Technology Transfer Economic Growth Metrics

    NASA Technical Reports Server (NTRS)

    Mastrangelo, Christina M.

    1998-01-01

    The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.

  9. Parameter-space metric of semicoherent searches for continuous gravitational waves

    NASA Astrophysics Data System (ADS)

    Pletsch, Holger J.

    2010-08-01

    Continuous gravitational-wave (CW) signals such as emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for prior unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, understanding of the underlying parameter-space structure is requisite. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. From the search parameters (sky position, frequency, and frequency derivatives), solely the metric resolution in the frequency derivatives is found to significantly increase with the number of segments.

  10. Metrics, Lumber, and the Shop Teacher

    ERIC Educational Resources Information Center

    Craemer, Peter J.

    1978-01-01

    As producers of lumber are preparing to convert their output to the metric system, wood shop and building construction teachers must become familiar with the metric measurement language and methods. Manufacturers prefer the "soft conversion" process of changing English to metric units rather than hard conversion, or redimensioning of lumber. Some…

  11. Congress Inches Away from Metric Conversion

    ERIC Educational Resources Information Center

    Russell, Cristine

    1974-01-01

    Reasons are discussed concerning the House of Representatives' defeat in 1974 of a bill to establish a National Metric Conversion Board, which would coordinate the process of voluntary conversion to the metric system over a ten-year period. A brief history of the metric system in the United States is included. (DT)

  12. Metrication: What Can HRD Specialists Do?

    ERIC Educational Resources Information Center

    Short, Larry G.

    1978-01-01

    First discusses some features of the Metric Conversion Act which established federal support of metric system usage in the United States. Then covers the following: what HRD (Human Resources Development) specialists can do to assist their company managers during the conversion process; metric training strategies; and how to prepare for metric…

  13. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on the unit-operation and single-product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

  14. Candidate control design metrics for an agile fighter

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Bailey, Melvin L.; Ostroff, Aaron J.

    1991-01-01

    Success in the fighter combat environment of the future will certainly demand increasing capability from aircraft technology. These advanced capabilities in the form of superagility and supermaneuverability will require special design techniques which translate advanced air combat maneuvering requirements into design criteria. Control design metrics can provide some of these techniques for the control designer. This study presents an overview of control design metrics and investigates metrics for advanced fighter agility. The objectives of various metric users, such as airframe designers and pilots, are differentiated from the objectives of the control designer. Using an advanced fighter model, metric values are documented over a portion of the flight envelope through piloted simulation. These metric values provide a baseline against which future control system improvements can be compared and against which a control design methodology can be developed. Agility is measured for axial, pitch, and roll axes. Axial metrics highlight acceleration and deceleration capabilities under different flight loads and include specific excess power measurements to characterize energy maneuverability. Pitch metrics cover both body-axis and wind-axis pitch rates and accelerations. Included in pitch metrics are nose-pointing metrics which highlight displacement capability between the nose and the velocity vector. Roll metrics (or torsion metrics) focus on rotational capability about the wind axis.
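    As a hedged aside on the energy-maneuverability measure named above: specific excess power is conventionally P_s = V(T - D)/W, sketched here in SI units (the abstract does not describe the paper's exact instrumentation, and the function name and sample numbers are ours):

    ```python
    def specific_excess_power(thrust_n, drag_n, velocity_ms, weight_n):
        """Specific excess power P_s = V (T - D) / W, in m/s.

        Interpreted as the steady climb rate (or, equivalently, the
        acceleration capability) available from surplus thrust at the
        current flight condition; P_s = 0 marks the sustained-turn limit."""
        return velocity_ms * (thrust_n - drag_n) / weight_n
    ```

    Mapping P_s contours over the flight envelope is the classical way axial agility and energy maneuverability are compared between aircraft.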

  15. A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    In this paper, we address the problem of technical analysis information fusion for improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The presented predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization, where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks shows that the presented ensemble-based technical-indicator fusion system significantly improves forecasting accuracy in comparison with a single NN. It also outperforms the classical neural network trained with index-level lagged values and the NN trained with stationary wavelet transform details and approximation coefficients. As a result, technical information fusion in an NN ensemble architecture helps improve prediction accuracy.

  16. One network metric datastore to track them all: the OSG network metric service

    NASA Astrophysics Data System (ADS)

    Quick, Robert; Babik, Marian; Fajardo, Edgar M.; Gross, Kyle; Hayashi, Soichi; Krenz, Marina; Lee, Thomas; McKee, Shawn; Pipes, Christopher; Teige, Scott

    2017-10-01

    The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012, OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing, and providing network metrics to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. In September of 2015, this service was deployed into the OSG production environment. We will report on the creation, implementation, testing, and deployment of the OSG Networking Service. All aspects of implementation will be reviewed: from organizing the deployment of perfSONAR toolkits within OSG and its partners, to the challenges of orchestrating regular testing between sites, to reliably gathering the resulting network metrics and making them available for users, virtual organizations, and higher-level services. In particular, several higher-level services were developed to bring the OSG network service to its full potential. These include a web-based mesh configuration system, which allows central scheduling and management of all the network tests performed by the instances; a set of probes to continually gather metrics from the remote instances and publish them to different sources; a central network datastore (esmond), which provides interfaces to access the network monitoring information in close to real time and historically (up to a year), giving the state of the tests; and a perfSONAR infrastructure monitor system, ensuring the current perfSONAR instances are correctly configured and operating as intended. We will also describe the challenges we encountered in ongoing operations of the network service and how we have evolved our procedures to address those challenges. Finally, we will describe our plans for future extensions and improvements to the service.

  17. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper will investigate energy-efficiency for various real-world industrial computed-tomography reconstruction algorithms, both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
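    The three metrics named above have simple textbook forms; a minimal sketch under the usual definitions (the function names, units, and arguments are ours, not the paper's):

    ```python
    def performance_per_watt(work_units, runtime_s, energy_j):
        """Throughput divided by average power draw. Since watts are
        joules per second, (work/runtime) / (energy/runtime) reduces
        to work per joule."""
        return (work_units / runtime_s) / (energy_j / runtime_s)

    def energy_delay_product(energy_j, runtime_s):
        """EDP = energy x runtime; lower is better, so it penalizes
        implementations that are slow, power-hungry, or both."""
        return energy_j * runtime_s
    ```

    EDP is the stricter of the two: a speedup that costs proportionally more energy leaves performance-per-watt unchanged but still improves EDP, which is why papers often report both.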

  18. Going Metric...PAL (Programmed Assigned Learning).

    ERIC Educational Resources Information Center

    Wallace, Jesse D.

    This 41-page programed booklet is intended for use by students and adults. It introduces the metric units for length, area, volume, and temperature through a series of questions and answers. The advantages of the metric system over the English system are discussed. Conversion factors are introduced and several applications of the metric system in…

  19. Objective measurement of complex multimodal and multidimensional display formats: a common metric for predicting format effectiveness

    NASA Astrophysics Data System (ADS)

    Marshak, William P.; Darkow, David J.; Wesler, Mary M.; Fix, Edward L.

    2000-08-01

    Computer-based display designers have more sensory modes, and more dimensions within each sensory modality, with which to encode information in a user interface than ever before. This elaboration of information presentation has made measuring display/format effectiveness and predicting display/format performance extremely difficult. A multivariate method has been devised that isolates critical information, physically measures its signal strength, and compares it with other elements of the display, which act like background noise. This Common Metric relates signal-to-noise ratios (SNRs) within each stimulus dimension, then combines SNRs among display modes, dimensions, and cognitive factors to predict display format effectiveness. Examples with their Common Metric assessment and validation against performance will be presented, along with the derivation of the metric. Implications of the Common Metric for display design and evaluation will be discussed.
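    The abstract does not state how per-dimension SNRs are combined, so the combination rule below (summing linear power ratios) is only an illustrative assumption; the per-dimension SNR itself is the standard decibel definition:

    ```python
    import math

    def snr_db(signal_power, noise_power):
        """Per-dimension signal-to-noise ratio in decibels."""
        return 10.0 * math.log10(signal_power / noise_power)

    def combined_snr_db(snrs_db):
        """Combine per-dimension SNRs by summing their linear power ratios.
        (An assumed rule for illustration -- the abstract does not give the actual one.)"""
        return 10.0 * math.log10(sum(10.0 ** (s / 10.0) for s in snrs_db))

    print(round(snr_db(100.0, 10.0), 1))            # 10.0
    print(round(combined_snr_db([10.0, 10.0]), 2))  # 13.01
    ```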

  20. On general (α,β)-metrics of Landsberg type

    NASA Astrophysics Data System (ADS)

    Zohrehvand, M.; Maleki, H.

    2016-05-01

    In this paper, we study a class of Finsler metrics defined by a Riemannian metric α and a one-form β, called general (α,β)-metrics. We prove that, under a certain condition, every Landsberg general (α,β)-metric is a Berwald metric. This shows that the hunt for a unicorn, one of the longest-standing open problems in Finsler geometry, cannot succeed in the class of general (α,β)-metrics.

  1. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
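    The debate is easy to reproduce: the geometric mean rewards one artificially fast query far more than the arithmetic mean does. A minimal sketch with invented query times (not TPC-D data):

    ```python
    import math

    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def geometric_mean(xs):
        return math.exp(sum(math.log(x) for x in xs) / len(xs))

    uniform = [100.0, 100.0, 100.0, 100.0]  # query times in seconds
    skewed  = [1.0, 100.0, 100.0, 199.0]    # same total time, one outlier query

    # Arithmetic means agree, so total work is identical...
    print(arithmetic_mean(uniform), arithmetic_mean(skewed))  # 100.0 100.0
    # ...but the geometric mean drops sharply for the skewed run,
    # rewarding a vendor who optimizes a single query at the expense of another.
    print(geometric_mean(skewed) < 40.0)  # True
    ```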

  2. 32 CFR 37.895 - How is the final performance report to be sent to the Defense Technical Information Center?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... How is the final performance report to be sent to the Defense Technical Information Center? (a... 32 National Defense 1 2014-07-01 2014-07-01 false How is the final performance report to be sent to the Defense Technical Information Center? 37.895 Section 37.895 National Defense Department of...

  3. 32 CFR 37.895 - How is the final performance report to be sent to the Defense Technical Information Center?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... How is the final performance report to be sent to the Defense Technical Information Center? (a... 32 National Defense 1 2011-07-01 2011-07-01 false How is the final performance report to be sent to the Defense Technical Information Center? 37.895 Section 37.895 National Defense Department of...

  4. 32 CFR 37.895 - How is the final performance report to be sent to the Defense Technical Information Center?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... How is the final performance report to be sent to the Defense Technical Information Center? (a... 32 National Defense 1 2013-07-01 2013-07-01 false How is the final performance report to be sent to the Defense Technical Information Center? 37.895 Section 37.895 National Defense Department of...

  5. 32 CFR 37.895 - How is the final performance report to be sent to the Defense Technical Information Center?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false How is the final performance report to be sent to the Defense Technical Information Center? 37.895 Section 37.895 National Defense Department of... How is the final performance report to be sent to the Defense Technical Information Center? (a...

  6. Development of NASA Technical Standards Program Relative to Enhancing Engineering Capabilities

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.

    2003-01-01

    The enhancement of engineering capabilities is an important aspect of any organization, especially those engaged in aerospace development activities. Technical standards are one of the key elements of this endeavor. The NASA Technical Standards Program was formed in 1997 in response to the NASA Administrator's directive to develop an Agencywide Technical Standards Program. The Program's principal objective involved converting Center-unique technical standards into Agencywide standards and adopting/endorsing non-Government technical standards in lieu of Government standards. In the process of these actions, the potential for further enhancement of the Agency's engineering capabilities was noted, particularly the value of Agencywide access to the necessary full-text technical standards, standards update notifications, and integration of lessons learned with technical standards, all available to the user from one Website. This was accomplished and is now being enhanced based on feedback from the Agency's engineering staff and supporting contractors. This paper addresses the development experiences with the NASA Technical Standards Program and the enhancement of the Agency's engineering capabilities provided by the Program's products. Metrics are provided on significant aspects of the Program.

  7. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem. Existing methods learn a common Mahalanobis distance metric using data collected from different cameras and then exploit the learned metric for identifying people in images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings. At the same time, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned with joint regularization to alleviate over-fitting. Furthermore, by extending this approach, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
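    The building block here is the Mahalanobis distance under a learned positive-definite matrix M; the paper learns one such M per camera pair, jointly regularized. A minimal sketch of the distance itself (toy vectors, not the paper's learned metrics):

    ```python
    import math

    def mahalanobis(x, y, M):
        """Distance sqrt((x-y)^T M (x-y)) for a positive-definite metric matrix M."""
        d = [a - b for a, b in zip(x, y)]
        n = len(d)
        q = sum(d[i] * M[i][j] * d[j] for i in range(n) for j in range(n))
        return math.sqrt(q)

    # With M = identity, the learned metric reduces to plain Euclidean distance.
    print(mahalanobis([0.0, 0.0], [3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]]))  # 5.0
    # A non-identity M re-weights feature dimensions, e.g. down-weighting
    # illumination-sensitive features for a particular camera pair.
    print(mahalanobis([0.0, 0.0], [3.0, 4.0], [[0.1, 0.0], [0.0, 1.0]]) < 5.0)  # True
    ```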

  8. Simulation fails to replicate stress in trainees performing a technical procedure in the clinical environment.

    PubMed

    Baker, B G; Bhalla, A; Doleman, B; Yarnold, E; Simons, S; Lund, J N; Williams, J P

    2017-01-01

    Simulation-based training (SBT) has become an increasingly important method by which doctors learn. Stress has an impact upon learning, performance, and technical and non-technical skills. However, there are currently no studies that compare stress in the clinical and simulated environments. We aimed to compare objective (heart rate variability, HRV) and subjective (State-Trait Anxiety Inventory, STAI) measures of stress in theatre with those in a simulated environment. HRV recordings were obtained from eight anesthetic trainees performing an uncomplicated rapid sequence induction, at pre-determined procedural steps, using a wireless Polar RS800CX monitor in an emergency theatre setting. This was repeated in the simulated environment. Participants completed an STAI before and after the procedure. Eight trainees completed the study. The theatre environment caused an increase in objective stress vs. baseline (p = .004). There was no significant difference in average objective stress levels across all time points (p = .20) between environments. However, there was a significant interaction between the variables of objective stress and environment (p = .045). There was no significant difference in subjective stress (p = .27) between environments. Simulation was unable to accurately replicate the stress of the technical procedure. This is the first study to compare stress during SBT with the theatre environment, and it has implications for the assessment of simulated environments for use in examinations, rating of technical and non-technical skills, and stress management training.

  9. Defining quality metrics and improving safety and outcome in allergy care.

    PubMed

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits per year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits (p = 0.005) in comparison to private practices. Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  10. Characterization of Days Based On Analysis of National Airspace System Performance Metrics

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.; Musaffar, Bassam; Meyn, Larry A.; Quon, Leighton K.

    2006-01-01

    Days of operations in the National Airspace System can be described in terms of traffic demand, runway conditions, equipment outages, and surface and enroute weather conditions. These causes manifest themselves in departure delays, arrival delays, enroute delays, and traffic flow management delays. Traffic flow management initiatives such as ground stops, ground delay programs, miles-in-trail restrictions, rerouting, and airborne holding are imposed to balance the air traffic demand against the available capacity. In order to maintain the operational efficiency of the National Airspace System, the Federal Aviation Administration (FAA) maintains delay and other statistics in the Air Traffic Operations Network (OPSNET) and the Aviation System Performance Metrics (ASPM) databases. OPSNET data include reportable delays of fifteen minutes or more experienced by Instrument Flight Rule (IFR) flights. Numbers of aircraft affected by departure delays, enroute delays, arrival delays, and traffic flow delays are recorded in the OPSNET data. ASPM data consist of the number of actual departures, number of canceled departures, percentage of on-time departures, percentage of on-time gate arrivals, taxi-out delays, taxi-in delays, gate delays, arrival delays, and block delays. Surface conditions at the major U.S. airports are classified in terms of Instrument Meteorological Conditions (IMC) and Visual Meteorological Conditions (VMC) as a function of the time of day in the ASPM data. The main objective of this paper is to use OPSNET and ASPM data to classify the days in the datasets into a few distinct groups, where each group is separated from the other groups in terms of a distance metric.
The motivations for classifying the days are two-fold: 1) to enable selection of days of traffic with particular operational characteristics for concept evaluation using system-wide simulation systems such as the National Aeronautics and Space Administration's Airspace Concepts Evaluation
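    Grouping days by a distance metric over delay features can be sketched as nearest-centroid assignment (the core step of k-means). The feature vectors and centroids below are hypothetical placeholders, not values from the OPSNET/ASPM datasets:

    ```python
    import math

    def euclid(a, b):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def assign(days, centroids):
        """Assign each day's normalized delay-feature vector to its nearest centroid."""
        return [min(range(len(centroids)), key=lambda k: euclid(d, centroids[k]))
                for d in days]

    # Hypothetical features per day: [departure delay, arrival delay, enroute delay],
    # each normalized to [0, 1].
    days = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.7], [0.2, 0.1, 0.2]]
    centroids = [[0.15, 0.15, 0.15],   # "low-delay" day group
                 [0.85, 0.80, 0.75]]   # "high-delay" day group
    print(assign(days, centroids))  # [0, 1, 0]
    ```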

  11. Operating room metrics score card-creating a prototype for individualized feedback.

    PubMed

    Gabriel, Rodney A; Gimlich, Robert; Ehrenfeld, Jesse M; Urman, Richard D

    2014-11-01

    The balance between reducing costs and inefficiencies and maintaining patient safety is a challenging problem faced in the operating room suite. An ongoing challenge is the creation of effective strategies that reduce these inefficiencies and provide real-time personalized metrics and electronic feedback to anesthesia practitioners. We created a sample report card structure utilizing existing informatics systems. This system allows us to gather and analyze operating room metrics for each anesthesia provider and offer personalized feedback. To accomplish this task, we identified key metrics that represented time and quality parameters. We collected these data for individual anesthesiologists and compared performance to the overall group average. Data were presented as an electronic scorecard and made available to individual clinicians on a real-time basis in an effort to provide effective feedback. These metrics included the number of cancelled cases, average turnover time, average times to operating room ready and patient in room, number of delayed first-case starts, average induction time, average extubation time, average time from recovery room arrival to discharge, performance feedback from other providers, compliance with various protocols, and total anesthetic costs. The concept we propose can easily be generalized to a variety of operating room settings, types of facilities, and OR health care professionals. Such a scorecard can be created using content that is important for operating room efficiency, research, and practice improvement for anesthesia providers.
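    The core of such a scorecard is comparing an individual provider's average for each metric against the group average. A minimal sketch (the metric name and values are hypothetical placeholders, not from the paper's system):

    ```python
    import statistics

    def scorecard_entry(provider_values, group_values, metric_name):
        """One scorecard row: a provider's average for a metric vs. the group average."""
        p = statistics.mean(provider_values)
        g = statistics.mean(group_values)
        return {"metric": metric_name, "provider_avg": p, "group_avg": g, "delta": p - g}

    # Hypothetical turnover times (minutes) for one provider vs. the whole group.
    row = scorecard_entry([32.0, 28.0], [25.0, 25.0], "turnover_time_min")
    print(row["delta"])  # 5.0  (provider runs 5 min slower than the group average)
    ```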

  12. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    NASA Astrophysics Data System (ADS)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their capability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, and water system planning. Therefore, in order to be useful in many practical applications, satellite precipitation products should characterize spatial distribution with high quality. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill at capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric is derived in two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measurement defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
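    Step 2 of the metric rests on mutual information defined via Shannon entropy. A minimal sketch of that step alone, assuming the EOF patterns have already been extracted and discretized into bins (the sequences below are toy data):

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy H(X), in bits, of a discrete sequence."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def mutual_information(x, y):
        """I(X;Y) = H(X) + H(Y) - H(X,Y) between two discretized spatial patterns."""
        return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

    # Identical discretized patterns share all their information: I(X;X) = H(X).
    a = [0, 0, 1, 1, 2, 2]
    print(math.isclose(mutual_information(a, a), entropy(a)))  # True
    ```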

  13. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relations between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter that can affect the final result is the choice of the metric used to rank genes. Applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate, and computational load was established: the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-to-Noise ratio, and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-to-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler test and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
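    One of the four top-performing metrics, the absolute signal-to-noise ratio, is simple to state: |μ₁ − μ₂| / (σ₁ + σ₂) per gene, with genes then ranked by the value. A minimal sketch on invented two-condition expression values (gene names and numbers are hypothetical):

    ```python
    import statistics

    def abs_signal_to_noise(group1, group2):
        """Absolute signal-to-noise ranking metric: |mean1 - mean2| / (sd1 + sd2)."""
        m1, m2 = statistics.mean(group1), statistics.mean(group2)
        s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
        return abs(m1 - m2) / (s1 + s2)

    # Hypothetical expression values per gene under two conditions.
    genes = {
        "geneA": ([1.0, 2.0, 3.0], [5.0, 6.0, 7.0]),    # clear, consistent shift
        "geneB": ([1.0, 5.0, 9.0], [2.0, 6.0, 10.0]),   # small shift, high variance
    }
    ranking = sorted(genes, key=lambda g: abs_signal_to_noise(*genes[g]), reverse=True)
    print(ranking)  # ['geneA', 'geneB']
    ```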

  14. A Comparison of Physical and Technical Performance Profiles Between Successful and Less-Successful Professional Rugby League Teams.

    PubMed

    Kempton, Thomas; Sirotic, Anita C; Coutts, Aaron J

    2017-04-01

    To examine differences in physical and technical performance profiles using a large sample of match observations drawn from successful and less-successful professional rugby league teams. Match activity profiles were collected using global positioning satellite (GPS) technology from 29 players from a successful rugby league team during 24 games and 25 players from a less-successful team during 18 games throughout 2 separate competition seasons. Technical performance data were obtained from a commercial statistics provider. A progressive magnitude-based statistical approach was used to compare differences in physical and technical performance variables between the reference teams. There were no clear differences in playing time, absolute and relative total distances, or low-speed running distances between successful and less-successful teams. The successful team possibly to very likely had lower higher-speed running demands and likely had fewer physical collisions than the less-successful team, although they likely to most likely demonstrated more accelerations and decelerations and likely had higher average metabolic power. The successful team very likely gained more territory in attack, very likely had more possessions, and likely committed fewer errors. In contrast, the less-successful team was likely required to attempt more tackles, most likely missed more tackles, and very likely had a lower effective tackle percentage. In the current study, successful match performance was not contingent on higher match running outputs or more physical collisions; rather, proficiency in technical performance components better differentiated successful and less-successful teams.

  15. A scalable kernel-based semisupervised metric learning algorithm with out-of-sample generalization ability.

    PubMed

    Yeung, Dit-Yan; Chang, Hong; Dai, Guang

    2008-11-01

    In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
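    One common way to realize the low-rank kernel approximation the abstract describes is the Nyström method, sketched below with an RBF kernel (this is an illustrative stand-in, not necessarily the letter's exact scheme): the feature map built from a few landmark points gives Z with Z·Zᵀ ≈ K, and it applies directly to new points, which is what enables out-of-sample generalization.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def nystrom_features(X, landmarks, gamma=0.5):
        """Low-rank feature map from m << n landmarks: Z @ Z.T approximates K."""
        C = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
        W = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
        U, s, _ = np.linalg.svd(W)                   # eigendecomposition of SPD W
        return (C @ U) / np.sqrt(np.maximum(s, 1e-12))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 3))
    Z = nystrom_features(X, X[:10])  # 10 landmarks -> rank-10 approximation of K
    print(Z.shape)                   # (50, 10)
    ```

    Out-of-sample points are mapped the same way, by calling `nystrom_features` on the new rows with the same landmarks.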

  16. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
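    Whatever metric w is chosen, the fitting step reduces to a linear system for the expansion coefficients: G c = v, with G_ab = (a|w|b) and v_a = (a|w|ρ). A schematic sketch with a toy symmetric positive-definite matrix (real G and v come from a quantum-chemistry integral engine; in a local metric G is sparse, which is what enables linear scaling):

    ```python
    import numpy as np

    def fit_coefficients(G, v):
        """Solve the density-fitting equations G c = v for the fitting coefficients c."""
        return np.linalg.solve(G, v)

    # Toy 2x2 metric matrix and right-hand side, purely illustrative.
    G = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    v = np.array([1.0, 1.0])
    c = fit_coefficients(G, v)
    print(np.allclose(G @ c, v))  # True
    ```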

  17. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  18. Boundary term in metric f ( R) gravity: field equations in the metric formalism

    NASA Astrophysics Data System (ADS)

    Guarnizo, Alejandro; Castañeda, Leonardo; Tejeiro, Juan M.

    2010-11-01

    The main goal of this paper is to obtain, in a straightforward form, the field equations in metric f ( R) gravity, using elementary variational principles and adding a boundary term to the action, instead of the usual treatment via an equivalent scalar-tensor approach. We start with a brief review of the Einstein-Hilbert action, together with the Gibbons-York-Hawking boundary term, which is mentioned in some literature but is generally omitted. Next we present in detail the field equations in metric f ( R) gravity, including the discussion about boundaries, and we compare with the Gibbons-York-Hawking term in General Relativity. We note that this boundary term is necessary in order to have a well-defined extremal action principle under metric variation.
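    For reference, the field equations the abstract refers to are usually written (in one common convention, with \(\kappa = 8\pi G\); a sketch, not a quotation from the paper) as

    \[
    f'(R)\, R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
    + \left( g_{\mu\nu}\,\Box - \nabla_{\mu}\nabla_{\nu} \right) f'(R)
    = \kappa\, T_{\mu\nu},
    \]

    which reduce to the Einstein equations when \(f(R) = R\), since then \(f'(R) = 1\) and the third term vanishes.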

  19. Pulsed Lidar Performance/Technical Maturity Assessment

    NASA Technical Reports Server (NTRS)

    Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod

    2004-01-01

    This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment' funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report are in support of the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a) was performed by NCAR in Boulder, Colorado, to analyze the lidar system and determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA, which were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s. Task 1

  20. Traveler oriented traffic performance metrics using real time traffic data from the Midtown-in-Motion (MIM) project in Manhattan, NY.

    DOT National Transportation Integrated Search

    2013-10-01

    In a congested urban street network the average traffic speed is an inadequate metric for measuring : speed changes that drivers can perceive from changes in traffic control strategies. : A driver oriented metric is needed. Stop frequency distrib...