Sample records for engineering performance metrics

  1. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  2. Best Practices Handbook: Traffic Engineering in Range Networks

    DTIC Science & Technology

    2016-03-01

    units of measurement. Measurement Methodology - A repeatable measurement technique used to derive one or more metrics of interest. Network ... Performance measures - Metrics that provide quantitative or qualitative measures of the performance of systems or subsystems of interest. Performance Metric

  3. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
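
    The Kalman-filter branch of this kind of selection metric lends itself to a compact numerical sketch. The fragment below is a minimal illustration, not the authors' implementation: it assumes a linear model x_{k+1} = A x_k + w, y_k = C x_k + v, scores a candidate suite by the trace of the steady-state updated error covariance over the health-parameter states, and searches exhaustively over optional sensors added to a baseline suite. The function names and all matrices are hypothetical.

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

def ssee_metric(A, C_full, Q, R_full, sensor_idx, health_idx):
    """Theoretical sum of squared estimation errors for one candidate suite:
    trace of the steady-state Kalman error covariance restricted to the
    health-parameter states (illustrative, not the NASA formulation)."""
    C = C_full[sensor_idx, :]                    # measurement rows for the chosen sensors
    R = R_full[np.ix_(sensor_idx, sensor_idx)]   # matching measurement-noise covariance
    # Steady-state a priori covariance from the discrete algebraic Riccati equation.
    P = solve_discrete_are(A.T, C.T, Q, R)
    # A posteriori covariance after the measurement update.
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    P_post = (np.eye(A.shape[0]) - K @ C) @ P
    return np.trace(P_post[np.ix_(health_idx, health_idx)])

def best_suite(A, C_full, Q, R_full, baseline, optional, n_extra, health_idx):
    """Exhaustive search over n_extra optional sensors added to a baseline suite."""
    scores = {}
    for extra in itertools.combinations(optional, n_extra):
        suite = sorted(baseline + list(extra))
        scores[tuple(suite)] = ssee_metric(A, C_full, Q, R_full, suite, health_idx)
    return min(scores, key=scores.get), scores
```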

  4. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  5. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

    3.5 Performance Metrics ... 3.6 Experimental ... consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then ... the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change

  6. Development and Application of an Integrated Approach toward NASA Airspace Systems Research

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Fong, Robert K.; Abramson, Paul D.; Koenke, Ed

    2008-01-01

    The National Aeronautics and Space Administration's (NASA) Airspace Systems Program is contributing air traffic management research in support of the 2025 Next Generation Air Transportation System (NextGen). Contributions support research and development needs provided by the interagency Joint Planning and Development Office (JPDO). These needs generally call for integrated technical solutions that improve system-level performance and work effectively across multiple domains and planning time horizons. In response, the Airspace Systems Program is pursuing an integrated research approach and has adapted systems engineering best practices for application in a research environment. Systems engineering methods aim to enable researchers to methodically compare different technical approaches, consider system-level performance, and develop compatible solutions. Systems engineering activities are performed iteratively as the research matures. Products of this approach include a demand and needs analysis, system-level descriptions focusing on NASA research contributions, system assessment and design studies, and common system-level metrics, scenarios, and assumptions. Results from the first systems engineering iteration include a preliminary demand and needs analysis; a functional modeling tool; and initial system-level metrics, scenario characteristics, and assumptions. Demand and needs analysis results suggest that several advanced concepts can mitigate demand/capacity imbalances for NextGen, but fall short of enabling three-times current-day capacity at the nation's busiest airports and airspace. Current activities are focusing on standardizing metrics, scenarios, and assumptions, conducting system-level performance assessments of integrated research solutions, and exploring key system design interfaces.

  7. Shipboard Electrical System Modeling for Early-Stage Design Space Exploration

    DTIC Science & Technology

    2013-04-01

    method is demonstrated in several system studies. I. INTRODUCTION The integrated engineering plant (IEP) of an electric warship can be viewed as a...which it must operate [2], [4]. The desired IEP design should be dependable [5]. The operability metric has previously been defined as a measure of...the performance of an IEP during a specific scenario [2]. Dependability metrics have been derived from the operability metric as measures of the IEP

  8. Propulsion Technology Lifecycle Operational Analysis

    NASA Technical Reports Server (NTRS)

    Robinson, John W.; Rhodes, Russell E.

    2010-01-01

    The paper presents the results of a focused effort performed by the members of the Space Propulsion Synergy Team (SPST) Functional Requirements Sub-team to develop propulsion data to support the Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet application to analyze the impact of technology decisions at a system-of-systems level. Results are summarized in an Excel workbook we call the Technology Tool Box (TTB). The TTB provides data for technology performance, operations, and programmatic parameters in the form of a library of technical information to support analysis tools and/or models. The lifecycle of technologies can be analyzed from this data, which is particularly useful for system operations involving long-running missions. The propulsion technologies in this paper are listed against Chemical Rocket Engines in a Work Breakdown Structure (WBS) format. The overall effort involved establishing four elements: (1) a general-purpose Functional System Breakdown Structure (FSBS); (2) operational requirements for rocket engines; (3) technology metric values associated with operating systems; and (4) a Work Breakdown Structure (WBS) of Chemical Rocket Engines. The list of Chemical Rocket Engines identified in the WBS is by no means complete. It is planned to update the TTB with a more complete list of available United States (US) Chemical Rocket Engines and to add foreign rocket engines available to NASA and the aerospace industry to the WBS. The Operational Technology Metric Values were derived by the SPST Sub-team in the form of the TTB, establishing a database to help users evaluate the technology level of each Chemical Rocket Engine. The Technology Metric Values will serve as a guide to help determine which rocket engine to invest technology money in for future development.

  9. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
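
    As an illustration of the volume computation, the sketch below numerically integrates a surface defined over a false-positive-rate/latency grid with the trapezoidal rule. The grids, surface arrays, and the random placeholder data are assumptions for illustration, not the paper's actual surfaces or results.

```python
import numpy as np

def volume_under_surface(fpr, latency, surface):
    """Trapezoidal integration of a surface z(latency, fpr) over the
    FPR-latency grid, illustrating the 'volume under a 3D ROC surface' idea."""
    per_latency = np.trapz(surface, fpr, axis=1)   # integrate over FPR for each latency slice
    return np.trapz(per_latency, latency)          # then integrate over latency

# Hypothetical grids and placeholder surfaces: detection (TPR) and classification (CCR).
fpr = np.linspace(0.0, 1.0, 101)
latency = np.linspace(0.0, 10.0, 51)
tpr_surface = np.random.rand(latency.size, fpr.size)   # placeholder data only
ccr_surface = np.random.rand(latency.size, fpr.size)   # placeholder data only

v_detect = volume_under_surface(fpr, latency, tpr_surface)
v_between = volume_under_surface(fpr, latency, tpr_surface - ccr_surface)
```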

  10. ACCESS - A Science and Engineering Assessment of Space Coronagraph Concepts for the Direct Imaging and Spectroscopy of Exoplanetary Systems

    NASA Technical Reports Server (NTRS)

    Trauger, John

    2008-01-01

    Topics include an overview, science objectives, study objectives, coronagraph types, metrics, ACCESS observatory, laboratory validations, and summary. Individual slides examine ACCESS engineering approach, ACCESS gamut of coronagraph types, coronagraph metrics, ACCESS Discovery Space, coronagraph optical layout, wavefront control on the "level playing field", deformable mirror development for HCIT, laboratory testbed demonstrations, high contrast imaging with the HCIT, laboratory coronagraph contrast and stability, model validation and performance predictions, HCIT coronagraph optical layout, Lyot coronagraph on the HCIT, pupil mapping (PIAA), shaped pupils, and vortex phase mask experiments on the HCIT.

  11. Evaluating software development characteristics: Assessment of software measures in the Software Engineering Laboratory. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1981-01-01

    Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievements, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.

  12. Driving photomask supplier quality through automation

    NASA Astrophysics Data System (ADS)

    Russell, Drew; Espenscheid, Andrew

    2007-10-01

    In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.
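
    The kind of conformance check such a system automates can be sketched as below; the metric names, specification limits, and helper names are invented for illustration and are not Freescale's actual rules or code.

```python
from dataclasses import dataclass

@dataclass
class Limit:
    lower: float
    upper: float

# Hypothetical CofC metrics and limits (illustrative only).
SPEC_LIMITS = {
    "cd_mean_error_nm": Limit(-3.0, 3.0),
    "registration_x_nm": Limit(-10.0, 10.0),
    "defect_count": Limit(0, 5),
}

def validate_cofc(metrics: dict) -> list[str]:
    """Return failure messages for collected metrics outside their limits,
    e.g. to feed an engineering alert e-mail."""
    failures = []
    for name, value in metrics.items():
        limit = SPEC_LIMITS.get(name)
        if limit and not (limit.lower <= value <= limit.upper):
            failures.append(f"{name}={value} outside [{limit.lower}, {limit.upper}]")
    return failures

alerts = validate_cofc({"cd_mean_error_nm": 4.2, "registration_x_nm": 1.1, "defect_count": 2})
# alerts -> ["cd_mean_error_nm=4.2 outside [-3.0, 3.0]"]
```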

  13. Development of a turbojet engine gearbox test rig for prognostics and health management

    NASA Astrophysics Data System (ADS)

    Rezaei, Aida; Dadouche, Azzedine

    2012-11-01

    Aircraft engine gearboxes represent one of the many critical systems/elements that require special attention for longer and safer operation. Reactive maintenance strategies are unsuitable as they usually imply higher repair costs when compared to condition based maintenance. This paper discusses the main prognostics and health management (PHM) approaches, describes a newly designed gearbox experimental facility and analyses preliminary data for gear prognosis. The test rig is designed to provide full capabilities of performing controlled experiments suitable for developing a reliable diagnostic and prognostic system. The rig is based on the accessory gearbox of the GE J85 turbojet engine, which has been slightly modified and reconfigured to replicate real operating conditions such as speeds and loads. Defect to failure tests (DTFT) have been run to evaluate the performance of the rig as well as to assess prognostic metrics extracted from sensors installed on the gearbox casing (vibration and acoustic). The paper also details the main components of the rig and describes the various challenges encountered. Successful DTFT results were obtained during an idle engine performance test and prognostic metrics associated with the sensor suite were evaluated and discussed.

  14. Flow Control Opportunities for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Cutley, Dennis E.

    2008-01-01

    The advancement of technology in gas turbine engines used for aerospace propulsion has been focused on achieving significant performance improvements. At the system level, these improvements are expressed in metrics such as engine thrust-to-weight ratio and system and component efficiencies. The overall goals are directed at reducing engine weight, fuel burn, emissions, and noise. At a component level, these goals translate into aggressive designs of each engine component well beyond the state of the art.

  15. Cavity Coupled Aeroramp Injector Combustion Study

    DTIC Science & Technology

    2009-08-01

    Lin 5 Taitech Inc., Beavercreek, OH, 45430 The difficulties with fueling of supersonic combustion ramjet engines with hydrocarbon-based fuels...combustor to not force the pre-combustion shock train out of the isolator and, in a full engine with inlet, cause an inlet unstart and likely...metric used to quantify engine performance is the combustion efficiency. Figure 9 shows the comparison of the combustion efficiency as a function of

  16. Graphical CONOPS Prototype to Demonstrate Emerging Methods, Processes, and Tools at ARDEC

    DTIC Science & Technology

    2013-07-17

    Concept Engineering Framework (ICEF), an extensive literature review was conducted to discover metrics that exist for evaluating concept engineering ... language to ICEF to SysML ... Table 5 Artifact metrics ... Table 6 Collaboration metrics

  17. Parametric Cost Analysis: A Design Function

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1989-01-01

    Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
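
    A CER of this kind is often a power law fitted in log space. The sketch below fits cost = a * mass^b to hypothetical analogous-product data; the numbers are invented purely to show the computation.

```python
import numpy as np

# Hypothetical analogous-product data: mass (kg) and cost ($M).
mass = np.array([120.0, 250.0, 480.0, 900.0, 1500.0])
cost = np.array([14.0, 22.0, 35.0, 55.0, 80.0])

# Fit log(cost) = log(a) + b * log(mass), i.e. the power-law CER cost = a * mass**b.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def cer(m):
    """Estimated cost for a new system of mass m (same units as the fit data)."""
    return a * m ** b

print(f"CER: cost ~= {a:.2f} * mass^{b:.2f}; estimate for 600 kg: {cer(600.0):.1f} $M")
```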

  18. Opportunities for High-Value Bioblendstocks to Enable Advanced Light- and Heavy-Duty Engines: Insights from the Co-Optima Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John T

    Co-Optima research and analysis have identified fuel properties that enable advanced light-duty and heavy-duty engines. There are a large number of blendstocks readily derived from biomass that possess beneficial properties. Key research needs have been identified for performance, technology, economic, and environmental metrics.

  19. Operational modes, health, and status monitoring

    NASA Astrophysics Data System (ADS)

    Taljaard, Corrie

    2016-08-01

    System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.

  20. Automating Software Design Metrics.

    DTIC Science & Technology

    1984-02-01

    1.2 HISTORICAL PERSPECTIVE: High quality software is of interest to both the software engineering community and its users. As...contributions of many other software engineering efforts, most notably [MCC 77] and [Boe 83b], which have defined and refined a framework for quantifying...AUTOMATION OF DESIGN METRICS: Software metrics can be useful within the context of an integrated software engineering environment. The purpose of this

  1. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of the application and can assist in identifying possible improvements for users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. Amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count along with performance evaluation of file per process and single shared file accessed by all the processes for NASA workload using parameterized IOR benchmark.
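
    The RPC size-distribution report can be approximated by a simple bucketing pass over collected RPC sizes. The sketch below is illustrative only: it does not use the Performance Co-Pilot API, and the sample sizes are invented.

```python
from collections import Counter

# Hypothetical per-job RPC sizes in bytes gathered from client-side counters.
rpc_sizes = [4096, 8192, 8192, 1048576, 524288, 1048576, 4096, 1048576]

buckets = Counter()
for size in rpc_sizes:
    # Bucket by power-of-two size class between 4 KB and 1024 KB, as in the paper.
    kb = max(4, min(1024, size // 1024))
    bucket = 1 << (kb - 1).bit_length()   # round up to the next power of two
    buckets[bucket] += 1

total = sum(buckets.values())
print("RPC size distribution (KB : share of I/O RPCs)")
for kb in sorted(buckets):
    print(f"  {kb:5d} KB : {buckets[kb] / total:6.1%}")
```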

  2. 24th Annual Logistics Conference and Exhibition

    DTIC Science & Technology

    2008-03-13

    Mission / Financial • Beliefs • Organizational Cultures • Management and Control Systems • Agency Mission Statements • Process Metrics/Key Performance ... Panelists: COL Juan Arcocha, USA, Deputy Director for Logistics and Engineering, USNORTHCOM; MG John Basilica, Jr., ARNG, Director of Logistics, J4, National Guard Bureau

  3. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for the High Energy Physics applications, based on the real experiment software.

  4. Metrics for Small Engine Repair.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of small engine repair students, this instructional package is one of four for the transportation occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  5. Feeling lucky? Using search engines to assess perceptions of urban sustainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keirstead, James

    2009-02-15

    The sustainability of urban environments is an important issue at both local and international scales. Indicators are frequently used by decision-makers seeking to improve urban performance but these metrics can be dependent on sparse quantitative data. This paper explores the potential of an alternative approach, using an internet search engine to quickly gather qualitative data on the key attributes of cities. The method is applied to 21 world cities and the results indicate that, while the technique does shed light on direct and indirect aspects of sustainability, the validity of derived metrics as objective indicators of long-term sustainability is questionable. However, the method's ability to provide subjective short-term assessments is more promising and it could therefore play an important role in participatory policy exercises such as public consultations. A number of promising technical improvements to the method's performance are also highlighted.

  6. Approaches to Cycle Analysis and Performance Metrics

    NASA Technical Reports Server (NTRS)

    Parson, Daniel E.

    2003-01-01

    The following notes were prepared as part of an American Institute of Aeronautics and Astronautics (AIAA) sponsored short course entitled Air Breathing Pulse Detonation Engine (PDE) Technology. The course was presented in January of 2003, and again in July of 2004 at two different AIAA meetings. It was taught by seven instructors, each of whom provided information on particular areas of PDE research. These notes cover two areas. The first is titled Approaches to Cycle Analysis and Performance Metrics. Here, the various methods of cycle analysis are introduced. These range from algebraic, thermodynamic equations, to single and multi-dimensional Computational Fluid Dynamic (CFD) solutions. Also discussed are the various means by which performance is measured, and how these are applied in a device which is fundamentally unsteady. The second topic covered is titled PDE Hybrid Applications. Here the concept of coupling a PDE to a conventional turbomachinery based engine is explored. Motivation for such a configuration is provided in the form of potential thermodynamic benefits. This is accompanied by a discussion of challenges to the technology.

  7. A Correlation Between Quality Management Metrics and Technical Performance Measurement

    DTIC Science & Technology

    2007-03-01

    Engineering Working Group; SME Subject Matter Expert; SoS System of Systems; SPI Schedule Performance Index; SSEI System of Systems Engineering and ... and stated as such [Q, M, M&G]. The QMM equation is given by: QMM = 0.92 RQM + 0.67 EPM + 0.55 RKM + 1.86 PM, where RQM is the requirements management ... schedule. Now if corrective action is not taken, the project/task will be completed behind schedule and over budget. ... As well as the derived

  8. Improving Space Project Cost Estimating with Engineering Management Variables

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph W.; Roth, Axel (Technical Monitor)

    2001-01-01

    Current space project cost models attempt to predict space flight project cost via regression equations, which relate the cost of projects to technical performance metrics (e.g. weight, thrust, power, pointing accuracy, etc.). This paper examines the introduction of engineering management parameters to the set of explanatory variables. A number of specific engineering management variables are considered and exploratory regression analysis is performed to determine if there is statistical evidence for cost effects apart from technical aspects of the projects. It is concluded that there are other non-technical effects at work and that further research is warranted to determine if it can be shown that these cost effects are definitely related to engineering management.

  9. Automated and comprehensive link engineering supporting branched, ring, and mesh network topologies

    NASA Astrophysics Data System (ADS)

    Farina, J.; Khomchenko, D.; Yevseyenko, D.; Meester, J.; Richter, A.

    2016-02-01

    Link design, while relatively easy in the past, can become quite cumbersome with complex channel plans and equipment configurations. The task of designing optical transport systems and selecting equipment is often performed by an applications or sales engineer using simple tools, such as custom Excel spreadsheets. Eventually, every individual has their own version of the spreadsheet as well as their own methodology for building the network. This approach becomes unmanageable very quickly and leads to mistakes, bending of the engineering rules and installations that do not perform as expected. We demonstrate a comprehensive planning environment, which offers an efficient approach to unify, control and expedite the design process by controlling libraries of equipment and engineering methodologies, automating the process and providing the analysis tools necessary to predict system performance throughout the system and for all channels. In addition to the placement of EDFAs and DCEs, performance analysis metrics are provided at every step of the way. Metrics that can be tracked include power, CD and OSNR, SPM, XPM, FWM and SBS. Automated routine steps assist in design aspects such as equalization, padding and gain setting for EDFAs, the placement of ROADMs and transceivers, and creating regeneration points. DWDM networks consisting of a large number of nodes and repeater huts, interconnected in linear, branched, mesh and ring network topologies, can be designed much faster when compared with conventional design methods. Using flexible templates for all major optical components, our technology-agnostic planning approach supports the constant advances in optical communications.

  10. Complex systems in metabolic engineering.

    PubMed

    Winkler, James D; Erickson, Keesha; Choudhury, Alaksh; Halweg-Edwards, Andrea L; Gill, Ryan T

    2015-12-01

    Metabolic engineers manipulate intricate biological networks to build efficient biological machines. The inherent complexity of this task, derived from the extensive and often unknown interconnectivity between and within these networks, often prevents researchers from achieving desired performance. Other fields have developed methods to tackle the issue of complexity for their unique subset of engineering problems, but to date, there has not been extensive and comprehensive examination of how metabolic engineers use existing tools to ameliorate this effect on their own research projects. In this review, we examine how complexity affects engineering at the protein, pathway, and genome levels within an organism, and the tools for handling these issues to achieve high-performing strain designs. Quantitative complexity metrics and their applications to metabolic engineering versus traditional engineering fields are also discussed. We conclude by predicting how metabolic engineering practices may advance in light of an explicit consideration of design complexity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Foresters' Metric Conversions program (version 1.0). [Computer program

    Treesearch

    Jefferson A. Palmer

    1999-01-01

    The conversion of scientific measurements has become commonplace in the fields of engineering, research, and forestry. Foresters' Metric Conversions is a Windows-based computer program that quickly converts user-defined measurements from English to metric and from metric to English. Foresters' Metric Conversions was derived from the publication "Metric...

  12. Applying the Goal-Question-Indicator-Metric (GQIM) Method to Perform Military Situational Analysis

    DTIC Science & Technology

    2016-05-11


  13. A novel critical infrastructure resilience assessment approach using dynamic Bayesian networks

    NASA Astrophysics Data System (ADS)

    Cai, Baoping; Xie, Min; Liu, Yonghong; Liu, Yiliu; Ji, Renjie; Feng, Qiang

    2017-10-01

    The word resilience originates from the Latin word "resiliere", which means to "bounce back". The concept has been used in various fields, such as ecology, economics, psychology, and society, with different definitions. In the field of critical infrastructure, although some resilience metrics have been proposed, they are totally different from each other, being determined by the performances of the objects of evaluation. Here we bridge the gap by developing a universal critical infrastructure resilience metric from the perspective of reliability engineering. A dynamic Bayesian networks-based assessment approach is proposed to calculate the resilience value. A series, parallel and voting system is used to demonstrate the application of the developed resilience metric and assessment approach.

  14. The Case for Distributed Engine Control in Turbo-Shaft Engine Systems

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Paluszewski, Paul J.; Storey, William; Smith, Bert J.

    2009-01-01

    The turbo-shaft engine is an important propulsion system used to power vehicles on land, sea, and in the air. As the power plant for many high performance helicopters, the characteristics of the engine and control are critical to proper vehicle operation as well as being the main determinant of overall vehicle performance. When applied to vertical flight, important distinctions exist in the turbo-shaft engine control system due to the high degree of dynamic coupling between the engine and airframe and the effect on vehicle handling characteristics. In this study, the impact of engine control system architecture is explored relative to engine performance, weight, reliability, safety, and overall cost. Comparison of the impact of architecture on these metrics is investigated as the control system is modified from a legacy centralized structure to a more distributed configuration. A composite strawman system which is typical of turbo-shaft engines in the 1000 to 2000 hp class is described and used for comparison. The overall benefits of these changes to control system architecture are assessed. The availability of supporting technologies to achieve this evolution is also discussed.

  15. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
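
    A UCIQE-style score can be sketched as a weighted sum of chroma spread, luminance contrast, and mean saturation computed in CIELab. The weights and the saturation definition below are assumptions for illustration rather than the published coefficients, and the input file name is hypothetical.

```python
import numpy as np
from skimage import color, io

W_CHROMA, W_CONTRAST, W_SAT = 0.47, 0.27, 0.26   # assumed weights, not the paper's

def uciqe_like(rgb):
    """UCIQE-style linear combination of chroma spread, luminance contrast,
    and mean saturation in CIELab (illustrative sketch)."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                                   # colour spread
    contrast_l = np.percentile(L, 99) - np.percentile(L, 1)  # bright/dark spread of luminance
    saturation = chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-6)
    return W_CHROMA * sigma_c + W_CONTRAST * contrast_l + W_SAT * saturation.mean()

score = uciqe_like(io.imread("underwater_frame.png") / 255.0)  # hypothetical 8-bit RGB frame
```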

  16. There is No Free Lunch: Tradeoffs in the Utility of Learned Knowledge

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; McKusick, Kathleen B.

    1992-01-01

    With the recent introduction of learning in integrated systems, there is a need to measure the utility of learned knowledge for these more complex systems. A difficulty arises when there are multiple, possibly conflicting, utility metrics to be measured. In this paper, we present schemes which trade off conflicting utility metrics in order to achieve some global performance objectives. In particular, we present a case study of a multi-strategy machine learning system, mutual theory refinement, which refines world models for an integrated reactive system, the Entropy Reduction Engine. We provide experimental results on the utility of learned knowledge in two conflicting metrics - improved accuracy and degraded efficiency. We then demonstrate two ways to trade off these metrics. In each, some learned knowledge is either approximated or dynamically 'forgotten' so as to improve efficiency while degrading accuracy only slightly.

  17. A Framework for Orbital Performance Evaluation in Distributed Space Missions for Earth Observation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Miller, David W.; de Weck, Olivier

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth science missions owing to their unique ability to increase observation sampling in spatial, spectral and temporal dimensions simultaneously. DSM architectures have a large number of design variables and since they are expected to increase mission flexibility, scalability, evolvability and robustness, their design is a complex problem with many variables and objectives affecting performance. There are very few open-access tools available to explore the tradespace of variables which allow performance assessment and are easy to plug into science goals, and therefore select the most optimal design. This paper presents a software tool developed on the MATLAB engine interfacing with STK, for DSM orbit design and selection. It is capable of generating thousands of homogeneous constellation or formation flight architectures based on pre-defined design variable ranges and sizing those architectures in terms of predefined performance metrics. The metrics can be input into observing system simulation experiments, as available from the science teams, allowing dynamic coupling of science and engineering designs. Design variables include but are not restricted to constellation type, formation flight type, FOV of instrument, altitude and inclination of chief orbits, differential orbital elements, leader satellites, latitudes or regions of interest, planes and satellite numbers. Intermediate performance metrics include angular coverage, number of accesses, revisit coverage, access deterioration over time at every point of the Earth's grid. The orbit design process can be streamlined and variables more bounded along the way, owing to the availability of low fidelity and low complexity models such as corrected HCW equations up to high precision STK models with J2 and drag. The tool can thus help any scientist or program manager select pre-Phase A, Pareto optimal DSM designs for a variety of science goals without having to delve into the details of the engineering design process.
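
    The tradespace enumeration itself reduces to a Cartesian product over the design-variable ranges. In the sketch below, evaluate_coverage is only a stand-in for the STK-coupled performance metrics described in the paper, and all variable names and ranges are hypothetical.

```python
import itertools

# Hypothetical design-variable ranges for a homogeneous constellation tradespace.
DESIGN_SPACE = {
    "altitude_km": [500, 600, 700],
    "inclination_deg": [45, 60, 90],
    "planes": [1, 2, 3],
    "sats_per_plane": [2, 4, 8],
    "fov_deg": [30, 60],
}

def evaluate_coverage(arch: dict) -> float:
    """Placeholder performance metric (e.g. angular coverage or revisit);
    a real implementation would propagate orbits and grid the Earth."""
    return arch["planes"] * arch["sats_per_plane"] * arch["fov_deg"] / arch["altitude_km"]

architectures = [dict(zip(DESIGN_SPACE, combo))
                 for combo in itertools.product(*DESIGN_SPACE.values())]
ranked = sorted(architectures, key=evaluate_coverage, reverse=True)
print(f"{len(architectures)} architectures enumerated; best placeholder score: {ranked[0]}")
```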

  18. Stakeholder requirements for commercially successful wave energy converter farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babarit, Aurélien; Bull, Diana; Dykes, Katherine

    2017-12-01

    In this study, systems engineering techniques are applied to wave energy to identify and specify stakeholders' requirements for a commercially successful wave energy farm. The focus is on the continental scale utility market. Lifecycle stages and stakeholders are identified. Stakeholders' needs across the whole lifecycle of the wave energy farm are analyzed. A list of 33 stakeholder requirements is identified and specified. This list of requirements should serve as components of a technology performance level metric that could be used by investors and funding agencies to make informed decisions when allocating resources. It is hoped that the technology performance level metric will accelerate wave energy conversion technology convergence.

  19. Engineering pollinator phenotypes: consequences of induced size variation on adult morphology and flight performance metrics in the solitary bee, Osmia lignaria

    USDA-ARS?s Scientific Manuscript database

    Body size is an important trait because it strongly correlates with morphology, performance, and fitness. In insects, the body size model argues that adult size is determined during the larval stage by the mechanisms regulating growth rate and the duration of growth. Though explicit links have been ...

  20. Metrics in method engineering

    NASA Astrophysics Data System (ADS)

    Brinkkemper, S.; Rossi, M.

    1994-12-01

    As customizable computer aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been a growing interest into the systematic construction of methods and their support environments, i.e. method engineering. To aid the method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other hand. Proposed metrics provide a relatively fast and simple way to analyze the technique (or method) properties, and when accompanied with other selection criteria, can be used for estimating the cost of learning the technique and the relative complexity of a technique compared to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.

  1. 23 CFR 655.601 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS TRAFFIC OPERATIONS... is also available from the FHWA Office of Operations Web site at: http//mutcd.fhwa.dot.gov. (b) Guide... 20001. (c) Traffic Engineering Metric Conversion Factors, 1993—Addendum to the Guide to Metric...

  2. 23 CFR 655.601 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS TRAFFIC OPERATIONS... is also available from the FHWA Office of Operations Web site at: http//mutcd.fhwa.dot.gov. (b) Guide... 20001. (c) Traffic Engineering Metric Conversion Factors, 1993—Addendum to the Guide to Metric...

  3. Physicochemical comparison of commercially available metal oxide nanoparticles: implications for engineered nanoparticle toxicology and risk assessment

    EPA Science Inventory

    Accurate and affordable physicochemical characterization of commercial engineered nanomaterials is required for toxicology studies to ultimately determine nanomaterial: hazard identification; dose to response metric(s); and mechanism(s) of injury. A minimal physical and chemica...

  4. Determination of geographic variance in stroke prevalence using Internet search engine analytics.

    PubMed

    Walcott, Brian P; Nahed, Brian V; Kahle, Kristopher T; Redjal, Navid; Coumans, Jean-Valery

    2011-06-01

    Previous methods to determine stroke prevalence, such as nationwide surveys, are labor-intensive endeavors. Recent advances in search engine query analytics have led to a new metric for disease surveillance to evaluate symptomatic phenomenon, such as influenza. The authors hypothesized that the use of search engine query data can determine the prevalence of stroke. The Google Insights for Search database was accessed to analyze anonymized search engine query data. The authors' search strategy utilized common search queries used when attempting either to identify the signs and symptoms of a stroke or to perform stroke education. The search logic was as follows: (stroke signs + stroke symptoms + mini stroke--heat) from January 1, 2005, to December 31, 2010. The relative number of searches performed (the interest level) for this search logic was established for all 50 states and the District of Columbia. A Pearson product-moment correlation coefficient was calculated from the state-specific stroke prevalence data previously reported. Web search engine interest level was available for all 50 states and the District of Columbia over the time period for January 1, 2005-December 31, 2010. The interest level was highest in Alabama and Tennessee (100 and 96, respectively) and lowest in California and Virginia (58 and 53, respectively). The Pearson correlation coefficient (r) was calculated to be 0.47 (p = 0.0005, 2-tailed). Search engine query data analysis allows for the determination of relative stroke prevalence. Further investigation will reveal the reliability of this metric to determine temporal pattern analysis and prevalence in this and other symptomatic diseases.
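
    The correlation step reduces to a standard Pearson product-moment calculation. The sketch below uses invented state-level numbers purely to show the computation, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical state-level pairs: relative search interest (0-100) from the query
# logic in the abstract, and previously reported stroke prevalence (%).
interest   = [100, 96, 83, 77, 71, 66, 62, 58, 53]
prevalence = [3.8, 3.6, 3.1, 3.0, 2.7, 2.5, 2.4, 2.2, 2.0]

r, p_two_tailed = pearsonr(interest, prevalence)
print(f"Pearson r = {r:.2f}, two-tailed p = {p_two_tailed:.4f}")
```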

  5. Degree program changes and curricular flexibility: Addressing long held beliefs about student progression

    NASA Astrophysics Data System (ADS)

    Ricco, George Dante

    In higher education and in engineering education in particular, changing majors is generally considered a negative event - or at least an event with negative consequences. An emergent field of study within engineering education revolves around understanding the factors and processes driving student changes of major. Of key importance to further the field of change of major research is a grasp of large scale phenomena occurring throughout multiple systems, knowledge of previous attempts at describing such issues, and the adoption of metrics to probe them effectively. The problem posed is exacerbated by the drive in higher education institutions and among state legislatures to understand and reduce time-to-degree and student attrition. With these factors in mind, insights into large-scale processes that affect student progression are essential to evaluating the success or failure of programs. The goals of this work include describing the current educational research on switchers, identifying core concepts and stumbling blocks in my treatment of switchers, and using the Multiple Institutional Database for Investigating Engineering Longitudinal Development (MIDFIELD) to explore how those who change majors perform as a function of large-scale academic pathways within and without the engineering context. To accomplish these goals, it was first necessary to delve into a recent history of the treatment of switchers within the literature and categorize their approach. While three categories of papers exist in the literature concerning change of major, all three may or may not be applicable to a given database of students or even a single institution. Furthermore, while the term has been coined in the literature, no portable metric for discussing large-scale navigational flexibility exists in engineering education. What such a metric would look like will be discussed as well as the delimitations involved. The results and subsequent discussion will include a description of changes of major, how they may or may not have a deleterious effect on one's academic pathway, the special context of changes of major in the pathways of students within first-year engineering programs (students labeled as undecided), an exploration of curricular flexibility by the construction of a novel metric, and proposed future work.

  6. Validation of simulated earthquake ground motions based on evolution of intensity and frequency content

    USGS Publications Warehouse

    Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin

    2015-01-01

    Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
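
    The two-component error vector can be illustrated with a simple pair of quantities: an average-level difference and a shape difference between normalized curves. The definitions and synthetic curves below are assumptions for illustration and are not necessarily the authors' exact formulas.

```python
import numpy as np

def error_vector(metric_rec, metric_sim):
    """Two-component error between a recorded and a simulated time-dependent
    validation metric (e.g. evolving intensity): a relative average-level
    difference and a shape difference after normalizing each curve."""
    avg_err = (metric_sim.mean() - metric_rec.mean()) / metric_rec.mean()
    shape_rec = metric_rec / np.trapz(metric_rec)
    shape_sim = metric_sim / np.trapz(metric_sim)
    shape_err = np.trapz(np.abs(shape_sim - shape_rec))
    return avg_err, shape_err

t = np.linspace(0, 40, 401)               # seconds (hypothetical record duration)
recorded = 1 - np.exp(-0.15 * t)          # synthetic normalized cumulative intensity
simulated = 1 - np.exp(-0.11 * t)
print(error_vector(recorded, simulated))
```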

  7. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  8. Systems Engineering Approach and Metrics for Evaluating Network-Centric Operations for U.S. Army Battle Command

    DTIC Science & Technology

    2013-07-01

    Jock O. Grynovicki and Teresa A. Branscome, Human Research and Engineering Directorate, ARL

  9. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by a cross-section of engineering disciplines who had decades of experience and problem resolution.

  10. Models and metrics for software management and engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1988-01-01

    This paper attempts to characterize and present a state of the art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resources allocation and estimation, changes and errors, size, complexity and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.

  11. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset (MUDROD) Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Jiang, Y.

    2015-12-01

    Oceanographic resource discovery is a critical step for developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve discovery accuracy of oceanographic data and reduce the time for scientists to discover, download and reformat data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users search for more accurate results with enhanced performance and user experience through a user-friendly interface.
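
    One way such relevancy signals can be blended is a simple weighted combination per candidate dataset. The weights, signal names, and datasets below are hypothetical and are not MUDROD's actual ranking model.

```python
# Assumed relative weights for the three evidence sources named in the abstract.
WEIGHTS = {"metadata": 0.5, "usage": 0.3, "feedback": 0.2}

def relevance(signals: dict) -> float:
    """Blend per-dataset scores in [0, 1] from each evidence source into one value."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

candidates = {
    "dataset_A": {"metadata": 0.9, "usage": 0.4, "feedback": 0.7},
    "dataset_B": {"metadata": 0.6, "usage": 0.8, "feedback": 0.2},
}
ranked = sorted(candidates, key=lambda d: relevance(candidates[d]), reverse=True)
print(ranked)   # dataset_A first under these assumed weights
```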

  12. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  14. Numerical model validation using experimental data: Application of the area metric on a Francis runner

    NASA Astrophysics Data System (ADS)

    Chatenet, Q.; Tahan, A.; Gagnon, M.; Chamberland-Lauzon, J.

    2016-11-01

    Nowadays, engineers are able to solve complex equations thanks to the increase in computing capacity. Thus, finite element software is widely used, especially in the field of mechanics, to predict part behavior such as strain, stress and natural frequency. However, it can be difficult to determine whether a model is right or wrong, or whether one model is better than another. Nevertheless, during the design phase, it is very important to estimate how the hydroelectric turbine blades will behave under the stress to which they are subjected. Indeed, the static and dynamic stress levels will influence the blade's fatigue resistance and thus its lifetime, which is a significant feature. In industry, engineers generally use either graphical representation, hypothesis tests such as the Student test, or linear regressions to compare experimental data to estimates from the numerical model. Due to the variability in personal interpretation (reproducibility), graphical validation is not considered objective. For an objective assessment, it is essential to use a robust validation metric to measure the conformity of predictions against data. We propose to use the area metric, which meets the key points of the ASME standards and produces a quantitative measure of agreement between simulations and empirical data, in the case of a turbine blade. This validation metric excludes subjective beliefs and acceptance criteria, which increases robustness. The present work applies a validation method according to ASME V&V 10 recommendations. Firstly, the area metric is applied to the case of a real Francis runner whose geometry and boundary conditions are complex. Secondly, the area metric is compared to classical regression methods to evaluate the performance of the method. Finally, we discuss the use of the area metric as a tool to correct simulations.
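
    The area metric referred to here is commonly computed as the area between the cumulative distribution function of the model predictions and the empirical distribution of the measurements; a minimal numpy sketch under that reading is shown below, with made-up sample values rather than runner data.

        import numpy as np

        def ecdf(samples, x):
            """Empirical CDF of `samples` evaluated at the points `x`."""
            samples = np.sort(np.asarray(samples))
            return np.searchsorted(samples, x, side="right") / samples.size

        def area_metric(simulated, measured):
            """Area between the two empirical CDFs (same units as the data)."""
            grid = np.sort(np.concatenate([simulated, measured]))
            mids = 0.5 * (grid[:-1] + grid[1:])     # both CDFs are constant between grid points
            widths = np.diff(grid)
            return float(np.sum(np.abs(ecdf(simulated, mids) - ecdf(measured, mids)) * widths))

        # Illustrative (made-up) response values, not Francis runner data:
        sim = np.array([102.0, 105.0, 98.0, 110.0, 101.0])
        meas = np.array([100.0, 108.0, 97.0, 104.0])
        print(f"area metric = {area_metric(sim, meas):.2f}")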

  15. Identification of the ideal clutter metric to predict time dependence of human visual search

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

    The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to the points of interest within the image that are competing for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of the observers' time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse engineer the ideal clutter metric, the one that best describes the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric for predicting the performance of visual search.
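
    A minimal sketch of the two-state wander/examine random walk is given below, with attention drawn to points of interest in proportion to a clutter-metric score; the probabilities, dwell times, and scores are hypothetical stand-ins, not NVESD values.

        import random

        def simulate_search(attractiveness, p_quit=0.02, t_wander=0.3, t_examine=1.2, max_time=60.0):
            """Two-state random-walk search sketch (all numbers hypothetical).
            attractiveness: point-of-interest id -> relative clutter-metric score."""
            pois, weights = zip(*attractiveness.items())
            t = wandering = examining = 0.0
            while t < max_time:
                if random.random() < p_quit:                     # observer quits (declares or gives up)
                    break
                poi = random.choices(pois, weights=weights)[0]   # attention drawn to one point of interest
                p_examine = min(0.9, 0.2 * attractiveness[poi])  # stronger clutter score -> more examining
                if random.random() < p_examine:
                    t += t_examine
                    examining += t_examine
                else:
                    t += t_wander
                    wandering += t_wander
            return wandering, examining

        random.seed(1)
        w, e = simulate_search({"poi_a": 3.0, "poi_b": 1.0, "poi_c": 0.5})
        print(f"time wandering = {w:.1f} s, time examining = {e:.1f} s")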

  16. Rationalizing context-dependent performance of dynamic RNA regulatory devices.

    PubMed

    Kent, Ross; Halliwell, Samantha; Young, Kate; Swainston, Neil; Dixon, Neil

    2018-06-21

    The ability of RNA to sense, regulate and store information is an attractive attribute for a variety of functional applications including the development of regulatory control devices for synthetic biology. RNA folding and function are known to be highly context sensitive, which limits the modularity and reuse of RNA regulatory devices to control different heterologous sequences and genes. We explored the cause and effect of sequence context sensitivity for translational ON riboswitches located in the 5' UTR, by constructing and screening a library of N-terminal synonymous codon variants. By altering the N-terminal codon usage we were able to obtain RNA devices with a broad range of functional performance properties (ON, OFF, fold-change). Linear regression and calculated metrics were used to rationalize the major determining features leading to optimal riboswitch performance, and to identify multiple interactions between the explanatory metrics. Finally, partial least squares (PLS) analysis was employed to understand the metrics and their respective effect on performance. This PLS model was shown to provide a good explanation of our library. This study provides a novel multi-variant analysis framework by which to rationalize the codon context performance of allosteric RNA devices. The framework will also serve as a platform for future riboswitch context engineering endeavors.

  17. 2.0 AEDL Systems Engineering

    NASA Technical Reports Server (NTRS)

    Graves, Claude

    2005-01-01

    Some engineering topics: Some Initial Thoughts. Capability Description. Capability State-of-the-Art. Capability Requirements. Systems Engineering. Capability Roadmap. Capability Maturity. Candidate Technologies. Metrics.

  18. A Methodology to Assess the Capability of Engine Designs to Meet Closed-Loop Performance and Operability Requirements

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Csank, Jeffrey

    2015-01-01

    Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controllers constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. The accuracy of this estimation suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
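
    The record does not spell out how the elliptical boundaries are constructed; one plausible reading is a covariance-based confidence ellipse fitted to the Monte Carlo scatter of (response time, minimum HPC SM), whose axis-aligned extremes give conservative worst-case estimates. A sketch under that assumption, with purely illustrative numbers:

        import numpy as np

        def worst_case_from_ellipse(samples, n_std=3.0):
            """Fit a covariance ellipse to 2-D Monte Carlo samples (t95 [s], min HPC SM [%])
            and return conservative worst-case values along each axis."""
            samples = np.asarray(samples, dtype=float)
            mean = samples.mean(axis=0)
            cov = np.cov(samples, rowvar=False)
            extent = n_std * np.sqrt(np.diag(cov))   # extreme extent of the ellipse along each axis
            worst_t95 = mean[0] + extent[0]          # largest (slowest) response time
            worst_sm = mean[1] - extent[1]           # smallest surge margin
            return worst_t95, worst_sm

        rng = np.random.default_rng(0)
        mc = rng.multivariate_normal([4.0, 12.0], [[0.04, -0.03], [-0.03, 0.25]], size=500)
        print(worst_case_from_ellipse(mc))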

  20. Controls concepts for next generation reusable rocket engines

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Merrill, Walter C.; Musgrave, Jefferey L.; Ray, Asok

    1995-01-01

    Three primary issues will drive the design and control used in next generation reusable rocket engines. In addition to steady-state and dynamic performance, the requirements for increased durability, reliability and operability (with faults) will dictate which new controls and design technologies and features will be brought to bear. An array of concepts which have been brought forward will be tested against the measures of cost and benefit as reflected in the above 'ilities'. This paper examines some of the new concepts and looks for metrics to judge their value.

  2. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress of advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools used to support power system and component research work at GRC. Professor Sheskin is an industrial engineer who has been involved in the management of engineering tasks and will use this knowledge to extrapolate into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect him to recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin, and some review material has already been given to him.

  3. A Trade Study and Metric for Penetration and Sampling Devices for Possible Use on the NASA 2003 and 2005 Mars Sample Return Missions

    NASA Technical Reports Server (NTRS)

    McConnell, Joshua B.

    2000-01-01

    The scientific exploration of Mars will require the collection and return of subterranean samples to Earth for examination. This necessitates the use of some type of device or devices that possess the ability to effectively penetrate the Martian surface, collect suitable samples and return them to the surface in a manner consistent with imposed scientific constraints. The first opportunity for such a device will occur on the 2003 and 2005 Mars Sample Return missions, being performed by NASA. This paper reviews the work completed on the compilation of a database containing viable penetrating and sampling devices, the performance of a system level trade study comparing selected devices to a set of prescribed parameters and the employment of a metric for the evaluation and ranking of the traded penetration and sampling devices, with respect to possible usage on the 2003 and 2005 sample return missions. The trade study performed is based on a select set of scientific, engineering, programmatic and socio-political criteria. The use of a metric for the various penetration and sampling devices will act to expedite current and future device selection.

  4. Correlating Computed and Flight Instructor Assessments of Straight-In Landing Approaches by Novice Pilots on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Khan, M. Javed; Rossi, Marcia; Ali, Syed Firasat

    2005-01-01

    The rising cost of flight training and the low cost of powerful computers have resulted in increasing use of PC-based flight simulators. This has prompted FAA standards regulating such use and allowing aspects of training on simulators meeting these standards to be substituted for flight time. However, the FAA regulations require an authorized flight instructor as part of the training environment. Thus, while costs associated with flight time have been reduced, the cost associated with the need for a flight instructor still remains. The obvious area of research, therefore, has been to develop intelligent simulators. However, the two main challenges of such attempts have been training strategies and assessment. The research reported in this paper was conducted to evaluate various performance metrics of a straight-in landing approach by 33 novice pilots flying a light single engine aircraft simulation. These metrics were compared to assessments of these flights by two flight instructors to establish a correlation between the two techniques in an attempt to determine a composite performance metric for this flight maneuver.

  5. Metrics of a Paradigm for Intelligent Control

    NASA Technical Reports Server (NTRS)

    Hexmoor, Henry

    1999-01-01

    We present metrics for quantifying organizational structures of complex control systems intended for controlling long-lived robotic or other autonomous applications commonly found in space applications. Such advanced control systems are often called integration platforms or agent architectures. Reported metrics span concerns about time, resources, software engineering, and complexities in the world.

  6. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C., E-mail: chholland@ucsd.edu

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.

  7. The Case for a Joint Evaluation

    DTIC Science & Technology

    2017-01-01

    intelligence, and engineering. Finally, the comparative time expended by the combatant commanders (CCDRs) on fulfilling four different evaluation...template for the joint-centric construct would align with the four de facto sections noted earlier: an identification section, a performance metric...intangible or have not been properly researched. For example, under one evaluation system, a Servicemember's separation or retirement into a post

  8. Developing the Systems Engineering Experience Accelerator (SEEA) Prototype and Roadmap

    DTIC Science & Technology

    2012-10-24

    system attributes. These metrics track non-requirements performance, typically relate to production cost per unit, maintenance costs, training costs...immediately implement lessons learned from the training experience to the job, assuming the culture allows this. 1.3 MANAGEMENT PLAN/TECHNICAL OVERVIEW...resolving potential conflicts as they arise. Incrementally implement and continuously integrate capability in priority order, to ensure that final system

  9. Coupled parametric design of flow control and duct shape

    NASA Technical Reports Server (NTRS)

    Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)

    2009-01-01

    A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
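
    The Pareto step of such a method can be sketched as follows; the candidate designs and the two metrics (a pressure-loss coefficient and a distortion index, both to be minimized) are hypothetical stand-ins for the performance parameters named in the patent.

        def pareto_front(candidates):
            """Return the non-dominated candidates; each candidate is (label, metrics)
            and every metric is to be minimized."""
            def dominates(a, b):
                return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
            return [(name, m) for name, m in candidates
                    if not any(dominates(m2, m) for _, m2 in candidates if m2 != m)]

        # Hypothetical duct design candidates: (pressure loss, distortion)
        designs = [("baseline",    (0.045, 0.12)),
                   ("shape_A+jet", (0.038, 0.10)),
                   ("shape_B",     (0.036, 0.14)),
                   ("shape_B+jet", (0.040, 0.09))]
        for name, metrics in pareto_front(designs):
            print(name, metrics)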

  10. Mean composite fire severity metrics computed with Google Earth engine offer improved accuracy and expanded mapping potential

    Treesearch

    Sean A. Parks; Lisa M. Holsinger; Morgan A. Voss; Rachel A. Loehman; Nathaniel P. Robinson

    2018-01-01

    Landsat-based fire severity datasets are an invaluable resource for monitoring and research purposes. These gridded fire severity datasets are generally produced with pre- and post-fire imagery to estimate the degree of fire-induced ecological change. Here, we introduce methods to produce three Landsat-based fire severity metrics using the Google Earth Engine (GEE)...

  11. Concepts for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Thomas, Randy; Saus, Joseph

    2007-01-01

    Gas turbine engines for aero-propulsion systems are found to be highly optimized machines after over 70 years of development. Still, additional performance improvements are sought while reduction in the overall cost is increasingly a driving factor. Control systems play a vitally important part in these metrics but are severely constrained by the operating environment and the consequences of system failure. The considerable challenges facing future engine control system design have been investigated. A preliminary analysis has been conducted of the potential benefits of distributed control architecture when applied to aero-engines. In particular, reductions in size, weight, and cost of the control system are possible. NASA is conducting research to further explore these benefits, with emphasis on the particular benefits enabled by high temperature electronics and an open-systems approach to standardized communications interfaces.

  12. Optical and system engineering in the development of a high-quality student telescope kit

    NASA Astrophysics Data System (ADS)

    Pompea, Stephen M.; Pfisterer, Richard N.; Ellis, Scott; Arion, Douglas N.; Fienberg, Richard Tresch; Smith, Thomas C.

    2010-07-01

    The Galileoscope student telescope kit was developed by a volunteer team of astronomers, science education experts, and optical engineers in conjunction with the International Year of Astronomy 2009. This refracting telescope is in production, with over 180,000 units produced and distributed and another 25,000 units in production. The telescope was designed to be able to resolve the rings of Saturn and to be used in urban areas. The telescope system requirements, performance metrics, and architecture were established after an analysis of current inexpensive telescopes and student telescope kits. The optical design approaches used in the various prototypes and the optical system engineering tradeoffs will be described. Risk analysis, risk management, and change management were critical, as was cost management, since the final product was to cost around $15 (but had to perform as well as $100 telescopes). In the system engineering of the Galileoscope a variety of analysis and testing approaches were used, including stray light design and analysis using the powerful optical analysis program FRED.

  13. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
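
    Among the measures proposed for CPMIP are quantities such as simulated years per wall-clock day (SYPD) and core-hours per simulated year (CHSY), both computable from the run length, wall-clock time, and core count; a minimal sketch with illustrative numbers (not results from any of the participating models) follows.

        def sypd(simulated_years, wallclock_hours):
            """Simulated years per wall-clock day."""
            return simulated_years / (wallclock_hours / 24.0)

        def chsy(simulated_years, wallclock_hours, cores):
            """Core-hours consumed per simulated year."""
            return cores * wallclock_hours / simulated_years

        # Illustrative numbers only: a 10-year run taking 20 wall-clock hours on 4608 cores.
        years, hours, cores = 10.0, 20.0, 4608
        print(f"SYPD = {sypd(years, hours):.1f}")                                   # 12.0
        print(f"CHSY = {chsy(years, hours, cores):.0f} core-hours/simulated year")  # 9216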

  14. Measure for Measure: A Guide to Metrication for Workshop Crafts and Technical Studies.

    ERIC Educational Resources Information Center

    Schools Council, London (England).

    This booklet is designed to help teachers of the industrial arts in Great Britain during the changeover to metric units which is due to be substantially completed during the period 1970-1975. General suggestions are given for adapting equipment in metalwork and engineering and woodwork and technical drawing by adding some metric equipment…

  15. The 4A Metric Algorithm: A Unique E-Learning Engineering Solution Designed via Neuroscience to Counter Cheating and Reduce Its Recidivism by Measuring Student Growth through Systemic Sequential Online Learning

    ERIC Educational Resources Information Center

    Osler, James Edward

    2016-01-01

    This paper provides a novel instructional methodology that is a unique E-Learning engineered "4A Metric Algorithm" designed to conceptually address the four main challenges faced by 21st century students, who are tempted to cheat in a myriad of higher education settings (face to face, hybrid, and online). The algorithmic online…

  16. Radiation shielding estimates for manned Mars space flight.

    PubMed

    Dudkin, V E; Kovalev, E E; Kolomensky, A V; Sakovich, V A; Semenov, V F; Demin, V P; Benton, E V

    1992-01-01

    In the analysis of the required radiation shielding protection of spacecraft during a Mars flight, specific effects of solar activity (SA) on the intensity of galactic and solar cosmic rays were taken into consideration. Three spaceflight periods were considered: (1) maximum SA; (2) minimum SA; and (3) intermediate SA, when intensities of both galactic and solar cosmic rays are moderately high. Scenarios of spaceflights utilizing liquid-propellant rocket engines, low- and intermediate-thrust nuclear electrojet engines, and nuclear rocket engines, all of which have been designed in the Soviet Union, are reviewed. Calculations were performed on the basis of a set of standards for radiation protection approved by the U.S.S.R. State Committee for Standards. It was found that the lowest estimated mass of a Mars spacecraft, including the radiation shielding mass, obtained using a combination of a liquid propellant engine with low and intermediate thrust nuclear electrojet engines, would be 500-550 metric tons.

  17. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission Performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work such as the timing of actions and their impact on the agent's work. Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate the capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.

  18. ARROWSMITH-P: A prototype expert system for software engineering management

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Ramsey, Connie Loggia

    1985-01-01

    Although the field of software engineering is relatively new, it can benefit from the use of expert systems. Two prototype expert systems were developed to aid in software engineering management. Given the values for certain metrics, these systems will provide interpretations which explain any abnormal patterns of these values during the development of a software project. The two systems, which solve the same problem, were built using different methods, rule-based deduction and frame-based abduction. A comparison was done to see which method was better suited to the needs of this field. It was found that both systems performed moderately well, but the rule-based deduction system using simple rules provided more complete solutions than did the frame-based abduction system.

  19. A comparison of color fidelity metrics for light sources using simulation of color samples under lighting conditions

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo

    2017-09-01

    Color fidelity has been used as one of the indices for evaluating the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on a comparison of color fidelity metrics in terms of their accuracy with respect to human visual assessments. To visually evaluate the color fidelity of light sources, we made a simulator that reproduces color samples under different lighting conditions. In this paper, eighteen color samples of the Macbeth color checker under test light sources and a reference illuminant for each of them are simulated and displayed on a well-characterized monitor. With only the spectra of the test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics, CRI and IES TM-30-15 (Rf), the latter proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with the visual assessment.

  20. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
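
    The record does not list the specific food web parameters used; two commonly used structural ones, linkage density (L/N) and connectance (L/N^2), can be computed from a directed flow matrix as sketched below for a hypothetical four-actor network (not the paper's carpet recycling network).

        import numpy as np

        def structural_metrics(flow):
            """Structural food-web style metrics from a directed flow matrix,
            where flow[i][j] > 0 means material flows from actor i to actor j."""
            adj = (np.asarray(flow) > 0).astype(int)
            n = adj.shape[0]            # number of actors (species / facilities)
            links = int(adj.sum())      # number of directed links L
            return {
                "linkage_density": links / n,                   # L / N
                "connectance": links / n**2,                    # L / N^2
                "suppliers": int((adj.sum(axis=1) > 0).sum()),  # actors sending material ("prey" analog)
                "consumers": int((adj.sum(axis=0) > 0).sum()),  # actors receiving material ("predator" analog)
            }

        # Hypothetical 4-actor recycling network (flows in arbitrary units):
        flows = [[0, 120, 0,  0],
                 [0,   0, 80, 40],
                 [0,  30,  0,  0],
                 [0,   0,  0,  0]]
        print(structural_metrics(flows))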

  1. 40 CFR 1037.801 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... resistance tire means a tire on a vocational vehicle with a TRRL at or below 7.7 kg/metric ton, a steer tire on a tractor with a TRRL at or below 7.7 kg/metric ton, or a drive tire on a tractor with a TRRL at or below 8.1 kg/metric ton. Manufacture means the physical and engineering process of designing...

  2. Relational Agreement Measures for Similarity Searching of Cheminformatic Data Sets.

    PubMed

    Rivera-Borroto, Oscar Miguel; García-de la Vega, José Manuel; Marrero-Ponce, Yovani; Grau, Ricardo

    2016-01-01

    Research on similarity searching of cheminformatic data sets has been focused on similarity measures using fingerprints. However, nominal scales are the least informative of all metric scales, increasing the tied similarity scores, and decreasing the effectiveness of the retrieval engines. Tanimoto's coefficient has been claimed to be the most prominent measure for this task. Nevertheless, this field is far from being exhausted since the computer science no free lunch theorem predicts that "no similarity measure has overall superiority over the population of data sets". We introduce 12 relational agreement (RA) coefficients for seven metric scales, which are integrated within a group fusion-based similarity searching algorithm. These similarity measures are compared to a reference panel of 21 proximity quantifiers over 17 benchmark data sets (MUV), by using informative descriptors, a feature selection stage, a suitable performance metric, and powerful comparison tests. In this stage, RA coefficients perform favourably with respect to the state-of-the-art proximity measures. Afterward, the RA-based method outperforms another four nearest neighbor searching algorithms over the same data domains. In a third validation stage, RA measures are successfully applied to the virtual screening of the NCI data set. Finally, we discuss a possible molecular interpretation for these similarity variants.
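
    The record names Tanimoto's coefficient as the baseline; for binary fingerprints it is the number of shared 'on' bits divided by the number of bits that are 'on' in either fingerprint, and group (MAX) fusion scores each database compound by its best similarity to any query active. A toy sketch of that baseline follows (it does not implement the RA coefficients themselves, and the fingerprints are invented).

        def tanimoto(a, b):
            """Tanimoto coefficient for binary fingerprints given as sets of 'on' bit positions."""
            union = len(a | b)
            return len(a & b) / union if union else 0.0

        def max_fusion_score(query_actives, candidate):
            """Group-fusion (MAX) score: best similarity of the candidate to any query active."""
            return max(tanimoto(q, candidate) for q in query_actives)

        # Toy fingerprints (sets of on-bit indices), purely illustrative:
        actives = [{1, 4, 9, 15}, {1, 4, 8, 20}]
        database = {"cmpd_A": {1, 4, 9, 16}, "cmpd_B": {2, 5, 7}, "cmpd_C": {1, 4, 8, 20, 33}}
        ranking = sorted(database, key=lambda c: max_fusion_score(actives, database[c]), reverse=True)
        print(ranking)   # cmpd_C and cmpd_A rank above cmpd_B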

  3. ACSYNT inner loop flight control design study

    NASA Technical Reports Server (NTRS)

    Bortins, Richard; Sorensen, John A.

    1993-01-01

    The NASA Ames Research Center developed the Aircraft Synthesis (ACSYNT) computer program to synthesize conceptual future aircraft designs and to evaluate critical performance metrics early in the design process before significant resources are committed and cost decisions made. ACSYNT uses steady-state performance metrics, such as aircraft range, payload, and fuel consumption, and static performance metrics, such as the control authority required for the takeoff rotation and for landing with an engine out, to evaluate conceptual aircraft designs. It can also optimize designs with respect to selected criteria and constraints. Many modern aircraft have stability provided by the flight control system rather than by the airframe. This may allow the aircraft designer to increase combat agility, or decrease trim drag, for increased range and payload. This strategy requires concurrent design of the airframe and the flight control system, making trade-offs of performance and dynamics during the earliest stages of design. ACSYNT presently lacks means to implement flight control system designs but research is being done to add methods for predicting rotational degrees of freedom and control effector performance. A software module to compute and analyze the dynamics of the aircraft and to compute feedback gains and analyze closed loop dynamics is required. The data gained from these analyses can then be fed back to the aircraft design process so that the effects of the flight control system and the airframe on aircraft performance can be included as design metrics. This report presents results of a feasibility study and the initial design work to add an inner loop flight control system (ILFCS) design capability to the stability and control module in ACSYNT. The overall objective is to provide a capability for concurrent design of the aircraft and its flight control system, and enable concept designers to improve performance by exploiting the interrelationships between aircraft and flight control system design parameters.

  4. Indicators and Metrics for Evaluating the Sustainability of Chemical Processes

    EPA Science Inventory

    A metric-based method, called GREENSCOPE, has been developed for evaluating process sustainability. Using lab-scale information and engineering assumptions, the method evaluates full-scale representations of processes in environmental, efficiency, energy and economic areas. The m...

  5. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  6. Orion Flight Performance Design Trades

    NASA Technical Reports Server (NTRS)

    Jackson, Mark C.; Straube, Timothy

    2010-01-01

    A significant portion of the Orion pre-PDR design effort has focused on balancing mass with performance. High level performance metrics include abort success rates, lunar surface coverage, landing accuracy and touchdown loads. These metrics may be converted to parameters that affect mass, such as ballast for stabilizing the abort vehicle, propellant to achieve increased lunar coverage or extended missions, or ballast to increase the lift-to-drag ratio to improve entry and landing performance. The Orion Flight Dynamics team was tasked to perform analyses to evaluate many of these trades. These analyses not only provide insight into the physics of each particular trade but, in aggregate, they illustrate the processes used by Orion to balance performance and mass margins, and thereby make design decisions. Lessons learned can be gleaned from a review of these studies which will be useful to other spacecraft system designers. These lessons fall into several categories, including: appropriate application of Monte Carlo analysis in design trades, managing margin in a highly mass-constrained environment, and the use of requirements to balance margin between subsystems and components. This paper provides a review of some of the trades and analyses conducted by the Flight Dynamics team, as well as systems engineering lessons learned.

  7. Modular Engine Noise Component Prediction System (MCP) Program Users' Guide

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Herkes, William H.; Reed, David H.

    2004-01-01

    This is a user's manual for the Modular Engine Noise Component Prediction System (MCP). This computer code allows the user to generate turbofan engine noise estimates. The program is based on an empirical procedure that has evolved over many years at The Boeing Company. The data used to develop the procedure include both full-scale engine data and small-scale model data, and include testing done by Boeing, by the engine manufacturers, and by NASA. In order to generate a noise estimate, the user specifies the appropriate engine properties (including both geometry and performance parameters), the microphone locations, the atmospheric conditions, and certain data processing options. The version of the program described here allows the user to predict three components: inlet-radiated fan noise, aft-radiated fan noise, and jet noise. MCP predicts one-third octave band noise levels over the frequency range of 50 to 10,000 Hertz. It also calculates overall sound pressure levels and certain subjective noise metrics (e.g., perceived noise levels).
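
    The empirical prediction procedure itself is internal to the program, but the overall sound pressure level reported from one-third-octave band levels follows from a standard energy summation, OASPL = 10 log10(sum of 10^(L_i/10)); a minimal sketch with made-up band levels:

        import math

        def oaspl(band_levels_db):
            """Overall sound pressure level from one-third-octave band SPLs (dB),
            by summing band energies: OASPL = 10*log10(sum(10**(L/10)))."""
            return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in band_levels_db))

        # Made-up band levels (dB) for a handful of bands, not MCP output:
        bands = [78.0, 82.5, 85.0, 83.0, 80.0, 76.5]
        print(f"OASPL = {oaspl(bands):.1f} dB")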

  8. Effects of metric change on safety in the workplace for selected occupations

    NASA Astrophysics Data System (ADS)

    Lefande, J. M.; Pokorney, J. L.

    1982-04-01

    The study assesses the potential safety issues of metric conversion in the workplace. A purposive sample of 35 occupations, selected on the basis of injury and illness indexes, was assessed. After an analysis of the workforce population, hazards, and measurement sensitivity of the occupations, jobs were analyzed by industrial hygienists, safety engineers, and academics to identify potential safety hazards. The study's major findings were as follows: No metric hazard experience was identified. An increased exposure might occur when particular jobs and their job tasks are undergoing the transition from customary measurement to metric measurement. Well planned metric change programs reduce hazard potential. Metric safety issues are unresolved in the aviation industry.

  9. Translation from UML to Markov Model: A Performance Modeling Framework

    NASA Astrophysics Data System (ADS)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phase of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that proposes a translation process from high-level UML notation to a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams and deployment diagrams to generate a performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks in the form of collaborations can compose the service components through input and output pins by highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is delineated in the context of modeling a communication system.
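
    The record does not give the translation rules, but once a CTMC generator matrix Q has been produced, steady-state performance metrics follow from solving pi Q = 0 with the probabilities summing to one. A minimal sketch with a hypothetical two-state (idle/busy) service component:

        import numpy as np

        def steady_state(Q):
            """Stationary distribution of a CTMC: solve pi @ Q = 0 with sum(pi) = 1."""
            n = Q.shape[0]
            A = np.vstack([Q.T, np.ones(n)])   # transpose so the unknown pi is a column vector
            b = np.zeros(n + 1)
            b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi

        # Toy generator for an idle/busy component: requests arrive at 2/s, are served at 5/s
        # (illustrative rates, not taken from the paper's case study).
        Q = np.array([[-2.0,  2.0],
                      [ 5.0, -5.0]])
        pi = steady_state(Q)
        print(f"P(idle) = {pi[0]:.3f}, utilization P(busy) = {pi[1]:.3f}")   # ~0.714 / ~0.286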

  10. Riemannian Metric Optimization on Surfaces (RMOS) for Intrinsic Brain Mapping in the Laplace-Beltrami Embedding Space

    PubMed Central

    Gahm, Jin Kyu; Shi, Yonggang

    2018-01-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399

  11. Software Engineering Education Directory

    DTIC Science & Technology

    1990-04-01

    and Engineering (CMSC 735) Codes: GPEV2 * Textbooks: IEEE Tutorial on Models and Metrics for Software Management and Engineering by Basili, Victor R...Software Engineering (Comp 227) Codes: GPRY5 Textbooks: IEEE Tutorial on Software Design Techniques by Freeman, Peter and Wasserman, Anthony 1. Software

  12. Integrated Tools for Future Distributed Engine Control Technologies

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Thomas, Randy; Saus, Joseph

    2013-01-01

    Turbine engines are highly complex mechanical systems that are becoming increasingly dependent on control technologies to achieve system performance and safety metrics. However, the contribution of controls to these measurable system objectives is difficult to quantify due to a lack of tools capable of informing the decision makers. This shortcoming hinders technology insertion in the engine design process. NASA Glenn Research Center is developing a Hardware-in-the-Loop (HIL) platform and analysis tool set that will serve as a focal point for new control technologies, especially those related to the hardware development and integration of distributed engine control. The HIL platform is intended to enable rapid and detailed evaluation of new engine control applications, from conceptual design through hardware development, in order to quantify their impact on engine systems. This paper discusses the complex interactions of the control system, within the context of the larger engine system, and how new control technologies are changing that paradigm. The conceptual design of the new HIL platform is then described as a primary tool to address those interactions and how it will help feed the insertion of new technologies into future engine systems.

  13. Control of Total Ownership Costs of DoD Acquisition Development Programs Through Integrated Systems Engineering Processes and Metrics

    DTIC Science & Technology

    2011-04-30

    Preface & Acknowledgements: During his internship with the Graduate School of Business & Public Policy in June 2010, U.S. Air Force Academy Cadet...unlimited. Prepared for the Naval Postgraduate School, Monterey, California 93943. Disclaimer: The views represented in this report are those of the...

  14. A Tutorial on Electro-Optical/Infrared (EO/IR) Theory and Systems

    DTIC Science & Technology

    2013-01-01

    engine of a small UAV to an intercontinental ballistic missile (ICBM) launch. Comparison of the available energy at the sensor to the noise level...of the sensor provides the central metric of sensor performance, the noise equivalent irradiance or NEI. The problem of extracting the target from...effectiveness of imaging systems can be degraded by many factors, including limited contrast and luminance, the presence of noise, and blurring due to

  15. A Comparison of Source Code Plagiarism Detection Engines

    NASA Astrophysics Data System (ADS)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
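
    A paired structural metric of the kind described (tokenise each submission, then look for long common substrings of tokens) can be sketched as below; the tokenizer and normalization are deliberate simplifications, not the MOSS or JPlag algorithms.

        import re
        from difflib import SequenceMatcher

        def tokenize(source):
            """Crude tokenizer: identifiers, numbers, and single symbols."""
            return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

        def paired_structural_score(src_a, src_b):
            """Longest common run of tokens, normalized by the shorter submission length."""
            a, b = tokenize(src_a), tokenize(src_b)
            match = SequenceMatcher(None, a, b, autojunk=False).find_longest_match(0, len(a), 0, len(b))
            return match.size / min(len(a), len(b))

        a = "total = 0\nfor i in range(10):\n    total += i\nprint(total)"
        b = "s = 0\nfor j in range(10):\n    s += j\nprint(s)"
        print(f"similarity = {paired_structural_score(a, b):.2f}")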

  16. A mechanical argument for the differential performance of coronary artery grafts.

    PubMed

    Prim, David A; Zhou, Boran; Hartstone-Rose, Adam; Uline, Mark J; Shazly, Tarek; Eberth, John F

    2016-02-01

    Coronary artery bypass grafting (CABG) acutely disturbs the homeostatic state of the transplanted vessel making retention of graft patency dependent on chronic remodeling processes. The time course and extent to which remodeling restores vessel homeostasis will depend, in part, on the nature and magnitude of the mechanical disturbances induced upon transplantation. In this investigation, biaxial mechanical testing and histology were performed on the porcine left anterior descending artery (LAD) and analogs of common autografts, including the internal thoracic artery (ITA), radial artery (RA), great saphenous vein (GSV) and lateral saphenous vein (LSV). Experimental data were used to quantify the parameters of a structure-based constitutive model enabling prediction of the acute vessel mechanical response pre-transplantation and under coronary loading conditions. A novel metric Ξ was developed to quantify mechanical differences between each graft vessel in situ and the LAD in situ, while a second metric Ω compares the graft vessels in situ to their state under coronary loading. The relative values of these metrics among candidate autograft sources are consistent with vessel-specific variations in CABG clinical success rates with the ITA as the superior and GSV the inferior graft choices based on mechanical performance. This approach can be used to evaluate other candidate tissues for grafting or to aid in the development of synthetic and tissue engineered alternatives. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

    Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus for operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the needed research to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts, and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.

  18. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.
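
    The trade-off described above, dividing a fixed budget between adding redundancy and improving component reliability, can be explored with a generic optimizer. The sketch below is only a toy illustration under assumed costs and a simple series-parallel reliability model; it uses simulated annealing, one of the optimization tools named in the abstract, but it is not the thesis's actual SSI model.

    import math, random

    # Hypothetical model: each of three components has a base reliability; funds can buy
    # either an extra redundant unit or incremental reliability improvements (all values assumed).
    BASE_R = [0.90, 0.95, 0.85]
    UNIT_COST = [4.0, 6.0, 3.0]      # cost of one redundant unit per component
    IMPROVE_COST = [2.0, 2.5, 1.5]   # cost of +0.01 reliability per component
    BUDGET = 20.0

    def system_reliability(redundant, improved):
        # Series system of parallel groups: R_i = 1 - (1 - r_i)^(n_i), product over components.
        total = 1.0
        for r0, extra, steps in zip(BASE_R, redundant, improved):
            r = min(0.999, r0 + 0.01 * steps)
            total *= 1.0 - (1.0 - r) ** (1 + extra)
        return total

    def cost(redundant, improved):
        return sum(n * c for n, c in zip(redundant, UNIT_COST)) + \
               sum(k * c for k, c in zip(improved, IMPROVE_COST))

    def neighbour(state):
        redundant, improved = list(state[0]), list(state[1])
        i = random.randrange(len(BASE_R))
        if random.random() < 0.5:
            redundant[i] = max(0, redundant[i] + random.choice((-1, 1)))
        else:
            improved[i] = max(0, improved[i] + random.choice((-1, 1)))
        return redundant, improved

    def anneal(steps=20000, t0=1.0):
        state = best = ([0, 0, 0], [0, 0, 0])
        for k in range(steps):
            t = t0 * (1.0 - k / steps) + 1e-6
            cand = neighbour(state)
            if cost(*cand) > BUDGET:
                continue
            delta = system_reliability(*cand) - system_reliability(*state)
            if delta > 0 or random.random() < math.exp(delta / t):
                state = cand
                if system_reliability(*state) > system_reliability(*best):
                    best = state
        return best

    best = anneal()
    print("redundant units:", best[0], "improvement steps:", best[1],
          "reliability: %.4f" % system_reliability(*best), "cost:", cost(*best))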

  19. Point spread function engineering for iris recognition system design.

    PubMed

    Ashok, Amit; Neifeld, Mark A

    2010-04-01

    Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
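
    The false rejection ratio used as the performance metric above has a straightforward empirical estimate: the fraction of genuine (same-iris) comparisons whose match score fails the acceptance threshold. The sketch below shows this computation on invented scores; the score convention and threshold are assumptions, not values from the CASIA experiments.

    import numpy as np

    def false_rejection_ratio(genuine_scores, threshold):
        # FRR: fraction of genuine (same-iris) comparisons rejected at the given threshold.
        # Higher scores mean better matches here; scores and threshold are illustrative.
        genuine_scores = np.asarray(genuine_scores, dtype=float)
        return float(np.mean(genuine_scores < threshold))

    # Hypothetical similarity scores for genuine comparisons.
    scores = np.array([0.72, 0.81, 0.65, 0.90, 0.58, 0.77, 0.69, 0.84])
    print("FRR at threshold 0.70:", false_rejection_ratio(scores, 0.70))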

  20. System International d'Unites: Metric Measurement in Water Resources Engineering.

    ERIC Educational Resources Information Center

    Klingeman, Peter C.

    This pamphlet gives definitions and symbols for the basic and derived metric units, prefixes, and conversion factors for units frequently used in water resources. Included are conversion factors for units of area, work, heat, power, pressure, viscosity, flow rate, and others. (BB)
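
    A few conversion factors of the kind such a pamphlet tabulates are shown below; the values are standard definitions, but the selection of units is only an illustrative subset of what the pamphlet covers.

    # A few SI conversion factors commonly used in water resources work
    # (the selection is illustrative; the values are standard definitions).
    TO_SI = {
        "ft -> m":           0.3048,          # length
        "acre -> m^2":       4046.8564224,    # area
        "ft^3/s -> m^3/s":   0.028316846592,  # flow rate
        "psi -> Pa":         6894.757,        # pressure
        "hp -> W":           745.6999,        # power
        "BTU -> J":          1055.056,        # heat / work
    }

    def convert(value, key):
        return value * TO_SI[key]

    print("100 ft^3/s =", convert(100, "ft^3/s -> m^3/s"), "m^3/s")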

  1. Metrics for Operator Situation Awareness, Workload, and Performance in Automated Separation Assurance Systems

    NASA Technical Reports Server (NTRS)

    Strybel, Thomas Z.; Vu, Kim-Phuong L.; Battiste, Vernol; Dao, Arik-Quang; Dwyer, John P.; Landry, Steven; Johnson, Walter; Ho, Nhut

    2011-01-01

    A research consortium of scientists and engineers from California State University Long Beach (CSULB), San Jose State University Foundation (SJSUF), California State University Northridge (CSUN), Purdue University, and The Boeing Company was assembled to evaluate the impact of changes in roles and responsibilities and new automated technologies, being introduced in the Next Generation Air Transportation System (NextGen), on operator situation awareness (SA) and workload. To meet these goals, consortium members performed systems analyses of NextGen concepts and airspace scenarios, and concurrently evaluated SA, workload, and performance measures to assess their appropriateness for evaluations of NextGen concepts and tools. The following activities and accomplishments were supported by the NRA: a distributed simulation, metric development, systems analysis, part-task simulations, and large-scale simulations. As a result of this NRA, we have gained a greater understanding of situation awareness and its measurement, and have shared our knowledge with the scientific community. This network provides a mechanism for consortium members, colleagues, and students to pursue research on other topics in air traffic management and aviation, thus enabling them to make greater contributions to the field.

  2. A Hierarchical structure of key performance indicators for operation management and continuous improvement in production systems

    PubMed Central

    Kang, Ningxuan; Zhao, Cong; Li, Jingshan; Horst, John A.

    2018-01-01

    Key performance indicators (KPIs) are critical for manufacturing operation management and continuous improvement (CI). In modern manufacturing systems, KPIs are defined as a set of metrics to reflect operation performance, such as efficiency, throughput, availability, from productivity, quality and maintenance perspectives. Through continuous monitoring and measurement of KPIs, meaningful quantification and identification of different aspects of operation activities can be obtained, which enable and direct CI efforts. A set of 34 KPIs has been introduced in ISO 22400. However, the KPIs in a manufacturing system are not independent, and they may have intrinsic mutual relationships. The goal of this paper is to introduce a multi-level structure for identification and analysis of KPIs and their intrinsic relationships in production systems. Specifically, through such a hierarchical structure, we define and layer KPIs into levels of basic KPIs, comprehensive KPIs and their supporting metrics, and use it to investigate the relationships and dependencies between KPIs. Such a study can provide a useful tool for manufacturing engineers and managers to measure and utilize KPIs for CI. PMID:29398722

  3. A Hierarchical structure of key performance indicators for operation management and continuous improvement in production systems.

    PubMed

    Kang, Ningxuan; Zhao, Cong; Li, Jingshan; Horst, John A

    2016-01-01

    Key performance indicators (KPIs) are critical for manufacturing operation management and continuous improvement (CI). In modern manufacturing systems, KPIs are defined as a set of metrics to reflect operation performance, such as efficiency, throughput, availability, from productivity, quality and maintenance perspectives. Through continuous monitoring and measurement of KPIs, meaningful quantification and identification of different aspects of operation activities can be obtained, which enable and direct CI efforts. A set of 34 KPIs has been introduced in ISO 22400. However, the KPIs in a manufacturing system are not independent, and they may have intrinsic mutual relationships. The goal of this paper is to introduce a multi-level structure for identification and analysis of KPIs and their intrinsic relationships in production systems. Specifically, through such a hierarchical structure, we define and layer KPIs into levels of basic KPIs, comprehensive KPIs and their supporting metrics, and use it to investigate the relationships and dependencies between KPIs. Such a study can provide a useful tool for manufacturing engineers and managers to measure and utilize KPIs for CI.
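
    As one concrete example of a comprehensive KPI assembled from basic KPIs in the spirit of the hierarchy described above, the sketch below computes an overall equipment effectiveness (OEE) index as the product of availability, effectiveness and quality rates; the shift values are hypothetical, and the simplified formulas are illustrative rather than a restatement of the ISO 22400 definitions.

    from dataclasses import dataclass

    @dataclass
    class BasicKPIs:
        # Basic (lower-level) KPIs, here with hypothetical shift values.
        actual_production_time: float    # hours the work unit actually produced
        planned_busy_time: float         # hours the work unit was planned to run
        produced_quantity: float         # total units produced
        good_quantity: float             # units meeting quality requirements
        planned_runtime_per_item: float  # hours per unit at rated speed

    def oee(k: BasicKPIs) -> float:
        # Comprehensive KPI assembled from the basic KPIs above.
        availability = k.actual_production_time / k.planned_busy_time
        effectiveness = (k.planned_runtime_per_item * k.produced_quantity) / k.actual_production_time
        quality = k.good_quantity / k.produced_quantity
        return availability * effectiveness * quality

    shift = BasicKPIs(actual_production_time=6.5, planned_busy_time=8.0,
                      produced_quantity=700, good_quantity=665,
                      planned_runtime_per_item=0.008)
    print("OEE = %.3f" % oee(shift))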

  4. A resilience-oriented approach for quantitatively assessing recurrent spatial-temporal congestion on urban roads.

    PubMed

    Tang, Junqing; Heinimann, Hans Rudolf

    2018-01-01

    Traffic congestion brings not only delay and inconvenience, but also associated national concerns, such as greenhouse gases, air pollutants, road safety issues and risks. Identification, measurement, tracking, and control of urban recurrent congestion are vital for building a livable and smart community. A considerable amount of work has contributed to tackling the problem. Several methods, such as time-based approaches and level of service, can be effective for characterizing congestion on urban streets. However, studies with systemic perspectives have played only a minor role in congestion quantification. Resilience, on the other hand, is an emerging concept that focuses on comprehensive systemic performance and characterizes the ability of a system to cope with disturbance and to recover its functionality. In this paper, we treated recurrent congestion as an internal disturbance and proposed a modified metric inspired by the well-applied "R4" resilience-triangle framework. We constructed the metric with generic dimensions from both resilience engineering and transport science to quantify recurrent congestion based on spatial-temporal traffic patterns, and compared it with two other approaches in freeway and signal-controlled arterial cases. Results showed that the metric can effectively capture congestion patterns in the study area and provides a quantitative benchmark for comparison. They also suggested not only good comparative performance in measuring congestion strength, but also the capability of accounting for the discharging process within congestion. Sensitivity tests showed that the proposed metric is robust against parameter perturbation within the Robustness Range (RR), but the number of identified congestion patterns can be influenced by the existence of ϵ. In addition, the Elasticity Threshold (ET) and the spatial dimension of the cell-based platform significantly affect the congestion results, in both the detected number and the intensity. By tackling this conventional problem with an emerging concept, our metric provides a systemic alternative approach and enriches the toolbox for congestion assessment. Future work will be conducted on a larger scale with multiplex scenarios in various traffic conditions.
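
    A minimal sketch of a resilience-triangle-style calculation is given below: the congestion loss is taken as the area between nominal performance and an observed performance curve, normalised by the observation window. This is a generic illustration of the idea, not the paper's modified metric, and the corridor performance values are invented.

    import numpy as np

    def resilience_loss(time_h, performance, nominal=1.0):
        # Area between nominal performance and the observed (degraded) performance curve,
        # integrated with the trapezoid rule and normalised by the observation window.
        time_h = np.asarray(time_h, dtype=float)
        deficit = nominal - np.asarray(performance, dtype=float)
        area = float(np.sum(0.5 * (deficit[1:] + deficit[:-1]) * np.diff(time_h)))
        return area / (time_h[-1] - time_h[0])

    # Hypothetical corridor performance (e.g. speed ratio) over a morning peak.
    t = np.array([6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0])
    p = np.array([1.00, 0.85, 0.55, 0.45, 0.60, 0.85, 1.00])
    print("normalised resilience loss:", resilience_loss(t, p))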

  5. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
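
    The exhaustive-search idea behind such sensor selection studies can be illustrated with a toy merit function. In the sketch below the merit is a stand-in (residual sum of squared estimation errors computed from an assumed per-sensor error-reduction table), not NASA's S4 merit metric, and the sensor names and numbers are hypothetical.

    from itertools import combinations

    # Hypothetical per-sensor contribution to reducing squared estimation error for three
    # health parameters (values are illustrative only).
    ERROR_REDUCTION = {
        "N1 speed":    (0.20, 0.05, 0.02),
        "N2 speed":    (0.05, 0.22, 0.03),
        "EGT":         (0.10, 0.10, 0.15),
        "Fuel flow":   (0.08, 0.06, 0.12),
        "P3 pressure": (0.12, 0.04, 0.10),
    }
    BASELINE_SSE = (0.5, 0.5, 0.5)   # assumed squared error with no optional sensors

    def merit(suite):
        # Stand-in merit: total residual sum of squared estimation errors (lower is better).
        residual = list(BASELINE_SSE)
        for s in suite:
            residual = [max(0.0, r - d) for r, d in zip(residual, ERROR_REDUCTION[s])]
        return sum(residual)

    def best_suite(k):
        # Exhaustive search over all k-sensor suites, in the spirit of an S4-style study.
        return min(combinations(ERROR_REDUCTION, k), key=merit)

    for k in (2, 3):
        suite = best_suite(k)
        print(k, "sensors:", suite, "merit = %.3f" % merit(suite))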

  6. Defense AT and L Magazine. Vol. 46, no. 4, July-August 2017

    DTIC Science & Technology

    2017-07-01

    engineering. He also is a certified Project Management Professional. Supply Chain Management (SCM) has become a vital tool used in today's global economy to...modifying the project management dashboards originally developed by one of the authors, described in "Leveraging Fidelity of Performance-Based Metric...Tools for Project Management," an article in the January-February 2003 issue of Program Manager, the predecessor of Defense AT&L magazine

  7. Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.; Mavris, Dimitri N.

    2006-01-01

    An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
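
    The response-surface-plus-Monte-Carlo step described above can be sketched generically: fit a quadratic response surface equation (RSE) to a handful of design-of-experiments samples of an expensive analysis, then propagate input uncertainty through the cheap surrogate. The "expensive" function, variable ranges and uncertainty levels below are invented stand-ins, not the study's noise-prediction models.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the physics-based analysis: an EPNL-like metric as a
    # function of two normalised design variables.
    def expensive_analysis(x1, x2):
        return 95.0 + 3.0 * x1 - 2.0 * x2 + 1.5 * x1 * x2 + 0.8 * x2**2

    # Design of experiments: sample the design space and fit a quadratic response surface.
    X = rng.uniform(-1.0, 1.0, size=(30, 2))
    y = expensive_analysis(X[:, 0], X[:, 1])

    def quad_features(x1, x2):
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

    coeffs, *_ = np.linalg.lstsq(quad_features(X[:, 0], X[:, 1]), y, rcond=None)

    def rse(x1, x2):
        # Cheap response surface equation used in place of the expensive analysis.
        return quad_features(np.atleast_1d(x1), np.atleast_1d(x2)) @ coeffs

    # Monte Carlo: propagate assumed input uncertainty through the RSE.
    samples = rng.normal(loc=0.0, scale=0.2, size=(100_000, 2))
    noise_levels = rse(samples[:, 0], samples[:, 1])
    print("mean = %.2f, std = %.2f (illustrative units)" % (noise_levels.mean(), noise_levels.std()))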

  8. Calculation and use of an environment's characteristic software metric set

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    Since both cost/quality and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six (51,000 to 112,000 lines) projects. For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.

  9. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    PubMed

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  11. Integrating automated support for a software management cycle into the TAME system

    NASA Technical Reports Server (NTRS)

    Sunazuka, Toshihiko; Basili, Victor R.

    1989-01-01

    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
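
    A minimal sketch of the goal/question/metric structure mentioned above is given below; the goal, questions and metrics are invented examples rather than the actual TAME or SQMAR definitions.

    # Minimal goal/question/metric (GQM) structure; the entries are invented examples.
    gqm = {
        "goal": "Improve reliability of the flight-dynamics subsystem from the developer's viewpoint",
        "questions": [
            {
                "question": "Where are faults introduced and how costly are they to fix?",
                "metrics": ["faults per KLOC by module", "fault correction effort per fault"],
            },
            {
                "question": "Is the development process stable across releases?",
                "metrics": ["requirements change rate", "effort deviation from plan (%)"],
            },
        ],
    }

    def list_metrics(plan):
        # Flatten the GQM tree into the measurement set that must be collected.
        return [m for q in plan["questions"] for m in q["metrics"]]

    print(list_metrics(gqm))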

  12. Temporal development of near-native functional properties and correlations with qMRI in self-assembling fibrocartilage treated with exogenous lysyl oxidase homolog 2.

    PubMed

    Hadidi, Pasha; Cissell, Derek D; Hu, Jerry C; Athanasiou, Kyriacos A

    2017-12-01

    Advances in cartilage tissue engineering have led to constructs with mechanical integrity and biochemical composition increasingly resembling that of native tissues. In particular, collagen cross-linking with lysyl oxidase has been used to significantly enhance the mechanical properties of engineered neotissues. In this study, development of collagen cross-links over time, and correlations with tensile properties, were examined in self-assembling neotissues. Additionally, quantitative MRI metrics were examined in relation to construct mechanical properties as well as pyridinoline cross-link content and other engineered tissue components. Scaffold-free meniscus fibrocartilage was cultured in the presence of exogenous lysyl oxidase, and assessed at multiple time points over 8 weeks starting from the first week of culture. Engineered constructs demonstrated a 9.9-fold increase in pyridinoline content, reaching 77% of native tissue values, after 8 weeks of culture. Additionally, engineered tissues reached 66% of the Young's modulus in the radial direction of native tissues. Further, collagen cross-links were found to correlate with tensile properties, contributing 67% of the tensile strength of engineered neocartilages. Finally, examination of quantitative MRI metrics revealed several correlations with mechanical and biochemical properties of engineered constructs. This study displays the importance of culture duration for collagen cross-link formation, and demonstrates the potential of quantitative MRI in investigating properties of engineered cartilages. This is the first study to demonstrate near-native cross-link content in an engineered tissue, and the first study to quantify pyridinoline cross-link development over time in a self-assembling tissue. Additionally, this work shows the relative contributions of collagen and pyridinoline to the tensile properties of collagenous tissue for the first time. Furthermore, this is the first investigation to identify a relationship between qMRI metrics and the pyridinoline cross-link content of an engineered collagenous tissue. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  13. Understanding Chemistry-Specific Fuel Differences at a Constant RON in a Boosted SI Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szybist, James P.; Splitter, Derek A.

    The goal of the US Department of Energy Co-Optimization of Fuels and Engines (Co-Optima) initiative is to accelerate the development of advanced fuels and engines for higher efficiency and lower emissions. A guiding principle of this initiative is the central fuel properties hypothesis (CFPH), which states that fuel properties provide an indication of a fuel's performance, regardless of its chemical composition. This is an important consideration for Co-Optima because many of the fuels under consideration are from bio-derived sources with chemical compositions that are unconventional relative to petroleum-derived gasoline or ethanol. In this study, we investigated a total of seven fuels in a spark ignition engine under boosted operating conditions to determine whether knock propensity is predicted by fuel antiknock metrics: antiknock index (AKI), research octane number (RON), and octane index (OI). Six of these fuels have a constant RON value but otherwise represent a wide range of fuel properties and chemistry. Consistent with previous studies, we found that OI was a much better predictor of knock propensity than either AKI or RON. However, we also found that there were significant fuel-specific deviations from the OI predictions. Combustion analysis provided insight that fuel kinetic complexities, including the presence of pre-spark heat release, likely limit the ability of standardized tests and metrics to accurately predict knocking tendency at all operating conditions. While limitations of OI were revealed in this study, we found that fuels with unconventional chemistry, in particular esters and ethers, behaved in accordance with CFPH as well as petroleum-derived fuels.

  14. Understanding Chemistry-Specific Fuel Differences at a Constant RON in a Boosted SI Engine

    DOE PAGES

    Szybist, James P.; Splitter, Derek A.

    2018-01-02

    The goal of the US Department of Energy Co-Optimization of Fuels and Engines (Co-Optima) initiative is to accelerate the development of advanced fuels and engines for higher efficiency and lower emissions. A guiding principle of this initiative is the central fuel properties hypothesis (CFPH), which states that fuel properties provide an indication of a fuel's performance, regardless of its chemical composition. This is an important consideration for Co-Optima because many of the fuels under consideration are from bio-derived sources with chemical compositions that are unconventional relative to petroleum-derived gasoline or ethanol. In this study, we investigated a total of seven fuels in a spark ignition engine under boosted operating conditions to determine whether knock propensity is predicted by fuel antiknock metrics: antiknock index (AKI), research octane number (RON), and octane index (OI). Six of these fuels have a constant RON value but otherwise represent a wide range of fuel properties and chemistry. Consistent with previous studies, we found that OI was a much better predictor of knock propensity than either AKI or RON. However, we also found that there were significant fuel-specific deviations from the OI predictions. Combustion analysis provided insight that fuel kinetic complexities, including the presence of pre-spark heat release, likely limit the ability of standardized tests and metrics to accurately predict knocking tendency at all operating conditions. While limitations of OI were revealed in this study, we found that fuels with unconventional chemistry, in particular esters and ethers, behaved in accordance with CFPH as well as petroleum-derived fuels.
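
    The antiknock metrics compared in the two records above have widely used textbook definitions, sketched below as a hedged reference: AKI is the average of RON and MON, fuel sensitivity S = RON - MON, and octane index OI = RON - K*S, where K depends on the engine and operating condition (negative values are typical of boosted, beyond-RON conditions). The K value and fuel numbers in the example are assumptions for illustration only.

    def antiknock_metrics(ron: float, mon: float, k: float = -0.5):
        # AKI = (RON + MON) / 2, sensitivity S = RON - MON, octane index OI = RON - K*S.
        # K is engine- and condition-dependent; the default here is only an assumption.
        aki = 0.5 * (ron + mon)
        s = ron - mon
        oi = ron - k * s
        return {"AKI": aki, "S": s, "OI": oi}

    # Hypothetical fuel with RON 98 and MON 88.
    print(antiknock_metrics(98.0, 88.0))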

  15. Critical evaluation of reverse engineering tool Imagix 4D!

    PubMed

    Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay

    2016-01-01

    Legacy code is difficult to comprehend. Various commercial reengineering tools are available, each with its own working style, capabilities, and shortcomings. The available tools focus on visualizing static behavior rather than dynamic behavior, which hampers people who work in software product maintenance, code understanding, and reengineering/reverse engineering. Consequently, there is a need for a comprehensive reengineering/reverse engineering tool. We found Imagix 4D useful because it generates extensive pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D through a case study involving several samples of source code. The behavior of the tool was analyzed on multiple small code bases and on one large code base, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. Imagix 4D's decision density and complexity metrics for a large code base proved useful for judging how much reengineering is required. However, Imagix 4D showed limitations in dynamic visualization, flow chart separation for large code, and parsing of loops. The outcome of the evaluation should help in upgrading Imagix 4D, and it points to the need for full-featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in building software reengineering tools.

  16. Neutronics Design of a Thorium-Fueled Fission Blanket for LIFE (Laser Inertial Fusion-based Energy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, J; Abbott, R; Fratoni, M

    The Laser Inertial Fusion-based Energy (LIFE) project at LLNL includes development of hybrid fusion-fission systems for energy generation. These hybrid LIFE engines use high-energy neutrons from laser-based inertial confinement fusion to drive a subcritical blanket of fission fuel that surrounds the fusion chamber. The fission blanket contains TRISO fuel particles packed into pebbles in a flowing bed geometry cooled by a molten salt (flibe). LIFE engines using a thorium fuel cycle provide potential improvements in overall fuel cycle performance and resource utilization compared to using depleted uranium (DU) and may minimize waste repository and proliferation concerns. A preliminary engine design with an initial loading of 40 metric tons of thorium can maintain a power level of 2000 MW(th) for about 55 years, at which point the fuel reaches an average burnup level of about 75% FIMA. Acceptable performance was achieved without using any zero-flux environment 'cooling periods' to allow ²³³Pa to decay to ²³³U; thorium undergoes constant irradiation in this LIFE engine design to minimize proliferation risks and fuel inventory. Vast reductions in end-of-life (EOL) transuranic (TRU) inventories compared to those produced by a similar uranium system suggest reduced proliferation risks. Decay heat generation in discharge fuel appears lower for a thorium LIFE engine than for a DU engine, but differences in radioactive ingestion hazard are less conclusive. Future efforts on the development of thorium-fueled LIFE fission blankets will include design optimization, fuel performance analysis, and further waste disposal and nonproliferation analyses.

  17. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.

  18. A Literature Review and Experimental Plan for Research on the Display of Information on Matrix-Addressable Displays.

    DTIC Science & Technology

    1987-02-01

    Factors Laboratory, Department of Industrial Engineering and Operations Research, Virginia Polytechnic Institute & State Univ...Multichromatic Optimum Character Symbolic Research...Quality Metrics Analysis...An analysis of variance was performed on accuracy and response time data. For accuracy data there was a significant

  19. Annual Systems Engineering Conference: Focusing on Improving Performance of Defense Systems Programs (10th). Volume 3. Thursday Presentations

    DTIC Science & Technology

    2007-10-25

    the Phit < .0001 requirement) restricts tactical delivery conditions, the probability of a fragment hit may be further qualified by considering only...Pkill – the UK uses a "self damage" metric • Risk Analysis: "If the above procedures (Phit or Pkill < .0001) still result in restricting tactical delivery...(From NAWCWD Briefing) Safe Escape Analysis Requirements: calculate Phit, Pkill, and Pdet; is Phit <= .0001 for all launch conditions?

  20. Space station definition and preliminary design, WP-01. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Lenda, J. A.

    1987-01-01

    System activities are summarized and an overview of the system level engineering tasks performed are provided. Areas discussed include requirements, system test and verification, the advanced development plan, customer accommodations, software, growth, productivity, operations, product assurance and metrication. The hardware element study results are summarized. Overviews of recommended configurations are provided for the core module, the USL, the logistics elements, the propulsion subsystems, reboost, vehicle accommodations, and the smart front end. A brief overview is provided for costing activities.

  1. Engineering and Technology in Wheelchair Sport.

    PubMed

    Cooper, Rory A; Tuakli-Wosornu, Yetsa A; Henderson, Geoffrey V; Quinby, Eleanor; Dicianno, Brad E; Tsang, Kalai; Ding, Dan; Cooper, Rosemarie; Crytzer, Theresa M; Koontz, Alicia M; Rice, Ian; Bleakney, Adam W

    2018-05-01

    Technologies capable of projecting injury and performance metrics to athletes and coaches are being developed. Wheelchair athletes must be cognizant of their upper limb health; therefore, systems must be designed to promote efficient transfer of energy to the handrims and evaluated for simultaneous effects on the upper limbs. This article is a brief review of resources that help wheelchair users increase physiologic response to exercise, develop ideas for adaptive workout routines, locate accessible facilities and outdoor areas, and develop wheelchair sports-specific skills. Published by Elsevier Inc.

  2. Driver Injury Risk Variability in Finite Element Reconstructions of Crash Injury Research and Engineering Network (CIREN) Frontal Motor Vehicle Crashes.

    PubMed

    Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D

    2015-01-01

    A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2 hemomediastinum. Stress-based metrics were used to predict injury to the lower leg of the Camry case occupant. The regional-level injury metrics evaluated for the Cobalt case occupant indicated a low risk of injury; however, strain-based injury metrics better predicted pulmonary contusion. Approximately 49% of the Cobalt occupant's left lung was contused, though the baseline simulation predicted 40.5% of the lung to be injured. A method to compute injury metrics and risks as functions of precrash occupant position was developed and applied to 2 CIREN MVC FE reconstructions. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position to further understand important factors that lead to more severe MVC injuries.
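
    As a hedged sketch of the Phase II sampling step, a Latin hypercube design over the five occupant-positioning variables can be generated with SciPy as shown below; the variable ranges and units are illustrative assumptions rather than the CIREN case values, while the 120-configuration count follows the abstract.

    import numpy as np
    from scipy.stats import qmc

    # Five occupant-positioning variables with assumed (illustrative) ranges.
    variables = ["seat_track_mm", "seat_back_deg", "d_ring_height_mm",
                 "steer_column_angle_deg", "steer_column_telescope_mm"]
    lower = np.array([-120.0, 15.0, -50.0, 18.0, 0.0])
    upper = np.array([120.0, 35.0, 50.0, 28.0, 40.0])

    sampler = qmc.LatinHypercube(d=len(variables), seed=1)
    unit_samples = sampler.random(n=120)             # 120 precrash configurations
    design = qmc.scale(unit_samples, lower, upper)   # map unit hypercube to physical ranges

    print(design.shape)                              # (120, 5): one row per configuration
    print(dict(zip(variables, design[0].round(1))))  # first sampled configuration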

  3. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human-performed therapy assessment, and it can increase adherence to prescribed therapy plans and reduce healthcare costs.
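
    Two of the model-less quantitative metrics listed above, root-mean-square distance and Kullback-Leibler divergence, can be computed directly from motion sequences. The sketch below does so for an invented elbow-angle trajectory; the histogram-based KL estimate and the synthetic data are illustrative choices, not the paper's exact formulation.

    import numpy as np
    from scipy.stats import entropy

    def rms_distance(patient_seq, reference_seq):
        # Root-mean-square distance between a patient motion sequence and a reference
        # (same length assumed; sequences here are illustrative).
        diff = np.asarray(patient_seq) - np.asarray(reference_seq)
        return float(np.sqrt(np.mean(diff ** 2)))

    def kl_divergence(patient_seq, reference_seq, bins=20):
        # Model-less KL divergence between histograms of the two sequences' values.
        lo = min(np.min(patient_seq), np.min(reference_seq))
        hi = max(np.max(patient_seq), np.max(reference_seq))
        p, _ = np.histogram(patient_seq, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(reference_seq, bins=bins, range=(lo, hi), density=True)
        return float(entropy(p + 1e-9, q + 1e-9))

    t = np.linspace(0.0, 2.0, 200)
    reference = np.sin(np.pi * t)   # idealised elbow-angle trajectory (invented)
    patient = np.sin(np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    print("RMS distance:", rms_distance(patient, reference))
    print("KL divergence:", kl_divergence(patient, reference))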

  4. Data-driven fault detection, isolation and estimation of aircraft gas turbine engine actuator and sensors

    NASA Astrophysics Data System (ADS)

    Naderi, E.; Khorasani, K.

    2018-02-01

    In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring the aircraft gas turbine engine actuator and sensors. The proposed FDI&E filters are directly constructed by using only the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system linear model or its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown through comprehensive case studies simulations that desirable fault detection, isolation and estimation performance metrics defined in terms of the confusion matrix criterion can be achieved by having access to only the frequency response of the system at only a limited number of frequencies.
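
    The core identification step described above, estimating the frequency response from the FFTs of the input and output signals at a limited set of excitation frequencies, can be sketched generically. The first-order "actuator" below is an invented stand-in for the engine I/O data, and the sampling rate and excitation frequencies are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100.0                         # sampling rate, Hz (assumed)
    t = np.arange(0, 20.0, 1.0 / fs)
    excited = [0.5, 1.0, 2.0, 4.0]     # limited set of excitation frequencies, Hz (assumed)

    # Multi-sine input and a hypothetical first-order response standing in for the I/O data.
    u = sum(np.sin(2 * np.pi * f * t) for f in excited)
    tau = 0.2
    y = np.zeros_like(u)
    for k in range(1, len(t)):
        y[k] = y[k - 1] + (1.0 / fs) * (u[k - 1] - y[k - 1]) / tau
    y += 0.01 * rng.normal(size=y.size)

    # Frequency response estimate H(jw) = Y(w)/U(w) at the excited frequencies only.
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
    for f in excited:
        k = np.argmin(np.abs(freqs - f))
        H = Y[k] / U[k]
        print(f"f = {f:4.1f} Hz  |H| = {abs(H):.3f}  phase = {np.degrees(np.angle(H)):6.1f} deg")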

  5. A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells

    PubMed Central

    Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.

    2017-01-01

    Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402

  6. Antenna coupled photonic wire lasers

    DOE PAGES

    Kao, Tsung-Kao; Cai, Xiaowei; Lee, Alan W. M.; ...

    2015-06-22

    Slope efficiency (SE) is an important performance metric for lasers. In conventional semiconductor lasers, SE can be optimized by careful designs of the facet (or the modulation for DFB lasers) dimension and surface. However, photonic wire lasers intrinsically suffer low SE due to their deep sub-wavelength emitting facets. Inspired by microwave engineering techniques, we show a novel method to extract power from wire lasers using monolithically integrated antennas. These integrated antennas significantly increase the effective radiation area, and consequently enhance the power extraction efficiency. When applied to wire lasers at THz frequency, we achieved the highest single-side slope efficiency (~450 mW/A) in pulsed mode for DFB lasers at 4 THz and a ~4x increase in output power at 3 THz compared with a similar structure without antennas. This work demonstrates the versatility of incorporating microwave engineering techniques into laser designs, enabling significant performance enhancements.

  7. Measurement System for Energetic Materials Decomposition

    DTIC Science & Technology

    2015-01-05

    Student Metrics: the number of undergraduates funded by the agreement who received scholarships or fellowships for further studies in science, mathematics, engineering, or technology fields; who graduated during this period; and who will continue to pursue a graduate or Ph.D. degree in science, mathematics, engineering, or technology fields.

  8. Investigation of Tapered Roller Bearing Damage Detection Using Oil Debris Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Krieder, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage to tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests performed by The Timken Company in their Tapered Roller Bearing Health Monitoring Test Rig. Failure progression tests were performed under simulated engine load conditions. Tests were performed on one healthy bearing and three predamaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor was monitored and recorded for the occurrence of debris generated during failure of the bearing. The bearing was removed periodically for inspection throughout the failure progression tests. Results indicate the accumulated oil debris mass is a good predictor of damage on tapered roller bearings. The use of a fuzzy logic model to enable an easily interpreted diagnostic metric was proposed and demonstrated.

  9. New Quality Metrics for Web Search Results

    NASA Astrophysics Data System (ADS)

    Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni

    Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.

  10. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suitable to model objectives were collected, analyzed, and formatted for use with the models; the simulations were run; and the results were collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect the importance not only of reliable hardware but also of operations and corrective maintenance process improvements.

  11. Sustainable water management under future uncertainty with eco-engineering decision scaling

    NASA Astrophysics Data System (ADS)

    Poff, N. Leroy; Brown, Casey M.; Grantham, Theodore E.; Matthews, John H.; Palmer, Margaret A.; Spence, Caitlin M.; Wilby, Robert L.; Haasnoot, Marjolijn; Mendoza, Guillermo F.; Dominique, Kathleen C.; Baeza, Andres

    2016-01-01

    Managing freshwater resources sustainably under future climatic and hydrological uncertainty poses novel challenges. Rehabilitation of ageing infrastructure and construction of new dams are widely viewed as solutions to diminish climate risk, but attaining the broad goal of freshwater sustainability will require expansion of the prevailing water resources management paradigm beyond narrow economic criteria to include socially valued ecosystem functions and services. We introduce a new decision framework, eco-engineering decision scaling (EEDS), that explicitly and quantitatively explores trade-offs in stakeholder-defined engineering and ecological performance metrics across a range of possible management actions under unknown future hydrological and climate states. We illustrate its potential application through a hypothetical case study of the Iowa River, USA. EEDS holds promise as a powerful framework for operationalizing freshwater sustainability under future hydrological uncertainty by fostering collaboration across historically conflicting perspectives of water resource engineering and river conservation ecology to design and operate water infrastructure for social and environmental benefits.

  12. Sustainable water management under future uncertainty with eco-engineering decision scaling

    USGS Publications Warehouse

    Poff, N LeRoy; Brown, Casey M; Grantham, Theodore E.; Matthews, John H; Palmer, Margaret A.; Spence, Caitlin M; Wilby, Robert L.; Haasnoot, Marjolijn; Mendoza, Guillermo F; Dominique, Kathleen C; Baeza, Andres

    2015-01-01

    Managing freshwater resources sustainably under future climatic and hydrological uncertainty poses novel challenges. Rehabilitation of ageing infrastructure and construction of new dams are widely viewed as solutions to diminish climate risk, but attaining the broad goal of freshwater sustainability will require expansion of the prevailing water resources management paradigm beyond narrow economic criteria to include socially valued ecosystem functions and services. We introduce a new decision framework, eco-engineering decision scaling (EEDS), that explicitly and quantitatively explores trade-offs in stakeholder-defined engineering and ecological performance metrics across a range of possible management actions under unknown future hydrological and climate states. We illustrate its potential application through a hypothetical case study of the Iowa River, USA. EEDS holds promise as a powerful framework for operationalizing freshwater sustainability under future hydrological uncertainty by fostering collaboration across historically conflicting perspectives of water resource engineering and river conservation ecology to design and operate water infrastructure for social and environmental benefits.

  13. Systems engineering technology for networks

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The report summarizes research pursued within the Systems Engineering Design Laboratory at Virginia Polytechnic Institute and State University between May 16, 1993 and January 31, 1994. The project was proposed in cooperation with the Computational Science and Engineering Research Center at Howard University. Its purpose was to investigate emerging systems engineering tools and their applicability in analyzing the NASA Network Control Center (NCC) on the basis of metrics and measures.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn D.

    Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogenous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)

  15. Nanomanufacturing-related programs at NSF

    NASA Astrophysics Data System (ADS)

    Cooper, Khershed P.

    2015-08-01

    The National Science Foundation is meeting the challenge of transitioning lab-scale nanoscience and technology to commercial-scale through several nanomanufacturing-related research programs. The goal of the core Nanomanufacturing (NM) and the inter-disciplinary Scalable Nanomanufacturing (SNM) programs is to address the barriers to manufacturability at the nano-scale by developing the fundamental principles for the manufacture of nanomaterials, nanostructures, nanodevices, and engineered nanosystems. These programs address issues such as scalability, reliability, quality, performance, yield, metrics, and cost, among others. The NM and SNM programs seek nano-scale manufacturing ideas that are transformative, that will be widely applicable and that will have far-reaching technological and societal impacts. It is envisioned that the results from these basic research programs will provide the knowledge base for larger programs such as the manufacturing Nanotechnology Science and Engineering Centers (NSECs) and the Nanosystems Engineering Research Centers (NERCs). Besides brief descriptions of these different programs, this paper will include discussions on novel

  16. Moving Metric: Textbooks.

    ERIC Educational Resources Information Center

    Hauck, George F.

    1981-01-01

    Lists engineering textbooks that use SI units. Includes author(s), title, publisher, year, and author's or publisher's comments on the use of the SI units. Books are categorized by topic, such as engineering mechanics, mechanics of materials, fluid mechanics, thermodynamics, structural design, and hydrology. (CS)

  17. Validating the Use of Performance Risk Indices for System-Level Risk and Maturity Assessments

    NASA Astrophysics Data System (ADS)

    Holloman, Sherrica S.

    With pressure on the U.S. Defense Acquisition System (DAS) to reduce cost overruns and schedule delays, system engineers' performance is only as good as their tools. Recent literature details a need for 1) objective, analytical risk quantification methodologies over traditional subjective qualitative methods, such as expert judgment, and 2) mathematically rigorous system-level maturity assessments. The Mahafza, Componation, and Tippett (2005) Technology Performance Risk Index (TPRI) ties the assessment of technical performance to the quantification of risk of unmet performance; however, it is structured for component-level data as input. This study's aim is to establish a modified TPRI with system-level data as model input, and then validate the modified index with actual system-level data from the Department of Defense's (DoD) Major Defense Acquisition Programs (MDAPs). This work's contribution is the establishment and validation of the System-level Performance Risk Index (SPRI). With the introduction of the SPRI, system-level metrics are better aligned, allowing for better assessment, tradeoff and balance of time, performance and cost constraints. This will allow system engineers and program managers to ultimately make better-informed system-level technical decisions throughout the development phase.

  18. Texture metric that predicts target detection performance

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.

    2015-12-01

    Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
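
    A conventional grey level co-occurrence texture feature can be computed with scikit-image as sketched below. This is the standard GLCM "energy" property, a related quantity rather than the paper's GLCE statistic, and the image patches, quantisation level and distances/angles are illustrative assumptions (older scikit-image releases spell the functions greycomatrix/greycoprops).

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def local_glcm_energy(patch, levels=16):
        # Standard grey level co-occurrence "energy" feature for one image patch.
        quantised = (patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(quantised, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return float(graycoprops(glcm, "energy").mean())

    rng = np.random.default_rng(0)
    background = rng.integers(0, 255, size=(64, 64))               # hypothetical clutter patch
    target_area = np.full((64, 64), 180) + rng.integers(0, 20, size=(64, 64))
    print("background energy:", local_glcm_energy(background))
    print("target-area energy:", local_glcm_energy(target_area))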

  19. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control approach is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  20. Biological versus electronic adaptive coloration: how can one inform the other?

    PubMed Central

    Kreit, Eric; Mäthger, Lydia M.; Hanlon, Roger T.; Dennis, Patrick B.; Naik, Rajesh R.; Forsythe, Eric; Heikenfeld, Jason

    2013-01-01

    Adaptive reflective surfaces have been a challenge for both electronic paper (e-paper) and biological organisms. Multiple colours, contrast, polarization, reflectance, diffusivity and texture must all be controlled simultaneously without optical losses in order to fully replicate the appearance of natural surfaces and vividly communicate information. This review merges the frontiers of knowledge for both biological adaptive coloration, with a focus on cephalopods, and synthetic reflective e-paper within a consistent framework of scientific metrics. Currently, the highest performance approach for both nature and technology uses colourant transposition. Three outcomes are envisioned from this review: reflective display engineers may gain new insights from millions of years of natural selection and evolution; biologists will benefit from understanding the types of mechanisms, characterization and metrics used in synthetic reflective e-paper; all scientists will gain a clearer picture of the long-term prospects for capabilities such as adaptive concealment and signalling. PMID:23015522

  1. Metrics for Emitter Selection for Multistatic Synthetic Aperture Radar

    DTIC Science & Technology

    2013-09-01

    Air Force Institute of Technology, Graduate School of Engineering and Management, Department of Electrical and Computer Engineering. Only front-matter fragments of this thesis are indexed here; they reference test scenarios, the weighting of selection criteria, a ratio test, and the clutter-to-noise ratio (CNR).

  2. Correlation of admissions statistics to graduate student success in medical physics

    PubMed Central

    McSpadden, Erin; Rakowski, Joseph; Nalichowski, Adrian; Yudelev, Mark; Snyder, Michael

    2014-01-01

    The purpose of this work is to develop metrics for evaluation of medical physics graduate student performance, assess relationships between success and other quantifiable factors, and determine whether graduate student performance can be accurately predicted by admissions statistics. A cohort of 108 medical physics graduate students from a single institution were rated for performance after matriculation based on final scores in specific courses, first year graduate Grade Point Average (GPA), performance on the program exit exam, performance in oral review sessions, and faculty rating. Admissions statistics including matriculating program (MS vs. PhD); undergraduate degree type, GPA, and country; graduate degree; general and subject GRE scores; traditional vs. nontraditional status; and ranking by admissions committee were evaluated for potential correlation with the performance metrics. GRE verbal and quantitative scores were correlated with higher scores in the most difficult courses in the program and with the program exit exam; however, the GRE section most correlated with overall faculty rating was the analytical writing section. Students with undergraduate degrees in engineering had a higher faculty rating than those from other disciplines and faculty rating was strongly correlated with undergraduate country. Undergraduate GPA was not statistically correlated with any success metrics investigated in this study. However, the high degree of selection on GPA and quantitative GRE scores during the admissions process results in relatively narrow ranges for these quantities. As such, these results do not necessarily imply that one should not strongly consider traditional metrics, such as undergraduate GPA and quantitative GRE score, during the admissions process. They suggest that once applicants have been initially filtered by these metrics, additional selection should be performed via the other metrics shown here to be correlated with success. The parameters used to make admissions decisions for our program are accurate in predicting student success, as illustrated by the very strong statistical correlation between admissions rank and course average, first year graduate GPA, and faculty rating (p<0.002). Overall, this study indicates that an undergraduate degree in physics should not be considered a fundamental requirement for entry into our program and that within the relatively narrow range of undergraduate GPA and quantitative GRE scores of those admitted into our program, additional variations in these metrics are not important predictors of success. While the high degree of selection on particular statistics involved in the admissions process, along with the relatively small sample size, makes it difficult to draw concrete conclusions about the meaning of correlations here, these results suggest that success in medical physics is based on more than quantitative capabilities. Specifically, they indicate that analytical and communication skills play a major role in student success in our program, as well as predicted future success by program faculty members. Finally, this study confirms that our current admissions process is effective in identifying candidates who will be successful in our program and are expected to be successful after graduation, and provides additional insight useful in improving our admissions selection process. PACS number: 01.40.‐d PMID:24423842
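
    The kind of correlation analysis reported above can be reproduced with standard statistical tooling. The sketch below is purely illustrative; the data values and variable names are hypothetical stand-ins, not the study's records.

    ```python
    # Illustrative correlation of an admissions statistic with a performance
    # metric; data and variable names are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_students = 40
    gre_analytical = rng.normal(4.0, 0.5, n_students)           # GRE analytical writing score
    faculty_rating = 0.6 * gre_analytical + rng.normal(0, 0.4, n_students)

    r, p = stats.pearsonr(gre_analytical, faculty_rating)
    print(f"Pearson r = {r:.2f}, p = {p:.3g}")
    # A small p-value (e.g., p < 0.002, as reported for admissions rank) indicates
    # a statistically significant linear association between the two quantities.
    ```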

  3. HealthTrust: a social network approach for retrieving online health videos.

    PubMed

    Fernandez-Luque, Luis; Karlsen, Randi; Melton, Genevieve B

    2012-01-31

    Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Our objective was to explore approaches for extracting metrics of authoritativeness in online health communities and to examine how these metrics correlate positively with content quality. We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers that were therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r₁₀ = .65, P = .02) and a trend toward significance with health consumers (r₇ = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities.
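
    HealthTrust's exact formulation is not reproduced in the abstract; as a hedged illustration of link-analysis reputation over a community graph, the sketch below ranks hypothetical channels with PageRank, which captures the same idea of reputation flowing along community links.

    ```python
    # Minimal sketch of a link-analysis reputation score over a community graph.
    # Not HealthTrust itself; the channel names and edges are hypothetical.
    import networkx as nx

    # Hypothetical "channel A links to / features channel B" edges in a diabetes community
    edges = [
        ("clinic_channel", "patient_blog"),
        ("patient_blog", "clinic_channel"),
        ("diabetes_assoc", "clinic_channel"),
        ("diabetes_assoc", "patient_blog"),
        ("spam_channel", "spam_channel2"),
    ]
    G = nx.DiGraph(edges)

    reputation = nx.pagerank(G, alpha=0.85)      # stationary visit probability as reputation
    for channel, score in sorted(reputation.items(), key=lambda kv: -kv[1]):
        print(f"{channel:16s} {score:.3f}")
    ```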

  4. Real-time performance monitoring and management system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2007-06-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  5. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Although performance metrics typically serve as a proxy for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.

  6. Mapping suitability areas for concentrated solar power plants using remote sensing data

    DOE PAGES

    Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.

    2015-05-14

    The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topography Mission (SRTM) at a resolution of 3 arc seconds (approx. 90 m) for estimating global solar radiation over the study area. Then, we develop a computational model built on a Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS data sets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell under the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a steam turbine and for a parabolic mirror connected to a Stirling engine.
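
    The scoring step described above, an equal-weight algebraic sum of per-cell metric layers, can be sketched directly; the layer names, grid size, and normalization below are assumptions for illustration only.

    ```python
    # Minimal sketch of equal-weight suitability scoring: each grid cell gets an
    # algebraic sum of normalized metric layers. Layers here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    shape = (100, 100)                                   # grid of cells over the study area

    # Hypothetical metric layers, each scaled so that higher means more suitable
    solar_potential = rng.uniform(0, 1, shape)           # normalized solar radiation
    low_population  = 1.0 - rng.uniform(0, 1, shape)     # inverse of population density
    not_federal     = rng.integers(0, 2, shape).astype(float)   # 1 if outside federal land
    far_from_hazard = rng.uniform(0, 1, shape)           # distance-to-hazard, normalized

    layers = [solar_potential, low_population, not_federal, far_from_hazard]
    suitability = np.sum(layers, axis=0)                 # equal weights: plain algebraic sum

    print("best cell (row, col):", np.unravel_index(np.argmax(suitability), shape))
    ```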

  7. Indicators and metrics for the assessment of climate engineering

    NASA Astrophysics Data System (ADS)

    Oschlies, A.; Held, H.; Keller, D.; Keller, K.; Mengis, N.; Quaas, M.; Rickels, W.; Schmidt, H.

    2017-01-01

    Selecting appropriate indicators is essential to aggregate the information provided by climate model outputs into a manageable set of relevant metrics on which assessments of climate engineering (CE) can be based. From all the variables potentially available from climate models, indicators need to be selected that are able to inform scientists and society on the development of the Earth system under CE, as well as on possible impacts and side effects of various ways of deploying CE or not. However, the indicators used so far have been largely identical to those used in climate change assessments and do not visibly reflect the fact that indicators for assessing CE (and thus the metrics composed of these indicators) may be different from those used to assess global warming. Until now, there has been little dedicated effort to identifying specific indicators and metrics for assessing CE. We here propose that such an effort should be facilitated by a more decision-oriented approach and an iterative procedure in close interaction between academia, decision makers, and stakeholders. Specifically, synergies and trade-offs between social objectives reflected by individual indicators, as well as decision-relevant uncertainties, should be considered in the development of metrics, so that society can make informed decisions about climate policy measures in view of the options available, their likely effects and side effects, and the quality of the underlying knowledge base.

  8. Gaining Control and Predictability of Software-Intensive Systems Development and Sustainment

    DTIC Science & Technology

    2015-02-04

    Implementation of the baselines, audits, and technical reviews within an overarching systems engineering process (SEP; Defense Acquisition University) ... warfighters' needs. This management and metrics effort supplements and supports the system's technical development through the baselines, audits, and ... other areas that could be researched and added into the nine-tier model, including software metrics, quality assurance, and software-oriented ...

  9. Performance assessment in brain-computer interface-based augmentative and alternative communication

    PubMed Central

    2013-01-01

    A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
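
    For reference, the widely used Wolpaw-style Information Transfer Rate mentioned as a Level 1 metric can be computed as below; the target count, accuracy, and selection time are hypothetical example values.

    ```python
    # Sketch of a Wolpaw-style Information Transfer Rate: N targets, accuracy P,
    # one selection every T seconds. Example numbers are hypothetical.
    import math

    def itr_bits_per_min(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
        p, n = accuracy, n_targets
        if p >= 1.0:
            bits = math.log2(n)                       # perfect accuracy
        elif p <= 0.0:
            bits = 0.0                                # guard; formula is not meaningful here
        else:
            bits = (math.log2(n)
                    + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * (60.0 / seconds_per_selection)  # bits per selection -> bits per minute

    print(f"{itr_bits_per_min(n_targets=6, accuracy=0.85, seconds_per_selection=4.0):.1f} bits/min")
    ```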

  10. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of increasing quality. For a given problem, different algorithms are observed to produce a variety of final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  11. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
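
    One way to combine validated individual metrics into a single task score, in the spirit of the summary metrics described above, is to standardize each metric against reference data and average the results. The sketch below is a hedged illustration with hypothetical metric names and reference statistics, not the published LapVR equations.

    ```python
    # Hedged sketch of a task summary score built from individual metrics:
    # z-score each metric (flipping sign where lower raw values are better)
    # and average. Metric names and reference statistics are hypothetical.
    import numpy as np

    def summary_score(metrics: dict, reference_stats: dict) -> float:
        """Average z-score across validated metrics; higher = better performance."""
        zs = []
        for name, value in metrics.items():
            mean, std, higher_is_better = reference_stats[name]
            z = (value - mean) / std
            zs.append(z if higher_is_better else -z)
        return float(np.mean(zs))

    # Reference statistics (mean, std, higher_is_better) from a hypothetical cohort
    reference = {
        "time_s":         (120.0, 30.0, False),   # faster is better
        "path_length_cm": (450.0, 90.0, False),   # shorter instrument path is better
        "accuracy_pct":   (80.0, 10.0, True),
    }
    trainee = {"time_s": 95.0, "path_length_cm": 400.0, "accuracy_pct": 88.0}
    print(f"summary score: {summary_score(trainee, reference):+.2f}")
    ```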

  12. Economic Metrics for Commercial Reusable Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)

    2000-01-01

    The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing baseline. Still, economic metrics for technology development in these Programs and projects remain fairly straightforward, being based on reductions in acquisition and operating costs of the Systems. One of the most challenging requirements that NASA levies on its Programs is to plan for the commercialization of the developed technology. Some NASA Programs are created for the express purpose of developing technology for a particular industrial sector, such as aviation or space transportation, in financial partnership with that sector. With industrial investment, another set of goals, constraints and expectations are levied on the technology program. Economic benefit metrics then expand beyond cost and cost savings to include the marketability, profit, and investment return requirements of the private sector. Commercial investment criteria include low risk, potential for high return, and strategic alignment with existing product lines. These corporate criteria derive from top-level strategic plans and investment goals, which rank high among the most proprietary types of information in any business. As a result, top-level economic goals and objectives that industry partners bring to cooperative programs cannot usually be brought into technical processes, such as systems engineering, that are worked collaboratively between Industry and Government. 
In spite of these handicaps, the top-level economic goals and objectives of a joint technology program can be crafted in such a way that they accurately reflect the fiscal benefits from both Industry and Government perspectives. Valid economic metrics can then be designed that can track progress toward these goals and objectives, while maintaining the confidentiality necessary for the competitive process.

  13. A framework for assessing the uncertainty in wave energy delivery to targeted subsurface formations

    NASA Astrophysics Data System (ADS)

    Karve, Pranav M.; Kallivokas, Loukas F.; Manuel, Lance

    2016-02-01

    Stress wave stimulation of geological formations has potential applications in petroleum engineering, hydro-geology, and environmental engineering. The stimulation can be applied using wave sources whose spatio-temporal characteristics are designed to focus the emitted wave energy into the target region. Typically, the design process involves numerical simulations of the underlying wave physics, and assumes a perfect knowledge of the material properties and the overall geometry of the geostructure. In practice, however, precise knowledge of the properties of the geological formations is elusive, and quantification of the reliability of a deterministic approach is crucial for evaluating the technical and economic feasibility of the design. In this article, we discuss a methodology that could be used to quantify the uncertainty in the wave energy delivery. We formulate the wave propagation problem for a two-dimensional, layered, isotropic, elastic solid truncated using hybrid perfectly-matched-layers (PMLs), and containing a target elastic or poroelastic inclusion. We define a wave motion metric to quantify the amount of the delivered wave energy. We then treat the material properties of the layers as random variables, and perform a first-order uncertainty analysis of the formation to compute the probabilities of failure to achieve threshold values of the motion metric. We illustrate the uncertainty quantification procedure using synthetic data.
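
    A simplified, mean-value first-order sketch of the kind of uncertainty analysis described above is given below; the motion-metric model, layer properties, and threshold are hypothetical stand-ins for the paper's wave simulation, and the paper's own first-order formulation may differ in detail.

    ```python
    # Mean-value first-order (FOSM-style) sketch: treat layer properties as random
    # variables, linearize the motion metric about their means, and estimate the
    # probability of failing to reach a threshold. All numbers are hypothetical.
    import numpy as np
    from scipy.stats import norm

    def motion_metric(v1, v2):
        return 0.002 * v1 + 0.001 * v2            # stand-in for the wave simulation

    means = np.array([300.0, 500.0])               # mean layer wave speeds (m/s)
    stds  = np.array([30.0, 60.0])                 # their standard deviations
    threshold = 1.0                                # required motion-metric value

    # Finite-difference gradient of the metric at the mean point
    eps = 1e-3
    grad = np.array([
        (motion_metric(means[0] + eps, means[1]) - motion_metric(means[0] - eps, means[1])) / (2 * eps),
        (motion_metric(means[0], means[1] + eps) - motion_metric(means[0], means[1] - eps)) / (2 * eps),
    ])

    mu_m = motion_metric(*means)                   # first-order mean of the metric
    sigma_m = np.sqrt(np.sum((grad * stds) ** 2))  # first-order std (independent inputs)
    beta = (mu_m - threshold) / sigma_m            # reliability index
    p_fail = norm.cdf(-beta)                       # P(metric < threshold)
    print(f"beta = {beta:.2f}, probability of failure = {p_fail:.3f}")
    ```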

  14. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
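
    The Dice index named above is a set-overlap score; the sketch below shows the computation for a hypothetical identification result against a hypothetical ground-truth plume composition.

    ```python
    # Dice index between the set of chemicals reported by the identifier and the
    # set truly present in the plume. Chemical names are hypothetical.
    def dice_index(identified: set, truth: set) -> float:
        if not identified and not truth:
            return 1.0
        return 2.0 * len(identified & truth) / (len(identified) + len(truth))

    truth = {"SF6", "NH3", "TEP"}
    identified = {"SF6", "NH3", "DMMP"}
    print(f"Dice index: {dice_index(identified, truth):.2f}")   # 2*2/(3+3) = 0.67
    ```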

  15. A Multimetric Approach for Handoff Decision in Heterogeneous Wireless Networks

    NASA Astrophysics Data System (ADS)

    Kustiawan, I.; Purnama, W.

    2018-02-01

    Seamless mobility and service continuity anywhere at any time are important issues in the wireless Internet. This research proposes a scheme to make handoff decisions effectively in heterogeneous wireless networks using a fuzzy system. Our design centers on an inference engine that takes RSS (received signal strength), data rate, network latency, and user preference as strategic determinants. The logic of our engine is realized on the UE (user equipment) side for faster reaction to network dynamics while roaming across different radio access technologies. The fuzzy system handles the four metrics jointly to reach a balanced decision about when to initiate handoff. The performance of our design is evaluated by simulating move-out mobility scenarios. Simulation results show that our scheme outperforms other approaches in terms of reducing unnecessary handoffs.
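
    A heavily simplified sketch of a fuzzy handoff-decision engine in the spirit of the scheme above is shown below, reduced to two of the four inputs (RSS and latency) with triangular membership functions, two rules, and weighted-average defuzzification; the membership bounds and rule set are assumptions, not the paper's design.

    ```python
    # Simplified two-input fuzzy handoff sketch; thresholds and rules are assumptions.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def handoff_urgency(rss_dbm: float, latency_ms: float) -> float:
        """Crisp urgency in [0, 1]; initiate handoff above roughly 0.5."""
        rss_weak   = tri(rss_dbm, -110, -95, -80)
        rss_strong = tri(rss_dbm, -85, -65, -45)
        lat_high   = tri(latency_ms, 80, 150, 220)
        lat_low    = tri(latency_ms, 0, 30, 90)

        # Rule 1: weak RSS OR high latency -> urgency high (output centroid 0.9)
        w_high = max(rss_weak, lat_high)
        # Rule 2: strong RSS AND low latency -> urgency low (output centroid 0.1)
        w_low = min(rss_strong, lat_low)

        if w_high + w_low == 0.0:
            return 0.5                               # no rule fires: stay neutral
        return (0.9 * w_high + 0.1 * w_low) / (w_high + w_low)

    print(f"urgency at -100 dBm, 180 ms: {handoff_urgency(-100, 180):.2f}")
    print(f"urgency at  -60 dBm,  20 ms: {handoff_urgency(-60, 20):.2f}")
    ```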

  16. Nuclear Thermal Propulsion Mars Mission Systems Analysis and Requirements Definition

    NASA Technical Reports Server (NTRS)

    Mulqueen, Jack; Chiroux, Robert C.; Thomas, Dan; Crane, Tracie

    2007-01-01

    This paper describes the Mars transportation vehicle design concepts developed by the Marshall Space Flight Center (MSFC) Advanced Concepts Office. These vehicle design concepts provide an indication of the most demanding and least demanding potential requirements for nuclear thermal propulsion systems for human Mars exploration missions from years 2025 to 2035. Vehicle concept options vary from large "all-up" vehicle configurations that would transport all of the elements for a Mars mission on one vehicle, to "split" mission vehicle configurations that would consist of separate smaller vehicles that would transport cargo elements and human crew elements to Mars separately. Parametric trades and sensitivity studies show NTP stage and engine design options that provide the best balanced set of metrics based on safety, reliability, performance, cost and mission objectives. Trade studies include the sensitivity of vehicle performance to nuclear engine characteristics such as thrust, specific impulse and nuclear reactor type. The associated system requirements are aligned with the NASA Exploration Systems Mission Directorate (ESMD) Reference Mars mission as described in the Exploration Systems Architecture Study (ESAS) report. The focused trade studies include a detailed analysis of nuclear engine radiation shield requirements for human missions and analysis of nuclear thermal engine design options for the ESAS reference mission.

  17. Performance Metrics for Liquid Chromatography-Tandem Mass Spectrometry Systems in Proteomics Analyses*

    PubMed Central

    Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.

    2010-01-01

    A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981

  18. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  19. Simulator of Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Clare, Loren; Jennings, Esther; Gao, Jay; Segui, John; Kwong, Winston

    2005-01-01

    Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) is a suite of software tools that simulates the behaviors of communication networks to be used in space exploration and predicts the performance of established and emerging space communication protocols and services. MACHETE consists of four general software systems: (1) a system for kinematic modeling of planetary and spacecraft motions; (2) a system for characterizing the engineering impact on the bandwidth and reliability of deep-space and in-situ communication links; (3) a system for generating traffic loads and modeling of protocol behaviors and state machines; and (4) a user-interface system for performance-metric visualization. The kinematic-modeling system makes it possible to characterize space link connectivity effects, including occultations and signal losses arising from dynamic slant-range changes and antenna radiation patterns. The link-engineering system also accounts for antenna radiation patterns and other phenomena, including modulations, data rates, coding, noise, and multipath fading. The protocol system utilizes information from the kinematic-modeling and link-engineering systems to simulate operational scenarios of space missions and evaluate overall network performance. In addition, a Communications Effect Server (CES) interface for MACHETE has been developed to facilitate hybrid simulation of space communication networks with actual flight/ground software/hardware embedded in the overall system.

  20. Performance regression manager for large scale systems

    DOEpatents

    Faraj, Daniel A.

    2017-10-17

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
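
    The comparison step the patent describes, reading two normalized output files and comparing metric values, can be sketched as below; the file format, tolerance, and the assumption that larger metric values are better (and baselines nonzero) are illustrative choices, not the patented implementation.

    ```python
    # Minimal sketch: parse two "<metric_name> <value>" files and flag regressions.
    # File format, tolerance, and "higher is better" convention are assumptions.
    from pathlib import Path

    def load_metrics(path: str) -> dict:
        """Parse lines of the form '<metric_name> <value>' into a dict."""
        metrics = {}
        for line in Path(path).read_text().splitlines():
            if line.strip():
                name, value = line.split()
                metrics[name] = float(value)
        return metrics

    def compare(baseline_file: str, current_file: str, tolerance: float = 0.05):
        base, curr = load_metrics(baseline_file), load_metrics(current_file)
        for name in sorted(base.keys() & curr.keys()):
            change = (curr[name] - base[name]) / base[name]   # assumes nonzero baseline
            status = "REGRESSION" if change < -tolerance else "ok"
            print(f"{name:20s} {base[name]:10.3f} -> {curr[name]:10.3f} ({change:+.1%}) {status}")

    # Example call (paths are hypothetical):
    # compare("run_baseline.metrics", "run_current.metrics")
    ```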

  1. Software Reporting Metrics. Revision 2.

    DTIC Science & Technology

    1985-11-01

    MITRE Corporation and ESD. Some of the data has been obtained from Dr. Barry Boehm's Software Engineering Economics (Ref. 1). Thanks are also given to ... data level control management; SP = structured programming. Barry W. Boehm, Software Engineering Economics, © 1981, p. 122; reprinted by permission of ... investigated and implemented in future prototypes. References; for further reading: 1. Boehm, Barry W. Software Engineering Economics; Englewood ...

  2. Robotics Laboratory to Enhance the STEM Research Experience

    DTIC Science & Technology

    2015-04-30

    The Chemistry Program has a student working on the design and development of a Stirling engine, which the student is planning to construct using ... (scale): number of graduating undergraduates funded by a DoD-funded Center of Excellence grant for Education, Research and Engineering; the number of ... engineering or technology fields. Student Metrics: this section only applies to graduating undergraduates supported by this agreement in this reporting ...

  3. Towards the XML schema measurement based on mapping between XML and OO domain

    NASA Astrophysics Data System (ADS)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether already defined object-oriented design metrics can be applied to XML schemas based on predefined mappings. In this paper, basic ideas for this mapping are presented. The mapping is a prerequisite for a future approach to measuring XML schema quality with object-oriented metrics.

  4. Optical performance and metallic absorption in nanoplasmonic systems.

    PubMed

    Arnold, Matthew D; Blaber, Martin G

    2009-03-02

    Optical metrics relating to metallic absorption in representative plasmonic systems are surveyed, with a view to developing heuristics for optimizing performance over a range of applications. We use the real part of the permittivity as the independent variable; consider strengths of particle resonances, resolving power of planar lenses, and guiding lengths of planar waveguides; and compare nearly-free-electron metals including Al, Cu, Ag, Au, Li, Na, and K. Whilst the imaginary part of metal permittivity has a strong damping effect, field distribution is equally important and thus factors including geometry, real permittivity and frequency must be considered when selecting a metal. Al performs well at low permittivities (e.g. sphere resonances, superlenses) whereas Au & Ag only perform well at very negative permittivities (shell and rod resonances, LRSPP). The alkali metals perform well overall but present engineering challenges.

  5. Metric analysis and data validation across FORTRAN projects

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.; Phillips, Tsai-Yun

    1983-01-01

    The desire to predict the effort of developing software, or to explain its quality, has led to the proposal of several metrics. As a step toward validating these metrics, the Software Engineering Laboratory (SEL) has analyzed the software science metrics, cyclomatic complexity, and various standard program measures for their relation to effort (including design through acceptance testing), development errors (both discrete and weighted according to the amount of time to locate and fix), and one another. The data investigated are collected from a production FORTRAN environment and examined across several projects at once, within individual projects, and by reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither software science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.

  6. Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.

    PubMed

    Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C

    2015-06-01

    Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess if our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric, which would calculate the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III® simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective-blinded observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.
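
    The core "zone" idea, the percentage of task time spent inside predefined regions, can be sketched as below; the zone geometry (a single axis-aligned box), sampling assumptions, and data are hypothetical and may differ from the published metric.

    ```python
    # Illustrative in-zone/out-zone computation from sampled instrument-tip positions.
    # Zone bounds and the tracked path are hypothetical.
    import numpy as np

    def in_zone_percentage(positions: np.ndarray, zone_min, zone_max) -> float:
        """positions: (n, 3) array of equally spaced samples of tip position."""
        inside = np.all((positions >= zone_min) & (positions <= zone_max), axis=1)
        return 100.0 * inside.mean()

    rng = np.random.default_rng(3)
    tip_path = rng.uniform(0, 10, size=(500, 3))                    # hypothetical tracked path (cm)
    zone_min, zone_max = np.array([4, 4, 4]), np.array([6, 6, 6])   # hypothetical target zone

    in_pct = in_zone_percentage(tip_path, zone_min, zone_max)
    print(f"in-zone: {in_pct:.1f}%  out-zone: {100 - in_pct:.1f}%")
    ```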

  7. The Validation by Measurement Theory of Proposed Object-Oriented Software Metrics

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    Moving software development into the engineering arena requires controllability, and to control a process, it must be measurable. Measuring the process does no good if the product is not also measured, i.e., being the best at producing an inferior product does not define a quality process. Also, not every number extracted from software development is a valid measurement. A valid measurement only results when we are able to verify that the number is representative of the attribute that we wish to measure. Many proposed software metrics are used by practitioners without these metrics ever having been validated, leading to costly but often useless calculations. Several researchers have bemoaned the lack of scientific precision in much of the published software measurement work and have called for validation of software metrics by measurement theory. This dissertation applies measurement theory to validate fifty proposed object-oriented software metrics.

  8. A bridge role metric model for nodes in software networks.

    PubMed

    Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe

    2014-01-01

    A bridge role metric model is put forward in this paper. Compared with previous metric models, our solution of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model that presents the crucial connectivity of a module or the hub instead of only centrality as in previous metric models is presented. Two previous metric models are described for comparison. In addition, it is obvious that the fitting curve between the results and degrees can well be fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of module design of software systems and is expected to be beneficial to software engineering practices.

  9. A Bridge Role Metric Model for Nodes in Software Networks

    PubMed Central

    Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe

    2014-01-01

    A bridge role metric model is put forward in this paper. Compared with previous metric models, our solution of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model that presents the crucial connectivity of a module or the hub instead of only centrality as in previous metric models is presented. Two previous metric models are described for comparison. In addition, it is obvious that the fitting curve between the results and degrees can well be fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of module design of software systems and is expected to be beneficial to software engineering practices. PMID:25364938

  10. Converting the ISS to an Earth-Moon Transport System Using Nuclear Thermal Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paniagua, John; Maise, George; Powell, James

    2008-01-21

    Using Nuclear Thermal Propulsion (NTP), the International Space Station (ISS) can be placed into a cyclic orbit between the Earth and the Moon for 2-way transport of personnel and supplies to a permanent Moon Base. The ISS cycler orbit has an apogee 470,000 km from Earth, with a period of 13.66 days. Once a month, the ISS would pass close to the Moon, enabling 2-way transport between it and the surface using a lunar shuttle craft. The lunar shuttle craft would land at a desired location on the surface during a flyby and return to the ISS during a later flyby. At Earth perigee 7 days later, at 500 km altitude, there would be 2-way transport between it and Earth's surface using an Earth shuttle craft. The docking Earth shuttle would remain attached to the ISS as it traveled towards the Moon, while personnel and supplies transferred to a lunar shuttle spacecraft that would detach and land at the lunar base when the ISS swung around the Moon. The reverse process would be carried out to return personnel and materials from the Moon to the Earth. The orbital mechanics for the ISS cycler are described in detail. Based on the full-up mass of 400 metric tons for the ISS, an Isp of 900 seconds, and a delta V burn of 3.3 km/sec to establish the orbit, 200 metric tons of liquid H-2 propellant would be required. The 200 metric tons could be stored in 3 tanks, each 8 meters in diameter and 20 meters in length. An assembly of 3 MITEE NTP engines would be used, providing redundancy if an engine were to fail. Two different MITEE design options are described. Option 1 is an 18,000 Newton, 100 MW engine with a thrust-to-weight ratio of 6.6/1; Option 2 is a 180,000 Newton, 1000 MW engine with a thrust-to-weight ratio of 23/1. Burn times to establish the orbit are approximately 1 hour for the large 3-engine assembly and 10 hours for the small 3-engine assembly. Both engines would use W-UO2 cermet fuel at approximately 2750 K, which has demonstrated the capability to operate for at least 50 hours in 2750 K hydrogen with only a minor loss of fuel material. The small engine is favored because of its lower weight. The total system weight of the small 3-engine assembly is approximately 12 metric tons, including engine, controls, pumps, and neutron and gamma shields. After their main thrust operation, the NTP engines would shut down, with periodic successive smaller delta V burns as required to fine-tune the cycler orbit. Radiation dosages to personnel, both during operation and after shutdown, are much smaller than those from the cosmic ray background.
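
    A back-of-envelope check of the quoted propellant figure follows from the Tsiolkovsky rocket equation; treating the 400-metric-ton ISS as the final (post-burn) mass is an assumption, and the paper's 200 t presumably carries additional margin.

    ```python
    # Rocket-equation check of the propellant estimate above.
    import math

    g0 = 9.80665                     # m/s^2
    isp = 900.0                      # s, nuclear thermal engine
    dv = 3.3e3                       # m/s, burn to establish the cycler orbit
    m_final = 400.0                  # metric tons, ISS full-up mass (assumed post-burn)

    mass_ratio = math.exp(dv / (isp * g0))
    m_propellant = m_final * (mass_ratio - 1.0)
    print(f"mass ratio = {mass_ratio:.3f}, propellant ~ {m_propellant:.0f} t of LH2")
    # -> roughly 180 t, consistent with the ~200 t quoted once margins are added.
    ```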

  11. The structural approach to shared knowledge: an application to engineering design teams.

    PubMed

    Avnet, Mark S; Weigel, Annalisa L

    2013-06-01

    We propose a methodology for analyzing shared knowledge in engineering design teams. Whereas prior work has focused on shared knowledge in small teams at a specific point in time, the model presented here is both scalable and dynamic. By quantifying team members' common views of design drivers, we build a network of shared mental models to reveal the structure of shared knowledge at a snapshot in time. Based on a structural comparison of networks at different points in time, a metric of change in shared knowledge is computed. Analysis of survey data from 12 conceptual space mission design sessions reveals a correlation between change in shared knowledge and each of several system attributes, including system development time, system mass, and technological maturity. From these results, we conclude that an early period of learning and consensus building could be beneficial to the design of engineered systems. Although we do not examine team performance directly, we demonstrate that shared knowledge is related to the technical design and thus provide a foundation for improving design products by incorporating the knowledge and thoughts of the engineering design team into the process.

  12. Raman fiberoptic probe for monitoring human tissue engineered oral mucosa constructs

    NASA Astrophysics Data System (ADS)

    Khmaladze, Alexander; Kuo, Shiuhyang; Okagbare, Paul; Marcelo, Cynthia L.; Feinberg, Stephen E.; Morris, Michael D.

    2013-02-01

    In oral and maxillofacial surgery, there is a need for tissue engineered constructs for dental implants and for reconstructions due to trauma, oral cancer, or congenital defects. Non-invasive quality monitoring of tissue engineered constructs during their production and implantation is a required component of any successful tissue engineering technique. We demonstrate the design and application of a Raman spectroscopic probe for rapid and noninvasive monitoring of Ex Vivo Produced Oral Mucosa Equivalent constructs (EVPOMEs). We conducted in vivo studies to identify Raman spectroscopic failure indicators for EVPOMEs (already developed in vitro), and found that Raman spectra of EVPOMEs exposed to thermal stress showed correlation of the band height ratio of CH2 deformation to phenylalanine ring breathing modes, providing a Raman metric to distinguish between viable and nonviable constructs. This is the first step towards the ultimate goal of designing a stand-alone system usable in a clinical setting, in which data processing and analysis will be performed with minimal user intervention, based on already established and tested Raman spectroscopic indicators for EVPOMEs.

  13. Role of Biocatalysis in Sustainable Chemistry.

    PubMed

    Sheldon, Roger A; Woodley, John M

    2018-01-24

    Based on the principles and metrics of green chemistry and sustainable development, biocatalysis is both a green and sustainable technology. This is largely a result of the spectacular advances in molecular biology and biotechnology achieved in the past two decades. Protein engineering has enabled the optimization of existing enzymes and the invention of entirely new biocatalytic reactions that were previously unknown in Nature. It is now eminently feasible to develop enzymatic transformations to fit predefined parameters, resulting in processes that are truly sustainable by design. This approach has successfully been applied, for example, in the industrial synthesis of active pharmaceutical ingredients. In addition to the use of protein engineering, other aspects of biocatalysis engineering, such as substrate, medium, and reactor engineering, can be utilized to improve the efficiency and cost-effectiveness and, hence, the sustainability of biocatalytic reactions. Furthermore, immobilization of an enzyme can improve its stability and enable its reuse multiple times, resulting in better performance and commercial viability. Consequently, biocatalysis is being widely applied in the production of pharmaceuticals and some commodity chemicals. Moreover, its broader application will be further stimulated in the future by the emerging biobased economy.

  14. Minimum Climb to Cruise Noise Trajectories Modeled for the High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1998-01-01

    The proposed U.S. High Speed Civil Transport (HSCT) will revolutionize commercial air travel by providing economical supersonic passenger service to destinations worldwide. Unlike the high-bypass turbofan engines that propel today's subsonic airliners, HSCT engines will have much higher jet exhaust speeds. Jet noise, caused by the turbulent mixing of high-speed exhaust with the surrounding air, poses a significant challenge for HSCT engine designers. To resolve this challenge, engineers have designed advanced mixer-ejector nozzles that reduce HSCT jet noise to airport noise certification levels by entraining and mixing large quantities of ambient air with the engines' jet streams. Although this works well during the first several minutes of flight, far away from the airport, as the HSCT gains speed and climbs, poor ejector inlet recovery and ejector ram drag contribute to poor thrust, making it advantageous to turn off the ejector. Doing so prematurely, however, can cause unacceptable noise levels to propagate to the ground, even when the aircraft is many miles from the airport. This situation lends itself ideally to optimization, where the aircraft trajectory, throttle setting, and ejector setting can be varied (subject to practical aircraft constraints) to minimize the noise propagated to the ground. A method was developed at the NASA Lewis Research Center that employs a variation of the classic energy state approximation: a trajectory analysis technique historically used to minimize climb time or fuel burned in many aircraft problems. To minimize the noise on the ground at any given throttle setting, high aircraft altitudes are desirable; but the HSCT may either climb quickly to high altitudes using a high, noisy throttle setting or climb more slowly at a lower, quieter throttle setting. An optimizer has been programmed into NASA's existing aircraft and noise analysis codes to balance these options by dynamically choosing the best altitude-velocity path and throttle setting history. The noise level standard, or metric, used in the optimizer should be one that accurately reflects the subjective annoyance levels of ground-based observers under the flight path. A variety of noise metrics are available, many of which are practical for airport-vicinity noise certification. Unlike airport noise, however, the HSCT's climb noise will be characterized by relatively low noise levels, long durations, and low-frequency spectra. The noise metrics used in these calculations are based on the recommendations of researchers at the NASA Langley Research Center, who have correlated the flyover noise annoyance levels of actual laboratory subjects with a variety of measurements. Analysis of data from this optimizer has shown that significant reductions in noise may be obtained with trajectory optimization. And since throttling operations are performed in the subsonic portion of the climb path (where thrust is plentiful), only small penalties in HSCT range or fuel performance occur.

  15. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
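
    A toy illustration of the metamodel idea follows: fit a low-order surrogate that maps a few cycle input parameters to an output performance metric, then interpolate cheaply anywhere in the sampled space. The parameter names, ranges, and response below are hypothetical stand-ins for the CFD single-cycle simulations, and a sparse-grid sampler would replace the random sampling used here.

    ```python
    # Toy quadratic surrogate mapping cycle inputs to an output metric; all data
    # are synthetic stand-ins for expensive single-cycle CFD results.
    import numpy as np

    rng = np.random.default_rng(4)
    n_samples = 60
    egr = rng.uniform(0.15, 0.30, n_samples)          # external EGR fraction
    spark = rng.uniform(-30.0, -10.0, n_samples)      # spark timing (deg aTDC)

    # Pretend these came from expensive single-cycle simulations
    heat_release = 900 - 1200 * (egr - 0.2) ** 2 + 2.0 * spark + rng.normal(0, 5, n_samples)

    # Quadratic surrogate: y ~ c0 + c1*egr + c2*spark + c3*egr^2 + c4*spark^2 + c5*egr*spark
    X = np.column_stack([np.ones(n_samples), egr, spark, egr**2, spark**2, egr * spark])
    coeffs, *_ = np.linalg.lstsq(X, heat_release, rcond=None)

    def surrogate(egr_val, spark_val):
        x = np.array([1.0, egr_val, spark_val, egr_val**2, spark_val**2, egr_val * spark_val])
        return float(x @ coeffs)

    print(f"predicted heat release at EGR=0.25, spark=-20: {surrogate(0.25, -20.0):.0f} J")
    ```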

  16. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.

  17. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    PubMed

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is the contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.V.

    This book reports on remedial measures for gas wells and new methods for calculating the position of the stabilized performance curves for gas wells as well as the heating value for natural gases from compositional analyses. In addition, the author includes problem solutions in an appendix and a section showing the relation between the conventional empirical equation and the theoretical performance equation of A.S. Odeh. The author successfully bridges the gap between the results of empirical testing and the theory of unsteady-state flow in porous media. It strengthens the bond between conventional reservoir engineering practices and understanding gas well behavior. Problems listed at the end of each chapter are excellent exercises for practitioners. This book provides information on: Natural Gas Engineering; Properties of natural gas; Application of gas laws to reservoir engineering; Gas measurement; Flow of natural gas in circular pipe and annular conductors; Flow of gas in porous media (a review); Gas well testing; Unsteady-state flow behavior of gas wells; Production forecasting for gas wells; Production decline curves for gas wells; Sizing flow strings for gas wells; Remedial measures for gas wells; Gas sales contracts; and appendices on Compressibility for natural gas, Gas measurement factors, SI metric conversion factors, and Solutions to problems.

  19. Implications of Transitioning from De Facto to Engineered Water Reuse for Power Plant Cooling.

    PubMed

    Barker, Zachary A; Stillwell, Ashlynn S

    2016-05-17

    Thermoelectric power plants demand large quantities of cooling water, and can use alternative sources like treated wastewater (reclaimed water); however, such alternatives generate many uncertainties. De facto water reuse, or the incidental presence of wastewater effluent in a water source, is common at power plants, representing baseline conditions. In many cases, power plants would retrofit open-loop systems to cooling towers to use reclaimed water. To evaluate the feasibility of reclaimed water use, we compared hydrologic and economic conditions at power plants under three scenarios: quantified de facto reuse, de facto reuse with cooling tower retrofits, and modeled engineered reuse conditions. We created a genetic algorithm to estimate costs and model optimal conditions. To assess power plant performance, we evaluated reliability metrics for thermal variances and generation capacity loss as a function of water temperature. Applying our analysis to the greater Chicago area, we observed high de facto reuse for some power plants and substantial costs for retrofitting to use reclaimed water. Conversely, the gains in reliability and performance through engineered reuse with cooling towers outweighed the energy investment in reclaimed water pumping. Our analysis yields quantitative results of reclaimed water feasibility and can inform sustainable management of water and energy.

  20. Rapid Object Detection Systems, Utilising Deep Learning and Unmanned Aerial Systems (uas) for Civil Engineering Applications

    NASA Astrophysics Data System (ADS)

    Griffiths, D.; Boehm, J.

    2018-05-01

    With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m2 segments of railway track. These include two models based on the Faster RCNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds 0.5 and 0.1. The last assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3 %, 83.1 % and 75.6 % of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet and Inception-Resnet respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
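
    The first two accuracy metrics above are Intersection over Union (IoU) thresholds. As a hedged illustration (the box format and threshold check below are assumptions, not the authors' evaluation code), IoU for axis-aligned boxes can be computed as follows:

    ```python
    # Minimal sketch of the Intersection over Union (IoU) metric, assuming
    # axis-aligned boxes given as (x_min, y_min, x_max, y_max).
    def iou(box_a, box_b):
        """Return the Intersection over Union of two axis-aligned bounding boxes."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A detection proposal counts as a true positive at threshold 0.5 (or 0.1) if
    # iou(proposal, ground_truth) >= threshold.
    print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # prints 0.333...
    ```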

  1. 75 FR 7581 - RTO/ISO Performance Metrics; Notice Requesting Comments on RTO/ISO Performance Metrics

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... of staff from all the jurisdictional ISOs/RTOs to develop a set of performance metrics that the ISOs/RTOs will use to report annually to the Commission. Commission staff and representatives from the ISOs...

  2. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  3. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
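
    The comparison step that both records describe can be pictured with a small sketch. The JSON file format, file names, metric name, and regression tolerance below are illustrative assumptions, not the patented implementation:

    ```python
    # Hypothetical sketch of comparing one performance metric across two formatted
    # output files, one per execution instance of a command.
    import json

    def read_metric(path, metric="wall_time_seconds"):
        """Load a predefined-format (here: JSON) output file and return one metric value."""
        with open(path) as f:
            return json.load(f)[metric]

    def compare_runs(first_file, second_file, metric="wall_time_seconds", tolerance=0.05):
        """Return a human-readable indication of how the metric changed between runs."""
        v1 = read_metric(first_file, metric)
        v2 = read_metric(second_file, metric)
        change = (v1 - v2) / v2 if v2 else float("inf")
        status = "REGRESSION" if change > tolerance else "OK"
        return f"{metric}: {v2:.3f} -> {v1:.3f} ({change:+.1%}) [{status}]"

    # Example usage (file names are hypothetical):
    # print(compare_runs("run_new.json", "run_baseline.json"))
    ```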

  4. NASA Conducts First RS-25 Rocket Engine Test of 2015

    NASA Image and Video Library

    2015-01-09

    From the Press Release: The new year is off to a hot start for NASA's Space Launch System (SLS). The engine that will drive America's next great rocket to deep space blazed through its first successful test Jan. 9 at the agency's Stennis Space Center near Bay St. Louis, Mississippi. The RS-25, formerly the space shuttle main engine, fired up for 500 seconds on the A-1 test stand at Stennis, providing NASA engineers critical data on the engine controller unit and inlet pressure conditions. This is the first hot fire of an RS-25 engine since the end of space shuttle main engine testing in 2009. Four RS-25 engines will power SLS on future missions, including to an asteroid and Mars. "We’ve made modifications to the RS-25 to meet SLS specifications and will analyze and test a variety of conditions during the hot fire series,” said Steve Wofford, manager of the SLS Liquid Engines Office at NASA's Marshall Space Flight Center in Huntsville, Alabama, where the SLS Program is managed. "The engines for SLS will encounter colder liquid oxygen temperatures than shuttle; greater inlet pressure due to the taller core stage liquid oxygen tank and higher vehicle acceleration; and more nozzle heating due to the four-engine configuration and their position in-plane with the SLS booster exhaust nozzles.” The engine controller unit, the "brain" of the engine, allows communication between the vehicle and the engine, relaying commands to the engine and transmitting data back to the vehicle. The controller also provides closed-loop management of the engine by regulating the thrust and fuel mixture ratio while monitoring the engine's health and status. The new controller will use updated hardware and software configured to operate with the new SLS avionics architecture. "This first hot-fire test of the RS-25 engine represents a significant effort on behalf of Stennis Space Center’s A-1 test team," said Ronald Rigney, RS-25 project manager at Stennis. "Our technicians and engineers have been working diligently to design, modify and activate an extremely complex and capable facility in support of RS-25 engine testing." Testing will resume in April after upgrades are completed on the high pressure industrial water system, which provides cool water for the test facility during a hot fire test. Eight tests, totaling 3,500 seconds, are planned for the current development engine. Another development engine later will undergo 10 tests, totaling 4,500 seconds. The second test series includes the first test of new flight controllers, known as green running. The first flight test of the SLS will feature a configuration for a 70-metric-ton (77-ton) lift capacity and carry an uncrewed Orion spacecraft beyond low-Earth orbit to test the performance of the integrated system. As the SLS is upgraded, it will provide an unprecedented lift capability of 130 metric tons (143 tons) to enable missions even farther into our solar system.

  5. A Validation Metrics Framework for Safety-Critical Software-Intensive Systems

    DTIC Science & Technology

    2009-03-01

    ...so does its definition, tools, and techniques, including means for measuring the validation activity, its outputs, and impact on development... independent of the SDLP. When considering the above SDLPs from the safety engineering team's perspective, there are also large impacts on the way... Interpretation of any actionable metric data will need to be undertaken in the context of the SDLP.

  6. Analysis of Turbofan Design Options for an Advanced Single-Aisle Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Guynn, Mark D.; Berton, Jeffrey J.; Fisher, Kenneth L.; Haller, William J.; Tong, Michael T.; Thurman, Douglas R.

    2009-01-01

    The desire for higher engine efficiency has resulted in the evolution of aircraft gas turbine engines from turbojets, to low bypass ratio, first generation turbofans, to today's high bypass ratio turbofans. It is possible that future designs will continue this trend, leading to very-high or ultra-high bypass ratio (UHB) engines. Although increased bypass ratio has clear benefits in terms of propulsion system metrics such as specific fuel consumption, these benefits may not translate into aircraft system level benefits due to integration penalties. In this study, the design trade space for advanced turbofan engines applied to a single-aisle transport (737/A320 class aircraft) is explored. The benefits of increased bypass ratio and associated enabling technologies such as geared fan drive are found to depend on the primary metrics of interest. For example, bypass ratios at which fuel consumption is minimized may not require geared fan technology. However, geared fan drive does enable higher bypass ratio designs which result in lower noise. Regardless of the engine architecture chosen, the results of this study indicate the potential for the advanced aircraft to realize substantial improvements in fuel efficiency, emissions, and noise compared to the current vehicles in this size class.

  7. Fuel Effects on Ignition and Their Impact on Advanced Combustion Engines (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.; Li, H.; Neill, S.

    The objective of this report is to develop a pathway to use easily measured ignition properties as metrics for characterizing fuels in advanced combustion engine research and to correlate IQT™-measured parameters with engine data. In HCCI engines, ignition timing depends on the reaction rates throughout the compression stroke: there is a need to understand sensitivity to T, P, and [O2]; to rank fuels based on more than one set of conditions; and to understand how fuel composition (molecular species) affects ignition properties.

  8. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, while at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
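
    From the definition above, a hedged sketch of the power metric at a given cutoff might look like the following (the score lists and cutoff are made-up example data):

    ```python
    # Minimal sketch of the power metric as defined above: true positive rate divided
    # by the sum of the true and false positive rates at a chosen score cutoff.
    def power_metric(scores_active, scores_inactive, cutoff):
        """Compute TPR / (TPR + FPR) for a given score cutoff (higher score = predicted active)."""
        tpr = sum(s >= cutoff for s in scores_active) / len(scores_active)
        fpr = sum(s >= cutoff for s in scores_inactive) / len(scores_inactive)
        denom = tpr + fpr
        return tpr / denom if denom > 0 else 0.0

    # Example: hypothetical model scores for known actives and inactives.
    actives = [0.9, 0.8, 0.75, 0.4]
    inactives = [0.6, 0.3, 0.2, 0.1, 0.05]
    print(power_metric(actives, inactives, cutoff=0.5))  # 0.75 / (0.75 + 0.2) ≈ 0.79
    ```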

  9. Uncooperative target-in-the-loop performance with backscattered speckle-field effects

    NASA Astrophysics Data System (ADS)

    Kansky, Jan E.; Murphy, Daniel V.

    2007-09-01

    Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range in which the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.

  10. Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2011-10-01

    Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
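
    As a concrete illustration of how the choice of metric can reshape the apparent attractiveness of the same system, the hedged sketch below computes simple payback time and NPV for hypothetical cash flows (none of the numbers come from the report):

    ```python
    # Illustrative sketch of two of the economic performance metrics discussed above:
    # simple payback time and net present value (NPV). All cash-flow values are hypothetical.
    def simple_payback(installed_cost, annual_savings):
        """Years until cumulative savings equal the up-front cost (ignores financing)."""
        return installed_cost / annual_savings

    def npv(installed_cost, annual_savings, discount_rate, years):
        """Net present value of the PV investment over its analysis period."""
        pv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
        return pv_savings - installed_cost

    cost, savings = 15000.0, 1200.0          # hypothetical system cost and yearly bill savings
    print(simple_payback(cost, savings))     # 12.5 years, unaffected by financing terms
    print(npv(cost, savings, 0.04, 25))      # positive at a 4% discount rate
    print(npv(cost, savings, 0.08, 25))      # turns negative at a higher discount rate
    ```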

  11. An exploratory survey of methods used to develop measures of performance

    NASA Astrophysics Data System (ADS)

    Hamner, Kenneth L.; Lafleur, Charles A.

    1993-09-01

    Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metrics development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method and that should be more likely to produce high-quality metrics that will result in continuous process improvement.

  12. Specification and implementation of IFC based performance metrics to support building life cycle assessment of hybrid energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrissey, Elmer; O'Donnell, James; Keane, Marcus

    2004-03-29

    Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevik, James; Pamminger, Michael; Wallner, Thomas

    Interest in natural gas as an alternative fuel source to petroleum fuels for light-duty vehicle applications has increased due to its domestic availability and stable price compared to gasoline. With its higher hydrogen-to-carbon ratio, natural gas has the potential to reduce engine-out emissions of carbon dioxide, a strong greenhouse gas. For part-load conditions, the lower flame speeds of natural gas can lead to an increased duration in the inflammation process with traditional port-injection. Direct-injection of natural gas can increase in-cylinder turbulence and has the potential to reduce problems typically associated with port-injection of natural gas, such as lower flame speeds and poor dilution tolerance. A study was designed and executed to investigate the effects of direct-injection of natural gas at part-load conditions. Steady-state tests were performed on a single-cylinder research engine representative of current gasoline direct-injection engines. Tests were performed with direct-injection in the central and side location. The start of injection was varied under stoichiometric conditions in order to study the effects on the mixture formation process. In addition, exhaust gas recirculation was introduced at select conditions in order to investigate the dilution tolerance. Relevant combustion metrics were then analyzed for each scenario. Experimental results suggest that regardless of the injector location, varying the start of injection has a strong impact on the mixture formation process. Delaying the start of injection from 300 to 120°CA BTDC can reduce the early flame development process by nearly 15°CA. While injecting into the cylinder after the intake valves have closed has been shown to produce the fastest combustion process, this does not necessarily lead to the highest efficiency, due to increases in pumping and wall heat losses. When comparing the two injection configurations, the side location shows the best performance in terms of combustion metrics and efficiencies. For both systems, part-load dilution tolerance is affected by the injection timing, due to the induced turbulence from the gaseous injection event. CFD simulation results have shown that there is a fundamental difference in how the two injection locations affect the mixture formation process. Delayed injection timing increases the turbulence level in the cylinder at the time of the spark, but reduces the available time for proper mixing. Side injection delivers a gaseous jet that interacts more effectively with the intake induced flow field, and this improves the engine performance in terms of efficiency.

  14. Electrochemical Positioning of Ordered Nanostructures

    DTIC Science & Technology

    2016-04-26


  15. Electric Propulsion Performance from Geo-transfer to Geosynchronous Orbits

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.; Carpenter, Christian B.

    2007-01-01

    For near-Earth application, solar electric propulsion advocates have focused on Low Earth Orbit (LEO) to Geosynchronous (GEO) low-thrust transfers because of the significant improvement in capability over chemical alternatives. While the performance gain attained from starting with a lower orbit is large, there are also increased transfer times and radiation exposure risks that have hindered commercial advocacy for electric propulsion stages. An incremental step towards electric propulsion stages is the use of integrated solar electric propulsion systems (SEPS) for GTO to GEO transfer. Thorough analyses of electric propulsion system options and performance are presented. Results are based on existing or near-term capabilities of Arcjets, Hall thrusters, and Gridded Ion engines. Parametric analyses based on "rubber" thruster and launch site metrics are also provided.

  16. Engine Concept Study for an Advanced Single-Aisle Transport

    NASA Technical Reports Server (NTRS)

    Guynn, Mark D.; Berton, Jeffrey J.; Fisher, Kenneth L.; Haller, William J.; Tong, Michael; Thurman, Douglas R.

    2009-01-01

    The desire for higher engine efficiency has resulted in the evolution of aircraft gas turbine engines from turbojets, to low bypass ratio, first generation turbofans, to today's high bypass ratio turbofans. Although increased bypass ratio has clear benefits in terms of propulsion system metrics such as specific fuel consumption, these benefits may not translate into aircraft system level benefits due to integration penalties. In this study, the design trade space for advanced turbofan engines applied to a single aisle transport (737/A320 class aircraft) is explored. The benefits of increased bypass ratio and associated enabling technologies such as geared fan drive are found to depend on the primary metrics of interest. For example, bypass ratios at which mission fuel consumption is minimized may not require geared fan technology. However, geared fan drive does enable higher bypass ratio designs which result in lower noise. The results of this study indicate the potential for the advanced aircraft to realize substantial improvements in fuel efficiency, emissions, and noise compared to the current vehicles in this size class.

  17. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
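
    The kind of evaluation metrics such a benchmark reports can be sketched simply; the example below is illustrative only and is not the TTCP benchmark's actual scoring code (the labels and example data are assumptions):

    ```python
    # Illustrative sketch of typical gas path diagnostic evaluation metrics:
    # detection rate, false alarm rate, and correct classification rate over a set
    # of labeled engine cases.
    def diagnostic_metrics(true_labels, predicted_labels, no_fault="none"):
        """Labels are fault names per case; `no_fault` marks fault-free cases."""
        faulted = [(t, p) for t, p in zip(true_labels, predicted_labels) if t != no_fault]
        healthy = [(t, p) for t, p in zip(true_labels, predicted_labels) if t == no_fault]
        detection_rate = sum(p != no_fault for _, p in faulted) / len(faulted)
        false_alarm_rate = sum(p != no_fault for _, p in healthy) / len(healthy)
        correct_classification = sum(t == p for t, p in faulted) / len(faulted)
        return detection_rate, false_alarm_rate, correct_classification

    truth = ["none", "sensor", "actuator", "component", "none", "sensor"]
    pred  = ["none", "sensor", "component", "component", "sensor", "none"]
    print(diagnostic_metrics(truth, pred))  # (0.75, 0.5, 0.5)
    ```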

  18. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
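
    To make the weighted-residual idea concrete, here is a heavily simplified sketch. The toy, non-singular BVP, the polynomial bubble term, and the plain random search standing in for particle swarm optimisation, the water cycle algorithm, or harmony search are all assumptions for illustration:

    ```python
    # Hedged sketch: minimise a weighted residual function for the BVP y'' + y = 0,
    # y(0) = 0, y(1) = sin(1) (exact solution y = sin x) with a trial function that
    # satisfies the boundary conditions exactly, using a plain random-search metaheuristic.
    import math, random

    xs = [i / 200 for i in range(1, 200)]             # interior collocation points
    h = 1 / 200

    def trial(c, x):
        # Boundary-satisfying trial function plus an adjustable bubble term.
        return math.sin(1.0) * x + x * (1 - x) * (c[0] + c[1] * x)

    def residual(c):
        # Mean squared value of y'' + y over the collocation points,
        # with y'' estimated by central finite differences.
        total = 0.0
        for x in xs:
            ypp = (trial(c, x + h) - 2 * trial(c, x) + trial(c, x - h)) / h ** 2
            total += (ypp + trial(c, x)) ** 2
        return total / len(xs)

    best, best_err = [0.0, 0.0], residual([0.0, 0.0])
    for _ in range(5000):                             # simple random-search metaheuristic
        cand = [b + random.gauss(0, 0.05) for b in best]
        err = residual(cand)
        if err < best_err:
            best, best_err = cand, err

    print(best, best_err)                             # coefficients approximating sin(x)
    print(abs(trial(best, 0.5) - math.sin(0.5)))      # small pointwise error at x = 0.5
    ```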

  19. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under the conditions of low-delay real-time coding in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4 and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.

  20. Assessing Spontaneous Combustion Instability with Recurrence Quantification Analysis

    NASA Technical Reports Server (NTRS)

    Eberhart, Chad J.; Casiano, Matthew J.

    2016-01-01

    Spontaneous instabilities can pose a significant challenge to verification of combustion stability, and characterizing its onset is an important avenue of improvement for stability assessments of liquid propellant rocket engines. Recurrence Quantification Analysis (RQA) is used here to explore nonlinear combustion dynamics that might give insight into instability. Multiple types of patterns representative of different dynamical states are identified within fluctuating chamber pressure data, and markers for impending instability are found. A class of metrics which describe these patterns is also calculated. RQA metrics are compared with and interpreted against another metric from nonlinear time series analysis, the Hurst exponent, to help better distinguish between stable and unstable operation.
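
    The simplest RQA quantity, the recurrence rate, can be sketched in a few lines; the one-dimensional embedding and synthetic signal below are illustrative assumptions, not the chamber-pressure analysis used in the paper:

    ```python
    # Illustrative sketch of one basic RQA metric, the recurrence rate: the fraction
    # of point pairs in a time series whose states fall within a distance threshold.
    import math, random

    def recurrence_rate(series, threshold):
        """Fraction of (i, j) pairs with |x_i - x_j| <= threshold (1-D embedding assumed)."""
        n = len(series)
        recurrent = sum(
            abs(series[i] - series[j]) <= threshold
            for i in range(n) for j in range(n)
        )
        return recurrent / (n * n)

    # Example with a synthetic fluctuating-pressure-like signal.
    signal = [math.sin(0.3 * t) + 0.1 * random.random() for t in range(200)]
    print(recurrence_rate(signal, threshold=0.2))
    ```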

  1. Devising tissue ingrowth metrics: a contribution to the computational characterization of engineered soft tissue healing.

    PubMed

    Alves, Antoine; Attik, Nina; Bayon, Yves; Royet, Elodie; Wirth, Carine; Bourges, Xavier; Piat, Alexis; Dolmazon, Gaëlle; Clermont, Gaëlle; Boutrand, Jean-Pierre; Grosgogeat, Brigitte; Gritsch, Kerstin

    2018-03-14

    The paradigm shift brought about by the expansion of tissue engineering and regenerative medicine away from the use of biomaterials currently questions the value of histopathologic methods in the evaluation of biological changes. To date, the available tools of evaluation are not fully consistent and satisfactory for these advanced therapies. We have developed a new, simple and inexpensive quantitative digital approach that provides key metrics for structural and compositional characterization of the regenerated tissues. For example, the metrics provide the tissue ingrowth rate (TIR), which integrates two separate indicators: the cell ingrowth rate (CIR) and the total collagen content (TCC), as featured in the equation TIR% = CIR% + TCC%. Moreover, a subset of quantitative indicators describing the directional organization of the collagen (relating structure and mechanical function of tissues), the ratio of collagen I to collagen III (remodeling quality) and the optical anisotropy property of the collagen (maturity indicator) was automatically assessed as well. Using an image analyzer, all metrics were extracted from only two serial sections stained with either Feulgen & Rossenbeck (cell specific) or Picrosirius Red F3BA (collagen specific). To validate this new procedure, three-dimensional (3D) scaffolds were intraperitoneally implanted in healthy and in diabetic rats. It was hypothesized that quantitatively, the healing tissue would be significantly delayed and of poor quality in diabetic rats in comparison to healthy rats. In addition, a chemically modified 3D scaffold was similarly implanted in a third group of healthy rats with the assumption that modulation of the ingrown tissue would be quantitatively present in comparison to the 3D scaffold-healthy group. After 21 days of implantation, both hypotheses were verified by use of this novel computerized approach. When the two methods were run in parallel, the quantitative results revealed fine details and differences not detected by the semi-quantitative assessment, demonstrating the importance of quantitative analysis in the performance evaluation of soft tissue healing. This automated and supervised method reduced operator dependency and proved to be simple, sensitive, cost-effective and time-effective. It supports objective therapeutic comparisons and helps to elucidate regeneration and the dynamics of a functional tissue.

  2. Accounting for regional variation in both natural environment and human disturbance to improve performance of multimetric indices of lotic benthic diatoms.

    PubMed

    Tang, Tao; Stevenson, R Jan; Infante, Dana M

    2016-10-15

    Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  4. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  5. HealthTrust: A Social Network Approach for Retrieving Online Health Videos

    PubMed Central

    Karlsen, Randi; Melton, Genevieve B

    2012-01-01

    Background Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Objectives To explore approaches for extracting metrics about authoritativeness in online health communities and how these metrics positively correlate with the quality of the content. Methods We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. Results HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, and therefore not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r(10) = .65, P = .02) and a trend toward significance with health consumers (r(7) = .65, P = .06) with videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. Conclusions The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities. PMID:22356723

  6. SURF: Taking Sustainable Remediation from Concept to Standard Operating Procedure (Invited)

    NASA Astrophysics Data System (ADS)

    Smith, L. M.; Wice, R. B.; Torrens, J.

    2013-12-01

    Over the last decade, many sectors of industrialized society have been rethinking behavior and re-engineering practices to reduce consumption of energy and natural resources. During this time, green and sustainable remediation (GSR) has evolved from conceptual discussions to standard operating procedure for many environmental remediation practitioners. Government agencies and private sector entities have incorporated GSR metrics into their performance criteria and contracting documents. One of the early think tanks for the development of GSR was the Sustainable Remediation Forum (SURF). SURF brings together representatives of government, industry, consultancy, and academia to parse the means and ends of incorporating societal and economic considerations into environmental cleanup projects. Faced with decades-old treatment programs with high energy outputs and no endpoints in sight, a small group of individuals published the institutional knowledge gathered in two years of ad hoc meetings into a 2009 White Paper on sustainable remediation drivers, practices, objectives, and case studies. Since then, SURF has expanded on those introductory topics, publishing its Framework for Integrating Sustainability into Remediation Projects, Guidance for Performing Footprint Analyses and Life-Cycle Assessments for the Remediation Industry, a compendium of metrics, and a call to improve the integration of land remediation and reuse. SURF's research and members have also been instrumental in the development of additional guidance through ASTM International and the Interstate Technology and Regulatory Council. SURF's current efforts focus on water reuse, the international perspective on GSR (continuing the conversations that were the basis of SURF's December 2012 meeting at the National Academy of Sciences in Washington, DC), and ways to capture and evaluate the societal benefits of site remediation. SURF also promotes and supports student chapters at universities across the US, encouraging the incorporation of sustainability concepts into environmental science and engineering in undergraduate curricula and graduate research, and student participation at professional conferences. This presentation will provide an overview of the evolution of GSR to-date and a history of SURF's technical and outreach work. Examples will be provided--using both qualitative and quantitative metrics--that document and support the benefits of GSR.

  7. Mineral resource of the month: rhenium

    USGS Publications Warehouse

    Polyak, Désirée E.

    2012-01-01

    Rhenium, a silvery-white, heat-resistant metal, has increased significantly in importance since its discovery in 1925. First isolated by a team of German chemists studying platinum ore, the element was named for the Rhine River. From 1925 until the 1960s, only two metric tons of rhenium were produced worldwide. Since then, its uses have steadily increased, including everything from unleaded gasoline to jet engines, and worldwide annual production now tops 45 metric tons.

  8. Software Management Metrics

    DTIC Science & Technology

    1988-05-01

    obtained from Dr. Barry Boehm's Software Engineering Economics [1] and from the ESD/MITRE Software Center (Contract No. F19628-86-C-0001). References: Boehm, Barry W., Software Engineering Economics, Englewood Cliffs, N.J.; Halstead, M. H., Elements of Software Science, New York; Beizer, B., Software System Testing and Quality Assurance, New York: Van Nostrand; Pressman, Roger S., Software Engineering.

  9. Grading the Metrics: Performance-Based Funding in the Florida State University System

    ERIC Educational Resources Information Center

    Cornelius, Luke M.; Cavanaugh, Terence W.

    2016-01-01

    A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…

  10. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination.

    PubMed

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-05-01

    The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.

  11. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  12. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
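
    The study's contribution is combining multiple metrics through multi-objective optimization; the hedged sketch below shows only the simpler single-metric weighting it builds on, with synthetic data standing in for model output and observations:

    ```python
    # Hedged sketch of a performance-weighted multi-model ensemble: weights are
    # inversely proportional to each model's RMSE against observations. Data are synthetic.
    import numpy as np

    def skill_weights(model_fields, obs_field):
        """Return normalized weights, larger for models with smaller RMSE vs. observations."""
        rmse = np.array([np.sqrt(np.mean((m - obs_field) ** 2)) for m in model_fields])
        inv = 1.0 / rmse                      # weight inversely proportional to error
        return inv / inv.sum()

    def weighted_ensemble(model_fields, weights):
        """Weighted ensemble mean over the model dimension."""
        stacked = np.stack(model_fields)      # shape: (n_models, ...)
        return np.tensordot(weights, stacked, axes=1)

    # Synthetic example: three "models" of a 1-D field and one "observation".
    obs = np.linspace(0, 1, 50)
    models = [obs + np.random.normal(0, s, 50) for s in (0.05, 0.1, 0.3)]
    w = skill_weights(models, obs)
    ens = weighted_ensemble(models, w)
    print(w, np.sqrt(np.mean((ens - obs) ** 2)))  # best model gets the largest weight
    ```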

  13. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
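
    The suggested benefit/cost form, SVM/[ESM + function(TRL)], can be sketched as follows; the weighting factors, scaling constants, and TRL penalty function are hypothetical placeholders, not values proposed in the paper:

    ```python
    # Minimal sketch of the suggested overall metric, SVM / (ESM + f(TRL)), with
    # hypothetical weights, scaling, and TRL penalty.
    def system_value_metric(scores, weights):
        """Weighted sum of benefit factors (safety, reliability, performance, ...), each scored 0-1."""
        return sum(weights[k] * scores[k] for k in scores)

    def overall_metric(svm, esm_kg, trl, esm_scale=1000.0, trl_penalty=300.0):
        """Benefit/cost ratio: higher SVM, lower ESM, and higher TRL all improve the score."""
        cost = esm_kg / esm_scale + trl_penalty / esm_scale * (9 - trl)  # f(TRL) shrinks as TRL rises
        return svm / cost

    scores = {"safety": 0.9, "reliability": 0.8, "performance": 0.7, "maintainability": 0.6}
    weights = {"safety": 0.4, "reliability": 0.3, "performance": 0.2, "maintainability": 0.1}
    svm = system_value_metric(scores, weights)
    print(overall_metric(svm, esm_kg=2500.0, trl=6))   # candidate A
    print(overall_metric(svm, esm_kg=2000.0, trl=4))   # candidate B: lighter but less mature
    ```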

  14. Space Launch System Accelerated Booster Development Cycle

    NASA Technical Reports Server (NTRS)

    Arockiam, Nicole; Whittecar, William; Edwards, Stephen

    2012-01-01

    With the retirement of the Space Shuttle, NASA is seeking to reinvigorate the national space program and recapture the public's interest in human space exploration by developing missions to the Moon, near-earth asteroids, Lagrange points, Mars, and beyond. The would-be successor to the Space Shuttle, NASA's Constellation Program, planned to take humans back to the Moon by 2020, but due to budgetary constraints was cancelled in 2010 in search of a more "affordable, sustainable, and realistic" concept. Following a number of studies, the much anticipated Space Launch System (SLS) was unveiled in September of 2011. The SLS core architecture consists of a cryogenic first stage with five Space Shuttle Main Engines (SSMEs), and a cryogenic second stage using a new J-2X engine. The baseline configuration employs two 5-segment solid rocket boosters to achieve a 70 metric ton payload capability, but a new, more capable booster system will be required to attain the goal of 130 metric tons to orbit. To this end, NASA's Marshall Space Flight Center recently released a NASA Research Announcement (NRA) entitled "Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction." The increased emphasis on affordability is evident in the language used in the NRA, which is focused on risk reduction "leading to an affordable Advanced Booster that meets the evolved capabilities of SLS" and "enabling competition" to "enhance SLS affordability." The purpose of the work presented in this paper is to perform an independent assessment of the elements that make up an affordable and realistic path forward for the SLS booster system, utilizing advanced design methods and technology evaluation techniques. The goal is to identify elements that will enable a more sustainable development program by exploring the trade space of heavy lift booster systems and focusing on affordability, operability, and reliability at the system and subsystem levels. For this study, affordability is defined as lifecycle cost, which includes design, development, test, and engineering (DDT&E), production and operational costs (P&O). For this study, the system objectives include reducing DDT&E schedule by a factor of three, showing 99.9% reliability, flying up to four times per year, serving both crew and cargo missions, and evolving to a lift capability of 130 metric tons. After identifying gaps in the current system's capabilities, this study seeks to identify non-traditional and innovative technologies and processes that may improve performance in these areas and assess their impacts on booster system development. The DDT&E phase may be improved by incorporating incremental development testing and integrated demonstrations to mitigate risk. To further reduce DDT&E, this study will also consider how aspects of the booster system may have commonality with other users, such as the Department of Defense, commercial applications, or international partners; by sharing some of the risk and investment, the overall development cost may be reduced. Consideration is not limited to solid and liquid rocket boosters. A set of functional performance characteristics, such as engine thrust, specific impulse (Isp), mixture ratio, and throttle range, are identified and their impacts on the system are evaluated. This study also identifies how such characteristics affect overall life cycle cost, including DDT&E and fixed and variable P&O.

  15. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study examines the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of metric learning to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process, and their in silico assessment is an important step toward reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric has been used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques to show how the change of metric affects nearest-neighbor relations and the descriptor space.
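
    As a rough, runnable illustration of supervised distance metric learning for classification (the paper uses LMNN, its multi-metric extension, and SVMs), the sketch below substitutes scikit-learn's NeighborhoodComponentsAnalysis, a related linear metric learner, ahead of a k-NN classifier; the synthetic descriptor matrix and labels are placeholders standing in for chemical descriptors with liability annotations.

```python
# Minimal sketch of supervised distance-metric learning for classification.
# The paper uses LMNN and a multi-metric extension; NeighborhoodComponentsAnalysis
# (a related linear metric learner) stands in here, so this pipeline is
# illustrative rather than the authors' exact procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a descriptor matrix of compounds with binary liability labels.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

plain_knn = Pipeline([("scale", StandardScaler()),
                      ("knn", KNeighborsClassifier(n_neighbors=5))])
learned_metric_knn = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(random_state=0)),  # learns a linear transform, i.e. a Mahalanobis-type metric
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

for name, model in [("Euclidean k-NN", plain_knn), ("learned-metric k-NN", learned_metric_knn)]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```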

  16. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study examines the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of metric learning to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process, and their in silico assessment is an important step toward reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric has been used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques to show how the change of metric affects nearest-neighbor relations and the descriptor space.

  17. Improving Climate Projections Using "Intelligent" Ensembles

    NASA Technical Reports Server (NTRS)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, or the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
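
    A minimal sketch of an equal-weight versus performance-weighted ("intelligent") ensemble mean; the inverse-RMSE weighting and the synthetic model fields below are illustrative assumptions, not the weighting scheme derived in the study.

```python
# Sketch of an "intelligent" (performance-weighted) ensemble mean versus an
# equal-weight mean. Inverse-RMSE weights are an assumed scheme for
# illustration; the study derives weights from its own performance metrics.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=100)                                              # observed present-day field (stand-in)
models = obs + rng.normal(scale=[[0.2], [0.5], [1.0]], size=(3, 100))   # 3 models with different error levels

rmse = np.sqrt(((models - obs) ** 2).mean(axis=1))
weights = (1.0 / rmse) / (1.0 / rmse).sum()        # better-performing models get larger weights

equal_mean = models.mean(axis=0)
weighted_mean = (weights[:, None] * models).sum(axis=0)

for name, ens in [("equal-weight", equal_mean), ("performance-weighted", weighted_mean)]:
    print(name, "ensemble RMSE:", np.sqrt(((ens - obs) ** 2).mean()))
```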

  18. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

    The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.
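
    A minimal sketch of how a computed clutter metric can be compared against SME clutter rankings using a rank correlation; the values are hypothetical placeholders, and Spearman correlation is only one reasonable choice of agreement measure, not necessarily the analysis NVESD used.

```python
# Sketch of comparing a computed clutter metric against SME clutter rankings
# with a rank correlation; the values below are placeholders, not NVESD data.
import numpy as np
from scipy.stats import spearmanr

mttv_values = np.array([0.12, 0.45, 0.30, 0.80, 0.55, 0.20])   # hypothetical MTTV scores per scene
sme_rank = np.array([1, 4, 3, 6, 5, 2])                         # hypothetical SME consensus ranks (1 = least cluttered)

rho, p = spearmanr(mttv_values, sme_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```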

  19. Intelligent Work Process Engineering System

    NASA Technical Reports Server (NTRS)

    Williams, Kent E.

    2003-01-01

    Optimizing performance on work activities and processes requires metrics of performance for management to monitor and analyze in order to support further improvements in efficiency, effectiveness, safety, reliability and cost. Information systems are therefore required to assist management in making timely, informed decisions regarding these work processes and activities. Currently, information systems regarding Space Shuttle maintenance and servicing do not exist to support such timely decisions. The work to be presented details a system which incorporates various automated and intelligent processes and analysis tools to capture, organize, and analyze work-process-related data and to make the necessary decisions to meet KSC organizational goals. The advantages and disadvantages of design alternatives for the development of such a system will be discussed, including technologies which would need to be designed, prototyped and evaluated.

  20. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  1. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  2. Multiscale Characterization of Engineered Cardiac Tissue Architecture.

    PubMed

    Drew, Nancy K; Johnsen, Nicholas E; Core, Jason Q; Grosberg, Anna

    2016-11-01

    In a properly contracting cardiac muscle, many different subcellular structures are organized into an intricate architecture. While it has been observed that this organization is altered in pathological conditions, the relationship between length-scales and architecture has not been properly explored. In this work, we utilize a variety of architecture metrics to quantify the organization and consistency of single structures over multiple scales, from subcellular to tissue scale, as well as the correlation of organization between multiple structures. Specifically, we chose the orientational and co-orientational order parameters (COOPs) as the best way to characterize cardiac tissues, and neonatal rat ventricular myocytes were selected for their consistent architectural behavior. The engineered cells and tissues were stained for four architectural structures: actin, tubulin, sarcomeric z-lines, and nuclei. We applied the orientational metrics to cardiac cells of various shapes, isotropic cardiac tissues, and anisotropic globally aligned tissues. With these novel tools, we discovered: (1) the relationship between cellular shape and consistency of self-assembly; (2) the length-scales at which unguided tissues self-organize; and (3) the correlation or lack thereof between organization of actin fibrils, sarcomeric z-lines, tubulin fibrils, and nuclei. All of these together elucidate some of the current mysteries in the relationship between force production and architecture, while raising more questions about the effect of guidance cues on self-assembly function. These types of metrics are the future of quantitative tissue engineering in cardiovascular biomechanics.
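
    A minimal sketch of a 2D orientational order parameter of the kind used here, based on the common largest-eigenvalue-of-the-orientation-tensor definition; the COOPs extend this idea to pairs of structures, and the angle distributions below are synthetic placeholders rather than measured fiber orientations.

```python
# Sketch of a 2D orientational order parameter (OOP) from fiber angles, using
# the common "largest eigenvalue of the mean orientation tensor" definition.
# This illustrates the idea behind the paper's metrics; the co-orientational
# order parameters (COOPs) extend it to pairs of structures.
import numpy as np

def orientational_order(angles_rad):
    """OOP ~ 1 for perfectly aligned fibers, ~ 0 for an isotropic distribution."""
    n = np.column_stack([np.cos(angles_rad), np.sin(angles_rad)])        # unit orientation vectors
    tensor = 2.0 * (n[:, :, None] * n[:, None, :]).mean(axis=0) - np.eye(2)
    return np.linalg.eigvalsh(tensor).max()

rng = np.random.default_rng(1)
aligned = rng.normal(0.0, 0.1, size=1000)              # tightly clustered fiber angles
isotropic = rng.uniform(-np.pi / 2, np.pi / 2, 1000)   # no preferred direction
print("aligned OOP:  ", orientational_order(aligned))
print("isotropic OOP:", orientational_order(isotropic))
```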

  3. Causes for the decline of suspended-sediment discharge in the Mississippi River system, 1940-2007

    USGS Publications Warehouse

    Meade, R.H.; Moody, J.A.

    2010-01-01

    Before 1900, the Missouri-Mississippi River system transported an estimated 400 million metric tons per year of sediment from the interior of the United States to coastal Louisiana. During the last two decades (1987-2006), this transport has averaged 145 million metric tons per year. The cause for this substantial decrease in sediment has been attributed to the trapping characteristics of dams constructed on the muddy part of the Missouri River during the 1950s. However, reexamination of more than 60 years of water- and sediment-discharge data indicates that the dams alone are not the sole cause. These dams trap about 100-150 million metric tons per year, which represent about half the decrease in sediment discharge near the mouth of the Mississippi. Changes in relations between water discharge and suspended-sediment concentration suggest that the Missouri-Mississippi has been transformed from a transport-limited to a supply-limited system. Thus, other engineering activities such as meander cutoffs, river-training structures, and bank revetments as well as soil erosion controls have trapped sediment, eliminated sediment sources, or protected sediment that was once available for transport episodically throughout the year. Removing major engineering structures such as dams probably would not restore sediment discharges to the pre-1900 state, mainly because of the numerous smaller engineering structures and other soil-retention works throughout the Missouri-Mississippi system. © 2009 John Wiley & Sons, Ltd.

  4. An Introduction to the Fundamentals of Chemistry for the Marine Engineer - An Audio-Tutorial Correspondence Course (CH-1C).

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This document provides a study guide for a three-credit-hour fundamentals of chemistry course for marine engineer majors. The course is composed of 17 minicourses including: chemical reactions, atomic theory, solutions, corrosion, organic chemistry, water pollution, metric system, and remedial mathematics skills. Course grading, objectives,…

  5. Image quality metrics for volumetric laser displays

    NASA Astrophysics Data System (ADS)

    Williams, Rodney D.; Donohoo, Daniel

    1991-08-01

    This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed and several critical image quality issues identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prod'homme, A.; Drouvot, O.; Gregory, J.

    In 2009, Savannah River Remediation LLC (SRR) assumed the management lead of the Liquid Waste (LW) Program at the Savannah River Site (SRS). The four SRR partners and AREVA, as an integrated subcontractor, are performing the ongoing effort to safely and reliably: - Close High Level Waste (HLW) storage tanks; - Maximize waste throughput at the Defense Waste Processing Facility (DWPF); - Process salt waste into stable final waste form; - Manage the HLW liquid waste material stored at SRS. As part of these initiatives, SRR and AREVA deployed a performance management methodology based on Overall Equipment Effectiveness (OEE) at the DWPF in order to support the required production increase. This project took advantage of lessons learned by AREVA through the deployment of Total Productive Maintenance and Visual Management methodologies at the La Hague reprocessing facility in France. The project also took advantage of measurement data collected from different steps of the DWPF process by the SRR team (Melter Engineering, Chemical Process Engineering, Laboratory Operations, Plant Operations). Today the SRR team has a standard method for measuring processing time throughout the facility, a reliable source of objective data for use in decision-making at all levels, and a better balance between engineering department goals and operational goals. Preliminary results show that the deployment of this performance management methodology to the LW program at SRS has already significantly contributed to the DWPF throughput increases and is being deployed in the Saltstone facility. As part of the liquid waste program at the Savannah River Site, SRR committed to enhancing the production throughput of DWPF. Beyond technical modifications implemented at different locations in the facility, SRR deployed a performance management methodology based on OEE metrics. The implementation benefited from the experience gained by AREVA in its own facilities in France. OEE proved to be a valuable tool in supporting the enhancement program in DWPF by providing unified metrics to measure plant performance, identify bottleneck locations, and rank the most time-consuming causes from objective data shared between the different groups belonging to the organization. Beyond OEE, the Visual Management tool adapted from the one used at La Hague was also provided in order to further enhance communication within the operating teams. As a result of all the initiatives implemented on DWPF, achieved production has been increased to record rates from FY10 to FY11. It is expected that, thanks to the performance management tools now available within DWPF, these results will be sustained and even improved in the future to meet system plan targets. (authors)
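
    For reference, a minimal sketch of the standard Overall Equipment Effectiveness decomposition (availability x performance x quality) the abstract refers to; the numbers are hypothetical, not DWPF measurements.

```python
# Minimal sketch of the conventional OEE calculation: the product of
# availability, performance, and quality factors. Values are placeholders.
def oee(availability, performance, quality):
    return availability * performance * quality

a = 0.85   # uptime / planned production time
p = 0.90   # actual throughput / ideal throughput while running
q = 0.97   # good output / total output
print(f"OEE = {oee(a, p, q):.2%}")
```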

  7. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
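
    A minimal sketch of the suggested benefit/cost ratio, [SVM + TRL]/ESM; the weights, scaling, and candidate values below are hypothetical, since the paper leaves these choices open.

```python
# Sketch of the suggested ALS benefit/cost ratio, (SVM + TRL) / ESM, with
# hypothetical weights and design values for illustration only.
def als_metric(svm_score, trl, esm_kg, svm_weight=1.0, trl_weight=1.0):
    """Benefit (weighted SVM plus TRL) divided by cost (Equivalent System Mass)."""
    return (svm_weight * svm_score + trl_weight * trl) / esm_kg

candidates = {"design A": (7.5, 6, 1200.0),   # (SVM score, TRL, ESM in kg) - placeholders
              "design B": (6.0, 8, 900.0)}
for name, (svm, trl, esm) in candidates.items():
    print(name, "metric:", round(als_metric(svm, trl, esm), 4))
```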

  8. Climate Classification is an Important Factor in ­Assessing Hospital Performance Metrics

    NASA Astrophysics Data System (ADS)

    Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.

    2017-12-01

    Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their "performance" along these metrics. Various socioeconomic factors are taken into consideration when determining an individual hospital's performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
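
    As a rough illustration of the kind of hospital-level multivariate regression described above, the sketch below fits ordinary least squares with a categorical climate term and a socioeconomic control using statsmodels; the data, variable names, and effect sizes are simulated placeholders, not Hospital Compare data.

```python
# Sketch of a hospital-level regression of 30-day mortality on climate class
# plus a socioeconomic control; data are simulated and names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "climate": rng.choice(["Cfa", "Dfb", "BSk"], size=n),     # Koppen-Geiger class per hospital
    "median_income": rng.normal(55, 10, size=n),               # socioeconomic control (thousands USD)
})
climate_effect = df["climate"].map({"Cfa": 0.0, "Dfb": 0.4, "BSk": -0.2})
df["mortality_30d"] = 11.5 + climate_effect - 0.02 * df["median_income"] + rng.normal(0, 0.8, n)

model = smf.ols("mortality_30d ~ C(climate) + median_income", data=df).fit()
print(model.summary().tables[1])   # climate terms remain significant after the control
```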

  9. The validation by measurement theory of proposed object-oriented software metrics

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1994-01-01

    Moving software development into the engineering arena requires controllability, and to control a process, it must be measurable. Measuring the process does no good if the product is not also measured, i.e., being the best at producing an inferior product does not define a quality process. Also, not every number extracted from software development is a valid measurement. A valid measurement only results when we are able to verify that the number is representative of the attribute that we wish to measure. Many proposed software metrics are used by practitioners without these metrics ever having been validated, leading to costly but often useless calculations. Several researchers have bemoaned the lack of scientific precision in much of the published software measurement work and have called for validation of software metrics by measurement theory. This dissertation applies measurement theory to validate fifty proposed object-oriented software metrics (Li and Henry, 1993; Chidamber and Kemerer, 1994; Lorenz and Kidd, 1994).

  10. Molecular cooperativity and compatibility via full atomistic simulation

    NASA Astrophysics Data System (ADS)

    Kwan Yang, Kenny

    Civil engineering has customarily focused on problems from a large-scale perspective, encompassing structures such as bridges, dams, and infrastructure. However, present-day challenges in conjunction with advances in nanotechnology have forced a re-focusing of expertise. The use of atomistic and molecular approaches to study material systems opens the door to significantly improved material properties. The understanding that material systems are themselves structures, whose assemblies can dictate design capacities and failure modes, makes this problem well suited for those who possess expertise in structural engineering. At the same time, a focus has been given to the performance metrics of materials at the nanoscale, including strength, toughness, and transport properties (e.g., electrical, thermal). Little effort has been made in the systematic characterization of system compatibility, e.g., how to make disparate material building blocks behave in unison. This research attempts to develop a bottom-up, molecular-scale understanding of material behavior, with the global objective being the application of this understanding to material design and characterization at an ultimate functional scale. In particular, it addresses the subject of cooperativity at the nano-scale. This research aims to define the conditions which dictate when discrete molecules may behave as a single, functional unit, thereby facilitating homogenization and up-scaling approaches, setting bounds for assembly, and providing a transferable assessment tool across molecular systems. Following the macro-scale pattern in which compatibility of deformation plays a vital role in structural design, novel geometrical cooperativity metrics based on the gyration tensor are derived with the intention of defining nano-cooperativity in a generalized way. The metrics objectively describe the general size, shape and orientation of the structure. To validate the derived measures, a pair of ideal macromolecules, where the density of cross-linking dictates cooperativity, is used to gauge the effectiveness of the triumvirate of gyration metrics. The metrics are shown to identify the critical number of cross-links that allowed the pair to deform together. The next step involves looking at the cooperativity features of a real system. We investigate a representative collagen molecule (i.e., tropocollagen), where single point mutations are known to produce kinks that create local unfolding. The results indicate that the metrics are effective, serving as a validation of the cooperativity metrics in a palpable material system. Finally, a preliminary study on a carbon nanotube and collagen composite is proposed with the long-term objective of understanding the interactions between them as a means to corroborate experimental efforts in reproducing a d-banded collagen fiber. The emerging need for structures that are more robust and resilient, as well as sustainable, is serving as motivation to think beyond traditional design methods. The characterization of cooperativity is thus key in materiomics, an emerging field that focuses on developing a "nano-to-macro" synergistic platform, which provides the necessary tools and procedures to validate future structural models and other critical behavior in a holistic manner, from atoms to application.
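
    The gyration tensor underlying these cooperativity metrics can be illustrated with a short sketch; the shape descriptors below (squared radius of gyration, asphericity) are standard quantities, while the random coordinates and the reduction to these two descriptors are illustrative assumptions rather than the dissertation's full metric set.

```python
# Sketch of gyration-tensor shape descriptors of the kind the cooperativity
# metrics build on; coordinates are random placeholders, not simulation data.
import numpy as np

def gyration_tensor(coords):
    """3x3 gyration tensor of a set of (N, 3) particle coordinates."""
    centered = coords - coords.mean(axis=0)
    return centered.T @ centered / len(coords)

rng = np.random.default_rng(0)
coords = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.5])   # elongated "molecule"

eigvals = np.sort(np.linalg.eigvalsh(gyration_tensor(coords)))   # principal squared gyration radii
rg2 = eigvals.sum()                                              # squared radius of gyration
asphericity = eigvals[2] - 0.5 * (eigvals[0] + eigvals[1])       # 0 for a spherically symmetric cloud
print("Rg:", np.sqrt(rg2), "asphericity:", asphericity)
```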

  11. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
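
    A minimal sketch of computing two of the stand-alone quality metrics discussed here (modularity and conductance) with networkx; greedy modularity maximization is used only because it ships with networkx and stands in for the Louvain, Infomap, label propagation, and smart local moving algorithms compared in the paper.

```python
# Sketch of evaluating a network clustering with stand-alone quality metrics
# (modularity, per-cluster conductance) using networkx on a small test graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)   # stand-in clustering algorithm

print("modularity:", modularity(G, communities))
for i, c in enumerate(communities):
    print(f"cluster {i} conductance:", nx.conductance(G, c))
```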

  12. Performance metrics for the assessment of satellite data products: an ocean color case study

    EPA Science Inventory

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...

  13. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic m...

  14. Performance Metrics for Soil Moisture Retrievals and Applications Requirements

    USDA-ARS?s Scientific Manuscript database

    Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the accuracy of geophysical retrievals against true fields. These metrics are generally related; nevertheless, each has advantages and disadvantages. In this study we explore the relat...
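
    A minimal sketch of the two quadratic metrics named above, RMSE and time series correlation, computed for a synthetic retrieval against a synthetic truth series; the soil-moisture values are placeholders.

```python
# Sketch of RMSE and time series correlation for a retrieval vs. a "true"
# series; values are synthetic stand-ins for soil-moisture fields.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.uniform(0.1, 0.4, size=365)             # volumetric soil moisture (placeholder)
retrieval = truth + rng.normal(0, 0.04, size=365)   # retrieval with random error

rmse = np.sqrt(np.mean((retrieval - truth) ** 2))
corr = np.corrcoef(retrieval, truth)[0, 1]
print(f"RMSE = {rmse:.3f}, correlation = {corr:.3f}")
```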

  15. SU-C-9A-02: Structured Noise Index as An Automated Quality Control for Nuclear Medicine: A Two Year Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, J; Christianson, O; Samei, E

    Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics, which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection that is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues and reporting them in a timely manner for efficient correction prior to clinical involvement, all incorporated into an automated, effortless workflow, and to characterize the program over a two-year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed based on expert observer visual analysis. The metric, termed the Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by the pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, due to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of gamma camera uniformity performance. It operates seamlessly across a fleet of multiple camera models. The automated process provides an effective workflow between physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
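
    A rough sketch of the kind of 2D noise-power-spectrum analysis that underlies the SNI: structured non-uniformity concentrates power at low spatial frequencies relative to a purely Poisson flood. The low-frequency fraction computed below is a simplified stand-in, not the published SNI definition.

```python
# Sketch of a 2D noise-power-spectrum check on a flood image: a subtle bar
# pattern raises the low-frequency share of the NPS relative to pure noise.
import numpy as np

rng = np.random.default_rng(0)
flood = rng.poisson(1000, size=(256, 256)).astype(float)
yy, xx = np.mgrid[0:256, 0:256]
flood_structured = flood * (1 + 0.02 * np.sin(2 * np.pi * xx / 64))   # subtle non-uniformity

def low_freq_fraction(img, cutoff=10):
    nps = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2   # 2D noise power spectrum
    cy, cx = np.array(img.shape) // 2
    r = np.hypot(*np.mgrid[-cy:cy, -cx:cx])                             # radial frequency index
    return nps[r < cutoff].sum() / nps.sum()

print("uniform flood:   ", low_freq_fraction(flood))
print("structured flood:", low_freq_fraction(flood_structured))
```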

  16. Acquire an Bruker Dimension FastScan (trademark) Atomic Force Microscope (AFM) for Materials, Physical and Biological Science Research and Education

    DTIC Science & Technology

    2016-04-14

    Two super users, Drs. Biswajit Sannigrahi and Guangchang Zhou, were trained by the Senior Engineer for Product Service, Dr. Teddy Huang. The remainder of the available record consists of report-template student-metrics fields (e.g., the number of undergraduates funded by the agreement who graduated during this period and intend to work for the Department of Defense or continue in science, mathematics, engineering or technology fields).

  17. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparison of variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
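
    A minimal sketch of the three SPAEF components named above (correlation, coefficient-of-variation ratio, and histogram overlap of z-scored fields) and one published way of combining them into a single score; the exact combination used by the authors should be checked against their paper, so the final formula here is an assumption.

```python
# Sketch of the SPAEF components and a common 1 - Euclidean-distance combination;
# fields below are synthetic placeholders for simulated and observed patterns.
import numpy as np

def spaef(sim, obs, bins=100):
    sim, obs = sim.ravel(), obs.ravel()
    alpha = np.corrcoef(sim, obs)[0, 1]                                  # pattern correlation
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())           # coefficient-of-variation ratio
    z_sim = (sim - sim.mean()) / sim.std()
    z_obs = (obs - obs.mean()) / obs.std()
    h_sim, edges = np.histogram(z_sim, bins=bins)
    h_obs, _ = np.histogram(z_obs, bins=edges)
    gamma = np.minimum(h_sim, h_obs).sum() / h_obs.sum()                 # histogram intersection
    score = 1 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)  # assumed combination
    return score, (alpha, beta, gamma)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=(50, 50))         # e.g. a remote-sensing ET pattern (placeholder)
sim = obs + rng.normal(0, 0.3, size=obs.shape)   # simulated pattern with noise
score, components = spaef(sim, obs)
print("SPAEF:", round(score, 3), "components:", [round(c, 3) for c in components])
```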

  18. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    PubMed Central

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005

  19. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  20. A Survey of Health Management User Objectives Related to Diagnostic and Prognostic Metrics

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Kurtoglu, Tolga; Poll, Scott D.

    2010-01-01

    One of the most prominent technical challenges to effective deployment of health management systems is the vast difference in user objectives with respect to engineering development. In this paper, a detailed survey on the objectives of different users of health management systems is presented. These user objectives are then mapped to the metrics typically encountered in the development and testing of two main systems health management functions: diagnosis and prognosis. Using this mapping, the gaps between user goals and the metrics associated with diagnostics and prognostics are identified and presented with a collection of lessons learned from previous studies that include both industrial and military aerospace applications.

  1. Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.

    PubMed

    Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K

    2018-06-15

    The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
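
    A minimal sketch of the statistical comparison described above: Mann-Whitney U tests on each automated metric with a Bonferroni-adjusted threshold; the expert and novice samples are simulated placeholders, not the study's data.

```python
# Sketch of per-metric Mann-Whitney U tests with Bonferroni correction;
# the group samples are simulated stand-ins.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
metrics = {
    "time_to_completion_s": (rng.normal(60, 10, 40), rng.normal(95, 20, 70)),   # (experts, novices)
    "surgical_errors":      (rng.poisson(1, 40),     rng.poisson(3, 70)),
}
alpha = 0.05 / len(metrics)   # Bonferroni correction over the metrics tested
for name, (experts, novices) in metrics.items():
    stat, p = mannwhitneyu(experts, novices, alternative="two-sided")
    print(f"{name}: U={stat:.0f}, p={p:.2e}, significant={p < alpha}")
```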

  2. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  3. Airplane takeoff and landing performance monitoring system

    NASA Technical Reports Server (NTRS)

    Middleton, David B. (Inventor); Srivatsan, Raghavachari (Inventor); Person, Lee H. (Inventor)

    1989-01-01

    The invention is a real-time takeoff and landing performance monitoring system which provides the pilot with graphic and metric information to assist in decisions related to achieving rotation speed (V_R) within the safe zone of the runway or stopping the aircraft on the runway after landing or takeoff abort. The system processes information in two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data. The real-time segment uses the scheduled performance data, runway length data and transducer measured parameters to monitor the performance of the airplane throughout the takeoff roll. An important feature of this segment is that it updates the estimated runway rolling friction coefficient. Airplane performance predictions also reflect changes in headwind occurring as the takeoff roll progresses. The system displays the position of the airplane on the runway, indicating runway used and runway available, summarizes the critical information into a situation advisory flag, flags engine failures and off-nominal acceleration performance, and indicates where on the runway particular events such as decision speed (V_1), rotation speed (V_R) and expected stop points will occur based on actual or predicted performance. The display also indicates airspeed, wind vector, engine pressure ratios, second segment climb speed, and balanced field length (BFL). The system detects performance deficiencies by comparing the airplane's present performance with a predicted nominal performance based upon the given conditions.

  4. What’s in a Prerequisite? A Mixed-Methods Approach to Identifying the Impact of a Prerequisite Course

    PubMed Central

    Sato, Brian K.; Lee, Amanda K.; Alam, Usman; Dang, Jennifer V.; Dacanay, Samantha J.; Morgado, Pedro; Pirino, Giorgia; Brunner, Jo Ellen; Castillo, Leanne A.; Chan, Valerie W.; Sandholtz, Judith H.

    2017-01-01

    Despite the ubiquity of prerequisites in undergraduate science, technology, engineering, and mathematics curricula, there has been minimal effort to assess their value in a data-driven manner. Using both quantitative and qualitative data, we examined the impact of prerequisites in the context of a microbiology lecture and lab course pairing. Through interviews and an online survey, students highlighted a number of positive attributes of prerequisites, including their role in knowledge acquisition, along with negative impacts, such as perhaps needlessly increasing time to degree and adding to the cost of education. We also identified a number of reasons why individuals do or do not enroll in prerequisite courses, many of which were not related to student learning. In our particular curriculum, students did not believe the microbiology lecture course impacted success in the lab, which agrees with our analysis of lab course performance using a previously established “familiarity” scale. These conclusions highlight the importance of soliciting and analyzing student feedback, and triangulating these data with quantitative performance metrics to assess the state of science, technology, engineering, and mathematics curricula. PMID:28232587

  5. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
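
    A minimal sketch of the proposed continuous-valued metric: the posterior probability that each facility's composite score lies in the top quintile, estimated from posterior draws; the draws below are simulated, whereas the study obtains them from a Bayesian hierarchical model fit in WinBUGS.

```python
# Sketch of estimating P(facility is in the top quintile) from posterior draws
# of a composite quality score; draws are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_facilities, n_draws = 112, 4000
true_quality = rng.normal(0, 1, n_facilities)
draws = true_quality + rng.normal(0, 0.6, size=(n_draws, n_facilities))   # posterior replications

ranks = draws.argsort(axis=1).argsort(axis=1)        # rank of each facility within each draw
in_top_quintile = ranks >= int(0.8 * n_facilities)
prob_top = in_top_quintile.mean(axis=0)              # probability each facility is in the top 20%
print("facilities with P(top quintile) > 0.9:", int(np.sum(prob_top > 0.9)))
```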

  6. Formulation of a parametric systems design framework for disaster response planning

    NASA Astrophysics Data System (ADS)

    Mma, Stephanie Weiya

    The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses from Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the different values of system parameters within the decomposition of the community aspects and also about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two, five, and ten year improvement plans for Preparedness, Response, and Development capabilities, and combinations of these capabilities. For Shelby County and for Orleans Parish, the Response improvement plan reduced restoration time the most. For the combined capabilities, Shelby County experienced the greatest reduction in restoration time with the implementation of Development and Response capability improvements, and for Orleans Parish it was the Preparedness and Response capability improvements. Optimization of restoration time with community parameters was tested by using a Particle Swarm Optimization algorithm. Fifty different optimized restoration times were generated using the Particle Swarm Optimization algorithm and ranked using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The optimization results indicate that the greatest reduction in restoration time for a community is achieved with a particular combination of different parameter values instead of the maximization of each parameter.

  7. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  8. An Evaluation of the IntelliMetric[SM] Essay Scoring System

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine

    2006-01-01

    This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…

  9. Microwave-Assisted Ignition for Improved Internal Combustion Engine Efficiency

    NASA Astrophysics Data System (ADS)

    DeFilippo, Anthony Cesar

    The ever-present need for reducing greenhouse gas emissions associated with transportation motivates this investigation of a novel ignition technology for internal combustion engine applications. Advanced engines can achieve higher efficiencies and reduced emissions by operating in regimes with diluted fuel-air mixtures and higher compression ratios, but the range of stable engine operation is constrained by combustion initiation and flame propagation when dilution levels are high. An advanced ignition technology that reliably extends the operating range of internal combustion engines will aid practical implementation of the next generation of high-efficiency engines. This dissertation contributes to next-generation ignition technology advancement by experimentally analyzing a prototype technology as well as developing a numerical model for the chemical processes governing microwave-assisted ignition. The microwave-assisted spark plug under development by Imagineering, Inc. of Japan has previously been shown to expand the stable operating range of gasoline-fueled engines through plasma-assisted combustion, but the factors limiting its operation were not well characterized. The present experimental study has two main goals. The first goal is to investigate the capability of the microwave-assisted spark plug towards expanding the stable operating range of wet-ethanol-fueled engines. The stability range is investigated by examining the coefficient of variation of indicated mean effective pressure as a metric for instability, and indicated specific ethanol consumption as a metric for efficiency. The second goal is to examine the factors affecting the extent to which microwaves enhance ignition processes. The factors impacting microwave enhancement of ignition processes are individually examined, using flame development behavior as a key metric in determining microwave effectiveness. Further development of practical combustion applications implementing microwave-assisted spark technology will benefit from predictive models which include the plasma processes governing the observed combustion enhancement. This dissertation documents the development of a chemical kinetic mechanism for the plasma-assisted combustion processes relevant to microwave-assisted spark ignition. The mechanism includes an existing mechanism for gas-phase methane oxidation, supplemented with electron impact reactions, cation and anion chemical reactions, and reactions involving vibrationally-excited and electronically-excited species. Calculations using the presently-developed numerical model explain experimentally-observed trends, highlighting the relative importance of pressure, temperature, and mixture composition in determining the effectiveness of microwave-assisted ignition enhancement.
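
    A minimal sketch of the stability metric mentioned above, the coefficient of variation of indicated mean effective pressure (COV of IMEP); the per-cycle values and the quoted stability threshold are illustrative, not measurements from this work.

```python
# Sketch of COV of IMEP over consecutive engine cycles; IMEP values are
# placeholders rather than measured data from this dissertation.
import numpy as np

imep_bar = np.array([7.9, 8.1, 8.0, 7.6, 8.3, 8.0, 7.8, 8.2])   # per-cycle IMEP, bar
cov_imep = 100.0 * imep_bar.std(ddof=1) / imep_bar.mean()
print(f"COV of IMEP = {cov_imep:.1f}%  (values above roughly 3-5% are commonly read as unstable combustion)")
```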

  10. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms-Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  11. On use of CO{sub 2} chemiluminescence for combustion metrics in natural gas fired reciprocating engines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, S. B.; Bihari, B.; Biruduganti, M.

    Flame chemiluminescence is widely acknowledged to be an indicator of heat release rate in premixed turbulent flames that are representative of gas turbine combustion. Though heat release rate is an important metric for evaluating combustion strategies in reciprocating engine systems, its correlation with flame chemiluminescence is not well studied. To address this gap an experimental study was carried out in a single-cylinder natural gas fired reciprocating engine that could simulate turbocharged conditions with exhaust gas recirculation. Crank angle resolved spectra (266-795 nm) of flame luminosity were measured for various operational conditions by varying the ignition timing for MBT conditions and by holding the speed at 1800 rpm and Brake Mean Effective Pressure (BMEP) at 12 bar. The effect of dilution on CO2* chemiluminescence intensities was studied by varying the global equivalence ratio (0.6-1.0) and by varying the exhaust gas recirculation rate. It was attempted to relate the measured chemiluminescence intensities to thermodynamic metrics of importance to engine research: in-cylinder bulk gas temperature and heat release rate (HRR) calculated from measured cylinder pressure signals. The peak of the measured CO2* chemiluminescence intensities coincided with peak pressures within ±2 CAD for all test conditions. For each combustion cycle, the peaks of heat release rate, spectral intensity and temperature occurred in that sequence, well separated temporally. The peak heat release rates preceded the peak chemiluminescent emissions by 3.8-9.5 CAD, whereas the peak temperatures trailed by 5.8-15.6 CAD. Such a temporal separation precludes correlations on a crank-angle resolved basis. However, the peak cycle heat release rates and to a lesser extent the peak cycle temperatures correlated well with the chemiluminescent emission from CO2*. Such observations point towards the potential use of flame chemiluminescence to monitor peak bulk gas temperatures as well as peak heat release rates in natural gas fired reciprocating engines.

  12. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  13. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
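    A minimal sketch of the DCeP ratio described above follows; the definition of "useful work" (for example, completed jobs weighted by value) is left to the operator, and the numbers are illustrative only.

```python
def dcep(useful_work_units: float, energy_consumed_kwh: float) -> float:
    """Data Center energy Productivity: useful work produced / energy consumed."""
    if energy_consumed_kwh <= 0:
        raise ValueError("energy consumed must be positive")
    return useful_work_units / energy_consumed_kwh

# Illustrative example: 1.2e6 weighted work units completed while consuming 85,000 kWh.
print(dcep(1.2e6, 85_000.0))
```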

  14. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights for the different modalities. Finally, we conduct alternating optimization to train the ranking model, which is used for ranking new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
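    The weighted combination of modalities described above can be illustrated with a simple sketch: per-modality embedding distances (for example, visual and click features) combined with non-negative weights. This illustrates the general idea only, not the authors' Deep-MDML implementation; the feature dimensions and weights are assumptions.

```python
import numpy as np

def multimodal_distance(feats_a, feats_b, weights):
    """Weighted sum of per-modality Euclidean distances between two images."""
    return sum(
        weights[m] * np.linalg.norm(np.asarray(feats_a[m]) - np.asarray(feats_b[m]))
        for m in weights
    )

# Hypothetical embeddings for two images (dimensions are assumptions).
img_a = {"visual": np.random.rand(128), "click": np.random.rand(32)}
img_b = {"visual": np.random.rand(128), "click": np.random.rand(32)}
w = {"visual": 0.7, "click": 0.3}   # in Deep-MDML these weights would be learned
print(multimodal_distance(img_a, img_b, w))
```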

  15. Systems Engineering Techniques for ALS Decision Making

    NASA Technical Reports Server (NTRS)

    Rodriquez, Luis F.; Drysdale, Alan E.; Jones, Harry; Levri, Julie A.

    2004-01-01

    The Advanced Life Support (ALS) Metric is the predominant tool for predicting the cost of ALS systems. Metric goals for the ALS Program are daunting, requiring a threefold increase in the ALS Metric by 2010. Confounding the problem, the rate at which new ALS technologies reach the maturity required for consideration in the ALS Metric and the rate at which new configurations are developed are slow, limiting the search space and suggesting that, without significant advances in ALS technology, the ALS Metric goals may remain elusive. This paper is a sequel to a paper published in the proceedings of the 2003 ICES conference entitled "Managing to the metric: an approach to optimizing life support costs." The conclusions of that paper state that the largest contributors to the ALS Metric should be targeted by ALS researchers and management for maximum metric reductions. Certainly, these areas offer potentially large benefits to future ALS missions; however, the ALS Metric is not the only decision-making tool available to the community. To facilitate decision-making within the ALS community, a combination of metrics should be utilized: not only the Equivalent System Mass (ESM)-based ALS Metric, but also those available through techniques such as life cycle costing and careful consideration of the sensitivity of the assumed models and data. Often a lack of data is cited as the reason why these techniques are not considered for utilization. An existing database development effort within the ALS community, known as OPIS, may provide the opportunity to collect the information necessary to enable the proposed systems analyses. A review of these additional analysis techniques is provided, focusing on the data necessary to enable them. The discussion concludes by proposing how the data may be utilized by analysts in the future.

  16. Spiral-like multi-beam emission via transformation electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tichit, Paul-Henri, E-mail: paul-henri.tichit@u-psud.fr; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr; Lustrac, André de, E-mail: andre.de-lustrac@u-psud.fr

    Transformation electromagnetics offers an unconventional approach for the design of novel radiating devices. Here, we propose an electromagnetic metamaterial able to split an isotropic radiation into multiple directive beams. By applying transformations that modify distance and angles, we show how the multiple directive beams can be steered at will. We describe the transformation of the metric space and the calculation of the material parameters. Different transformations are proposed for a possible physical realization through the use of engineered artificial metamaterials. Full wave simulations are performed to validate the proposed approach. The idea paves the way to interesting applications in various domains in the microwave and optical regimes.

  17. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    PubMed

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species, and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of the groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but it does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today. Currently, there is a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values, and the final claims in the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods has never been performed. The Journal of Proteomics has previously published multiple benchmark studies of bioinformatics algorithms (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims at providing a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five different algorithms - ProteinProphet, MSBayesPro, ProteinLP, Fido, and PIA - were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines - Mascot, X!Tandem, and MS-GF+ - and combinations thereof were evaluated for every protein inference tool. In total, more than 186 protein lists were analyzed and carefully compared using three metrics for quality assessment of the protein inference results: 1) the number of reported proteins, 2) the number of peptides per protein, and 3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms, and parameters on each dataset.
The results show that 1) PIA or Fido seems to be a good choice when studying the results of the analyzed workflows, regarding not only the reported proteins and the high-quality identifications but also the required runtime. 2) Merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group. 3) The usage of databases containing not only the canonical but also known isoforms of proteins has a small impact on the number of reported proteins. The detection of specific isoforms could, depending on the question behind the study, compensate for the slightly shorter reports obtained with parsimonious reporting. 4) The current workflow can be easily extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
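    The three quality-control counts named above can be computed from any inference output that maps protein groups to their supporting peptides, as in this sketch (the data structures and values are hypothetical, not taken from the benchmarked tools):

```python
from statistics import mean

# Hypothetical inference results: method name -> {protein group id: set of peptide sequences}
results = {
    "PIA":            {"P1": {"pepA", "pepB"}, "P2": {"pepC"}},
    "ProteinProphet": {"P1": {"pepA"},         "P3": {"pepD", "pepE"}},
}

for method, groups in results.items():
    n_groups = len(groups)                                        # reported protein groups
    peptides_per_group = mean(len(peps) for peps in groups.values())
    reported_elsewhere = set().union(*(g.keys() for m, g in results.items() if m != method))
    uniquely_reported = set(groups) - reported_elsewhere          # groups unique to this method
    print(method, n_groups, round(peptides_per_group, 2), sorted(uniquely_reported))
```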

  18. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    NASA Technical Reports Server (NTRS)

    Mizukami, Masashi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  19. STS 2000: Structural design of the airbreathing launcher

    NASA Astrophysics Data System (ADS)

    Boyeldieu, E.

    This paper presents a description of the structural design and the choice of materials for the different parts of the Space Transportation System 2000 (STS 2000). This launcher is one of the different concepts studied by AEROSPATIALE to evaluate its feasibility and its performance. The STS 2000 Single-Stage-To-Orbit (SSTO) is a reusable single stage launcher using airbreathing propulsion up to Mach 6. This SSTO takes off horizontally using an undercarriage, with a speed of 150 m/s and an incidence angle of 12 deg. The STS 2000 flies from Mach 0.4 to Mach 3.6 using four turbo-rocket engines, from Mach 3.6 to Mach 6 using four ramjet-rocket engines, and from Mach 6 to Mach 25 using four rocket engines. During its reentry, it glides from orbit to Earth and lands horizontally at the same base (Kourou in French Guiana). The initial take-off mass is 338 metric tons. The ascent phase specifications are a maximum axial acceleration of 4 g and a maximum dynamic pressure of 70 kPa.

  20. Raman Spectroscopy Reveals New Insights into the Zonal Organization of Native and Tissue-Engineered Articular Cartilage

    PubMed Central

    2016-01-01

    Tissue architecture is intimately linked with its functions, and loss of tissue organization is often associated with pathologies. The intricate depth-dependent extracellular matrix (ECM) arrangement in articular cartilage is critical to its biomechanical functions. In this study, we developed a Raman spectroscopic imaging approach to gain new insight into the depth-dependent arrangement of native and tissue-engineered articular cartilage using bovine tissues and cells. Our results revealed previously unreported tissue complexity, with at least six zones above the tidemark identified by principal component analysis and k-means clustering of the distribution and orientation of the main ECM components. Correlation of nanoindentation and Raman spectroscopic data suggested that the biomechanics across the tissue depth are influenced by ECM microstructure rather than composition. Further, Raman spectroscopy together with multivariate analysis revealed changes in the collagen, glycosaminoglycan, and water distributions in tissue-engineered constructs over time. These changes were assessed using simple metrics that promise to inform efforts toward the regeneration of a broad range of tissues with native zonal complexity and functional performance. PMID:28058277
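    The analysis pattern described above, principal component analysis followed by k-means clustering of the spectra, can be sketched with scikit-learn as below; the array shapes, number of components, and number of clusters are placeholders, not the study's actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

spectra = np.random.rand(600, 1024)      # placeholder: 600 spectra x 1024 wavenumber bins
scores = PCA(n_components=10).fit_transform(spectra)                           # reduce dimensionality
zones = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)    # group into zones
print(np.bincount(zones))                # number of measurement points assigned to each zone
```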

  1. Feasibility of and Rationale for the Collection of Orthopaedic Trauma Surgery Quality of Care Metrics.

    PubMed

    Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip

    2017-06-01

    Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had an incomplete ability to collect the orthopaedic trauma care metrics. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.

  2. Engineering survey planning for the alignment of a particle accelerator: part I. Proposition of an assessment method

    NASA Astrophysics Data System (ADS)

    Junqueira Leão, Rodrigo; Raffaelo Baldo, Crhistian; Collucci da Costa Reis, Maria Luisa; Alves Trabanco, Jorge Luiz

    2018-03-01

    The performance of particle accelerators depends highly on the relative alignment between their components. The position and orientation of the magnetic lenses that form the trajectory of the charged beam are kept to micrometric tolerances over machine lengths of hundreds of meters. The alignment problem is therefore fundamentally one of dimensional metrology. There is no common way of expressing these tolerances in terms of terminology and alignment concept. The alignment needs for a given machine are normally given in terms of deviations between the position of any magnet in the accelerator and the line fitted to the actual positions of the magnet assembly. Root mean square errors and standard deviations are normally used interchangeably, and measurement uncertainty is often neglected. Although some solutions have been employed successfully in several accelerators, there is no off-the-shelf solution for performing the alignment. Also, each alignment campaign makes use of different measuring instruments to achieve the desired results, which makes the alignment process a complex measurement chain. This paper explores these issues by reviewing the tolerances specified for the alignment of particle accelerators, and proposes a metric to assess the quality of the alignment. The metric has the advantage of fully integrating the measurement uncertainty into the process.
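    The kind of figure of merit discussed above, deviations of measured magnet positions from a line fitted through the assembly summarized as a root mean square value, can be sketched as follows (the offsets are simulated; propagating measurement uncertainty, as the proposed metric requires, is a separate step):

```python
import numpy as np

s = np.linspace(0.0, 300.0, 61)                   # longitudinal magnet positions [m]
offsets = 0.05e-3 * np.random.randn(s.size)       # simulated transverse offsets [m]

coeffs = np.polyfit(s, offsets, deg=1)            # best-fit straight line through the magnets
residuals = offsets - np.polyval(coeffs, s)       # deviation of each magnet from that line
rms = np.sqrt(np.mean(residuals ** 2))
print(f"RMS deviation from the fitted line: {rms * 1e6:.1f} micrometres")
```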

  3. Deriving principal channel metrics from bank and long-profile geometry with the R package cmgo

    NASA Astrophysics Data System (ADS)

    Golly, Antonius; Turowski, Jens M.

    2017-09-01

    Landscape patterns result from landscape forming processes. This link can be exploited in geomorphological research by reversely analyzing the geometrical content of landscapes to develop or confirm theories of the underlying processes. Since rivers represent a dominant control on landscape formation, there is a particular interest in examining channel metrics in a quantitative and objective manner. For example, river cross-section geometry is required to model local flow hydraulics, which in turn determine erosion and thus channel dynamics. Similarly, channel geometry is crucial for engineering purposes, water resource management, and ecological restoration efforts. These applications require a framework to capture and derive the data. In this paper we present an open-source software tool that performs the calculation of several channel metrics (length, slope, width, bank retreat, knickpoints, etc.) in an objective and reproducible way based on principal bank geometry that can be measured in the field or in a GIS. Furthermore, the software provides a framework to integrate spatial features, for example the abundance of species or the occurrence of knickpoints. The program is available at https://github.com/AntoniusGolly/cmgo and is free to use, modify, and redistribute under the terms of the GNU General Public License version 3 as published by the Free Software Foundation.

  4. Does cone beam CT actually ameliorate stab wound analysis in bone?

    PubMed

    Gaudio, D; Di Giancamillo, M; Gibelli, D; Galassi, A; Cerutti, E; Cattaneo, C

    2014-01-01

    This study aims at verifying the potential of a recent radiological technology, cone beam CT (CBCT), for producing digital 3D models that allow the user to examine the inner morphology of sharp force wounds within bone tissue. Several sharp force wounds were produced by both single and double cutting edge weapons on cancellous and cortical bone, and then acquired by cone beam CT scan. The lesions were analysed with different software (a DICOM file viewer and reverse engineering software). Results verified the limited performance of this technology for lesions made on cortical bone, whereas on cancellous bone reliable models were obtained and the precise morphology within the bone tissue was visible. On the basis of these results, a method for differential diagnosis between cutmarks made by sharp tools with one and with two cutting edges can be proposed. On the other hand, the computerised metric analysis of lesions highlights a clear increase in error range for measurements under 3 mm. Metric data taken by different operators show strong dispersion (% relative standard deviation). This pilot study shows that the use of CBCT technology can improve the morphological investigation of stab wounds on cancellous bone. Conversely, metric analysis of the lesions, as well as morphological analysis of wounds smaller than 3 mm, does not seem to be reliable.

  5. Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.

    2003-01-01

    Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.

  6. Structural texture similarity metrics for image analysis and retrieval.

    PubMed

    Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L

    2013-07-01

    We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.

  7. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
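    The argument above can be made concrete with a short simulation: generate data from a simple additive linear error model y = a + b·x + ε and note that the recovered bias, mean square error, and correlation are fixed by the model parameters (a, b, σ) and the variance of the reference. The values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 100_000)                 # reference ("truth")
a, b, sigma = 0.5, 0.9, 1.0                        # linear error model parameters
y = a + b * x + rng.normal(0.0, sigma, x.size)     # modelled/measured values

bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
corr = np.corrcoef(x, y)[0, 1]
print(bias, mse, corr)   # each metric follows from a, b, sigma, and var(x)
```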

  8. Use of Traditional and Novel Methods to Evaluate the Influence of an EVA Glove on Hand Performance

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth A.; England, Scott A.; Mesloh, Miranda; Thompson, Shelby; Rajulu, Sudhakar

    2010-01-01

    The gloved hand is one of an astronaut's primary means of interacting with the environment, and any restrictions imposed by the glove can strongly affect performance during extravehicular activity (EVA). Glove restrictions have been the subject of study for decades, yet previous studies have generally been unsuccessful in quantifying glove mobility and tactility. Past studies have tended to focus on the dexterity, strength, and functional performance of the gloved hand; this provides only a circumspect analysis of the impact of each type of restriction on the glove's overall capability. The aim of this study was to develop novel capabilities to provide metrics for mobility and tactility that can be used to assess the performance of a glove in a way that could enable designers and engineers to improve their current designs. A series of evaluations was performed to compare unpressurized and pressurized (4.3 psi) gloved conditions with the ungloved condition. A second series of evaluations was performed with the Thermal Micrometeoroid Garment (TMG) removed. This series of tests provided interesting insight into how much of an effect the TMG has on gloved mobility; in some cases, the presence of the TMG restricted glove mobility as much as pressurization did. Previous hypotheses had assumed that the TMG would have a much lower impact on mobility, but these results suggest that an improvement in the design of the TMG could have a significant impact on glove performance. Tactility testing illustrated the effect of glove pressurization, provided insight into the design of hardware that interfaces with the glove, and highlighted areas of concern. The metrics developed in this study served to benchmark the Phase VI EVA glove and to develop requirements for the next-generation glove for the Constellation program.

  9. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  10. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics for decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insight into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
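    The contrast between satisficing and regret-based robustness metrics described above can be sketched with a hypothetical policy-by-future performance matrix; the signal-to-noise style score and the maximum-regret score below are generic examples, not the exact formulations used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
# Rows = candidate policies, columns = plausible futures; entries = damages (lower is better).
performance = rng.gamma(shape=2.0, scale=10.0, size=(4, 1000))

# Satisficing, signal-to-noise style: combine average performance and dispersion
# (for a cost-type outcome, a lower mean times standard deviation is more robust).
snr_score = performance.mean(axis=1) * performance.std(axis=1)

# Regret-based: compare each policy with the best-performing policy in every future.
regret = performance - performance.min(axis=0)
max_regret = regret.max(axis=1)

print("satisficing ranking (best first):", np.argsort(snr_score))
print("regret ranking (best first):     ", np.argsort(max_regret))
```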

  11. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for the performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect set of filters, and zero, for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.

  12. Asset sustainability index : quick guide : proposed metrics for the long-term financial sustainability of highway networks.

    DOT National Transportation Integrated Search

    2013-04-01

    "This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. : It examines how such metrics are used in Australia, Britain...

  13. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.

  14. The use of player physical and technical skill match activity profiles to predict position in the Australian Football League draft.

    PubMed

    Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam

    2017-02-01

    This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated from all participants in the 2014 national under 18 (U18) championships (18 games) drafted into the AFL (n = 65; 17.8 ± 0.5 y), yielding 232 observations. Players were subdivided by draft position (ranked 1-65) and then by draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection. Physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces. Juniors with contested skill appear to be prioritised.

  15. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
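    Two of the motion metrics named above can be computed directly from a time series of 3-D instrument positions, as in this sketch of total path length and a jerk-based smoothness measure (the exact definitions used in the tracking system may differ):

```python
import numpy as np

def path_length(xyz):
    """Sum of Euclidean distances between successive samples (xyz: N x 3 array)."""
    return float(np.sum(np.linalg.norm(np.diff(xyz, axis=0), axis=1)))

def mean_squared_jerk(xyz, dt):
    """Jerk-based (lack of) smoothness: larger values indicate less smooth motion."""
    jerk = np.diff(xyz, n=3, axis=0) / dt ** 3
    return float(np.mean(np.sum(jerk ** 2, axis=1)))

xyz = np.cumsum(np.random.randn(500, 3), axis=0) * 0.001   # placeholder trajectory [m]
print(path_length(xyz), mean_squared_jerk(xyz, dt=1.0 / 30.0))
```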

  16. SU-C-BRB-05: Determining the Adequacy of Auto-Contouring Via Probabilistic Assessment of Ensuing Treatment Plan Metrics in Comparison with Manual Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nourzadeh, H; Watkins, W; Siebers, J

    Purpose: To determine if auto-contour and manual-contour-based plans differ when evaluated with respect to probabilistic coverage metrics and biological model endpoints for prostate IMRT. Methods: Manual and auto-contours were created for 149 CT image sets acquired from 16 unique prostate patients. A single physician manually contoured all images. Auto-contouring was completed utilizing Pinnacle's Smart Probabilistic Image Contouring Engine (SPICE). For each CT, three different 78 Gy/39 fraction 7-beam IMRT plans were created: PD with drawn ROIs, PAS with auto-contoured ROIs, and PM with auto-contoured OARs and the manually drawn target. For each plan, 1000 virtual treatment simulations, with a different sampled systematic error for each simulation and a different sampled random error for each fraction, were performed using our in-house GPU-accelerated robustness analyzer tool, which reports the statistical probability of achieving dose-volume metrics, NTCP, TCP, and the probability of achieving the optimization criteria for both auto-contoured (AS) and manually drawn (D) ROIs. Metrics are reported for all possible cross-evaluation pairs of ROI types (AS, D) and planning scenarios (PD, PAS, PM). The Bhattacharyya coefficient (BC) is calculated to measure the PDF similarities for the dose-volume metric, NTCP, TCP, and objectives with respect to the manually drawn contour evaluated on the base plan (D-PD). Results: We observe high BC values (BC≥0.94) for all OAR objectives. BC values of the max dose objective on the CTV also signify high resemblance (BC≥0.93) between the distributions. On the other hand, BC values for the CTV's D95 and Dmin objectives are small for AS-PM and AS-PD. NTCP distributions are similar across all evaluation pairs, while TCP distributions of AS-PM and AS-PD sustain variations of up to 6% compared to the other evaluated pairs. Conclusion: No significant probabilistic differences are observed in the metrics when auto-contoured OARs are used. The prostate auto-contour needs improvement to achieve clinically equivalent plans.
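    The Bhattacharyya coefficient used above to compare metric distributions is BC = Σ sqrt(p_i·q_i) for two discretized probability distributions, equal to 1 for identical and 0 for non-overlapping distributions. A short sketch with illustrative histograms:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (normalized internally)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(np.sqrt(p * q)))

hist_manual = [5, 20, 50, 20, 5]   # e.g. binned D95 values over 1000 virtual treatments
hist_auto = [4, 22, 48, 21, 5]
print(bhattacharyya(hist_manual, hist_auto))
```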

  17. Metric traffic signal design manual

    DOT National Transportation Integrated Search

    2003-03-01

    This manual is for information purposes only and may be used to aid new employees, and those unfamiliar with ODOT Traffic Engineering practices, in accessing and applying applicable standards, statutes, rules, and policies related to railroad preempt...

  18. OWL2 benchmarking for the evaluation of knowledge based systems.

    PubMed

    Khan, Sher Afgun; Qadir, Muhammad Abdul; Abbas, Muhammad Azeem; Afzal, Muhammad Tanvir

    2017-01-01

    OWL2 semantics are becoming increasingly popular for real-world domain applications like gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this research gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e., data schema, workload, and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database, and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e., domain experts) would be able to select a KBS appropriate for their domain.

  19. Assessment of pilot workload - Converging measures from performance based, subjective and psychophysiological techniques

    NASA Technical Reports Server (NTRS)

    Kramer, Arthur F.; Sirevaag, Erik J.; Braune, Rolf

    1986-01-01

    This study explores the relationship between the P300 component of the event-related brain potential (ERP) and the processing demands of a complex real-world task. Seven male volunteers enrolled in an Instrument Flight Rules (IFR) aviation course flew a series of missions in a single-engine fixed-base simulator. In dual-task conditions, subjects were also required to discriminate between two tones differing in frequency. ERPs time-locked to the tones, subjective effort ratings, and overt performance measures were collected during two 45 min flights differing in difficulty (manipulated by varying both atmospheric conditions and instrument reliability). The more difficult flight was associated with poorer performance, increased subjective effort ratings, and smaller secondary-task P300s. Within each flight, P300 amplitude was negatively correlated with deviations from command headings, indicating that P300 amplitude was a sensitive workload metric both between and within the flight missions.

  20. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  1. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    PubMed

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  2. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  3. Questionable validity of the catheter-associated urinary tract infection metric used for value-based purchasing.

    PubMed

    Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K

    2015-10-01

    Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
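    The denominator issue described above can be illustrated with a simple rate calculation: the same infection count expressed per 1,000 catheter-days versus per 1,000 patient-days gives different pictures of performance (the counts below are illustrative only, not AHRQ or CDC data):

```python
def rate_per_1000(events: int, denominator_days: int) -> float:
    """Infection rate per 1,000 days of the chosen denominator."""
    return 1000.0 * events / denominator_days

cauti_events, catheter_days, patient_days = 12, 4_500, 30_000
print(rate_per_1000(cauti_events, catheter_days))   # per 1,000 catheter-days
print(rate_per_1000(cauti_events, patient_days))    # per 1,000 patient-days
```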

  4. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
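    The sigma metric referenced above is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV), all expressed in percent; a minimal sketch with illustrative numbers:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma metric = (allowable total error - |bias|) / coefficient of variation."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Example: TEa = 10%, bias = 1.5%, CV = 1.4%  ->  sigma of about 6 (minimal QC rules needed).
print(sigma_metric(10.0, 1.5, 1.4))
```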

  5. Quality Measures in Stroke

    PubMed Central

    Poisson, Sharon N.; Josephson, S. Andrew

    2011-01-01

    Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840

  6. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed. These models are traditionally evaluated using performance metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, they have notable shortcomings. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density and overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide insight into which of the existing and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
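    For comparison, one common fixation-based evaluation baseline is the Normalized Scanpath Saliency (NSS), the mean of the z-scored saliency map sampled at fixation locations. The sketch below shows that baseline, not the locally weighted metric proposed above, and the map and fixation points are placeholders.

```python
import numpy as np

def nss(saliency_map, fixation_points):
    """Mean z-scored saliency at fixated pixels; fixation_points: iterable of (row, col)."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return float(np.mean([s[r, c] for r, c in fixation_points]))

saliency = np.random.rand(480, 640)               # placeholder predicted saliency map
fixations = [(100, 200), (240, 320), (300, 500)]  # placeholder eye-tracking fixations
print(nss(saliency, fixations))
```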

  7. Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?

    PubMed

    Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J

    2014-02-01

    Virtual reality simulation for laparoscopic colectomy has been used for the training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (14 procedural; 7 intraoperative errors) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U test (p < 0.05 was considered significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
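    The group comparison described above rests on the Mann-Whitney U test applied to each metric; the sketch below shows that test on an illustrative metric (instrument path length) for two skill groups, with made-up values rather than study data.

```python
from scipy.stats import mannwhitneyu

general_surgeons = [412, 388, 450, 470, 395, 430]     # path length, arbitrary units
colorectal_experts = [310, 295, 330, 305, 320, 300]

stat, p_value = mannwhitneyu(general_surgeons, colorectal_experts, alternative="two-sided")
print(stat, p_value)   # p < 0.05 is taken as evidence that the metric separates skill levels
```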

  8. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B  = βB,A ), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim , performed well overall. Only βBaselga R turn , βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
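    As a point of reference for the abundance-based metrics evaluated above, the widely used Bray-Curtis dissimilarity can be computed directly from two species abundance vectors; this is an example of the metric class, not the authors' property test suite, and the abundances are invented.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

site_a = np.array([10, 5, 0, 3, 1])   # abundances of five species at assemblage A
site_b = np.array([8, 0, 2, 3, 0])    # abundances of the same species at assemblage B
print(braycurtis(site_a, site_b))     # 0 = identical assemblages, 1 = nothing shared
```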

  9. An Exploratory Study of OEE Implementation in Indian Manufacturing Companies

    NASA Astrophysics Data System (ADS)

    Kumar, J.; Soni, V. K.

    2015-04-01

    Globally, the implementation of Overall Equipment Effectiveness (OEE) has proven to be highly effective in improving availability, performance rate, and quality rate while reducing unscheduled breakdowns and wastage that stem from the equipment. This paper investigates the present status and future scope of OEE metrics in Indian manufacturing companies through an extensive survey. In this survey, opinions of Production and Maintenance Managers have been analyzed statistically to explore the relationship between factors, perspectives of OEE, and potential use of OEE metrics. Although the sample was diverse in terms of product, process type, size, and geographic location of the companies, all are compelled to implement improvement techniques such as OEE metrics to improve performance. The findings reveal that OEE metrics have huge potential and scope to improve performance. Responses indicate that Indian companies are aware of OEE but are not utilizing the full potential of OEE metrics.
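
    For readers unfamiliar with the metric surveyed above, the following Python sketch shows the conventional OEE decomposition (availability x performance rate x quality rate). The shift figures are hypothetical, and the formulation is the common textbook one rather than anything specific to the surveyed companies.

        def oee(planned_time_min, run_time_min, ideal_cycle_time_min, total_count, good_count):
            """Overall Equipment Effectiveness from one shift's production data."""
            availability = run_time_min / planned_time_min                       # uptime share
            performance = (ideal_cycle_time_min * total_count) / run_time_min    # speed share
            quality = good_count / total_count                                   # first-pass yield
            return availability * performance * quality

        # Hypothetical shift: 480 min planned, 400 min running, 0.5 min ideal cycle,
        # 700 units produced, 680 of them good.
        print(f"OEE = {oee(480, 400, 0.5, 700, 680):.1%}")  # ~70.8%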

  10. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters must be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
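
    The McCabe metric named above is the classic example of an equation-defined software metric: cyclomatic complexity V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A minimal illustration:

        def cyclomatic_complexity(num_edges: int, num_nodes: int, num_components: int = 1) -> int:
            """McCabe cyclomatic complexity V(G) = E - N + 2P of a control-flow graph."""
            return num_edges - num_nodes + 2 * num_components

        # A routine whose control-flow graph has 9 edges and 8 nodes (one component):
        print(cyclomatic_complexity(9, 8))  # -> 3

    In the neural network formulation described in the abstract, the mapping from measured program attributes to the metric value is instead learned from examples rather than stated as a closed-form equation.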

  11. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1991-01-01

    NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.

  12. System Engineering Concept Demonstration, System Engineering Needs. Volume 2

    DTIC Science & Technology

    1992-12-01

    changeability, and invisibility. "Software entities are perhaps more complex for their size than any other human construct..." In addition, software is... human actions and interactions that often fail or are insufficient in large organizations. Specific needs in this area include the following: * Each... needed to accomplish incremental review and critique of information. * Autom... metrics support is needed for measuring key quality aspects of

  13. SI (Metric) handbook

    NASA Technical Reports Server (NTRS)

    Artusa, Elisa A.

    1994-01-01

    This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation in both the US and the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information on SI usage for specialized applications like data processing and computer programming, science, engineering, and construction. Related information in the appendixes includes legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.
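
    As a small worked example of the conversion and rounding rules such a handbook covers, the Python sketch below converts a few US customary values to SI and rounds them to a chosen number of significant digits; the conversion factors shown are the standard published ones.

        def to_significant_digits(value: float, digits: int) -> float:
            """Round a value to the requested number of significant digits."""
            return float(f"{value:.{digits}g}")

        INCH_TO_MM = 25.4           # exact by definition
        POUND_TO_KG = 0.45359237    # exact by definition
        PSI_TO_KPA = 6.894757       # standard published factor

        print(to_significant_digits(12.0 * INCH_TO_MM, 4))    # 304.8 (mm)
        print(to_significant_digits(150.0 * POUND_TO_KG, 3))  # 68.0 (kg)
        print(to_significant_digits(32.0 * PSI_TO_KPA, 3))    # 221.0 (kPa)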

  14. Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems

    DTIC Science & Technology

    1991-09-01

    reasonable image quality predictions across select display and viewing condition parameters.

  15. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    PubMed

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decisions point. Then the Sigma-metric was estimated for each assay and was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric=(%TEa-|%bias|)/%CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at least at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at Five and Six Sigma. None performed below Three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5 and thus laboratories can expect excellent or world class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
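
    The Sigma-metric equation quoted in the abstract is simple enough to show directly; the numbers in the example below are hypothetical and not taken from the study.

        def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
            """Analytical Sigma-metric = (%TEa - |%bias|) / %CV."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Hypothetical assay: 10% allowable total error, 1.5% bias, 1.2% CV.
        print(f"Sigma = {sigma_metric(10.0, 1.5, 1.2):.1f}")  # ~7.1, i.e., above six Sigma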

  16. Long-term health experience of jet engine manufacturing workers: VI: incidence of malignant central nervous system neoplasms in relation to estimated workplace exposures.

    PubMed

    Marsh, Gary M; Youk, Ada O; Buchanich, Jeanine M; Xu, Hui; Downing, Sarah; Kennedy, Kathleen J; Esmen, Nurtan A; Hancock, Roger P; Lacey, Steven E; Fleissner, Mary Lou

    2013-06-01

    To determine whether glioblastoma (GB) incidence rates among jet engine manufacturing workers were associated with specific chemical or physical exposures. Subjects were 210,784 workers employed from 1952 to 2001. We conducted a cohort incidence study and two nested case-control studies with focus on the North Haven facility where we previously observed a not statistically significant overall elevation in GB rates. We estimated individual-level exposure metrics for 11 agents. In the total cohort, none of the agent metrics considered was associated with increased GB risk. The GB incidence rates in North Haven were also not related to workplace exposures, including the "blue haze" exposure unique to North Haven. If not due to chance alone, GB rates in North Haven may reflect external occupational factors, nonoccupational factors, or workplace factors unique to North Haven unmeasured in the current evaluation.

  17. Human Engineering of Space Vehicle Displays and Controls

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko

    2010-01-01

    Proper attention to the integration of the human needs in the vehicle displays and controls design process creates a safe and productive environment for crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses the process of how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.

  18. Restaurant Energy Use Benchmarking Guideline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
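
    A hedged sketch of the benchmarking idea described above: derive a simple intensity metric from each store's own utility data (here, annual kWh per square foot) and flag stores well above the portfolio median. The field names and the 120% threshold are illustrative assumptions, not values from the report.

        from statistics import median

        stores = [
            {"id": "A", "annual_kwh": 620_000, "floor_area_sqft": 3100},
            {"id": "B", "annual_kwh": 540_000, "floor_area_sqft": 2900},
            {"id": "C", "annual_kwh": 910_000, "floor_area_sqft": 3000},
        ]

        for s in stores:
            s["eui"] = s["annual_kwh"] / s["floor_area_sqft"]  # energy use intensity, kWh/sqft-yr

        benchmark = median(s["eui"] for s in stores)
        for s in stores:
            if s["eui"] > 1.2 * benchmark:
                print(f"Store {s['id']}: EUI {s['eui']:.0f} kWh/sqft-yr exceeds 120% of the portfolio median")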

  19. Improving Department of Defense Global Distribution Performance Through Network Analysis

    DTIC Science & Technology

    2016-06-01

    network performance increase. 14. SUBJECT TERMS supply chain metrics, distribution networks, requisition shipping time, strategic distribution database...peace and war” (p. 4). USTRANSCOM Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes- based supply chain metrics to...2014a, p. 8). The Joint Staff defines a TDD standard as the maximum number of days the supply chain can take to deliver requisitioned materiel

  20. Tide or Tsunami? The Impact of Metrics on Scholarly Research

    ERIC Educational Resources Information Center

    Bonnell, Andrew G.

    2016-01-01

    Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…

  1. On Railroad Tank Car Puncture Performance: Part I - Considering Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the first in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perform...

  2. Tracking occupational hearing loss across global industries: A comparative analysis of metrics

    PubMed Central

    Rabinowitz, Peter M.; Galusha, Deron; McTague, Michael F.; Slade, Martin D.; Wesdock, James C.; Dixon-Ernst, Christine

    2013-01-01

    Occupational hearing loss is one of the most prevalent occupational conditions; yet, there is no acknowledged international metric to allow comparisons of risk between different industries and regions. In order to make recommendations for an international standard of occupational hearing loss, members of an international industry group (the International Aluminium Association) submitted details of different hearing loss metrics currently in use by members. We compared the performance of these metrics using an audiometric data set for over 6000 individuals working in 10 locations of one member company. We calculated rates for each metric at each location from 2002 to 2006. For comparison, we calculated the difference of observed–expected (for age) binaural high frequency hearing loss (in dB/year) for each location over the same time period. We performed linear regression to determine the correlation between each metric and the observed–expected rate of hearing loss. The different metrics produced discrepant results, with annual rates ranging from 0.0% for a less-sensitive metric to more than 10% for a highly sensitive metric. At least two metrics, a 10 dB age-corrected threshold shift from baseline and a 15 dB nonage-corrected shift metric, correlated well with the difference of observed–expected high-frequency hearing loss. This study suggests that it is feasible to develop an international standard for tracking occupational hearing loss in industrial working populations. PMID:22387709
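
    As a hedged sketch of one family of metrics compared above, the Python snippet below computes a threshold-shift measure for a single ear: the mean change at 2, 3, and 4 kHz relative to baseline, flagged at 10 dB. The 2-3-4 kHz averaging follows the common OSHA-style convention and is an assumption here; the exact company definitions in the study may differ (for example, in age correction).

        STS_FREQS_HZ = (2000, 3000, 4000)  # frequencies conventionally averaged

        def threshold_shift_db(baseline_db: dict, current_db: dict) -> float:
            """Mean hearing-threshold shift (dB) at the standard frequencies, one ear."""
            return sum(current_db[f] - baseline_db[f] for f in STS_FREQS_HZ) / len(STS_FREQS_HZ)

        baseline = {2000: 10, 3000: 15, 4000: 20}   # dB HL, hypothetical
        current = {2000: 20, 3000: 30, 4000: 35}    # dB HL, hypothetical
        shift = threshold_shift_db(baseline, current)
        print(f"shift = {shift:.1f} dB, flagged (>= 10 dB) = {shift >= 10}")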

  3. Do Your Students Measure Up Metrically?

    ERIC Educational Resources Information Center

    Taylor, P. Mark; Simms, Ken; Kim, Ok-Kyeong; Reys, Robert E.

    2001-01-01

    Examines released metric items from the Third International Mathematics and Science Study (TIMSS) and the 3rd and 4th grade results. Recommends refocusing instruction on the metric system to improve student performance in measurement. (KHR)

  4. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  5. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line by line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through the sequential extraction of received speckle information, the speckle metrics for estimating the variation of focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and how the aperture size of the observation system affects metric performance. The results of the analyses are verified by experiment. This method is applied to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, the metric can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system to optimize the focusing performance.

  6. Advanced Reciprocating Engine Systems (ARES) Research at Argonne National Laboratory. A Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Sreenath; Biruduganti, Muni; Bihari, Bipin

    The goals of these experiments were to determine the potential of employing spectral measurements to deduce combustion metrics such as HRR, combustion temperatures, and equivalence ratios in a natural gas-fired reciprocating engine. A laser-ignited, natural gas-fired single-cylinder research engine was operated at various equivalence ratios between 0.6 and 1.0, while varying the EGR levels between 0% and maximum to thereby ensure steady combustion. Crank angle-resolved spectral signatures were collected over 266-795 nm, encompassing chemiluminescence emissions from OH* and CH* and, predominantly, from CO2* species. Further, laser-induced gas breakdown spectra were recorded under various engine operating conditions.

  7. Signal Processing Methods for Liquid Rocket Engine Combustion Spontaneous Stability and Rough Combustion Assessments

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Casiano, Matthew; Fischbach, Sean; Hulka, James R.

    2012-01-01

    Liquid rocket engine combustion stability assessments are traditionally broken into three categories: dynamic stability, spontaneous stability, and rough combustion. This work focuses on comparing the spontaneous stability and rough combustion assessments for several liquid engine programs. The techniques used are those developed at Marshall Space Flight Center (MSFC) for the J-2X Workhorse Gas Generator program. Stability assessment data from the Integrated Powerhead Demonstrator (IPD), FASTRAC, and Common Extensible Cryogenic Engine (CECE) programs are compared against previously processed J-2X Gas Generator data. Prior metrics for spontaneous stability assessments are updated based on the compilation of all data sets.

  8. Context and meter enhance long-range planning in music performance

    PubMed Central

    Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline

    2015-01-01

    Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550

  9. Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.; hide

    2011-01-01

    Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
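
    For readers unfamiliar with the simplest of the metrics referred to above, the sketch below computes the root-mean-square difference between one modeled and one observed ground magnetic perturbation series; the values are synthetic.

        import numpy as np

        def rms_difference(observed: np.ndarray, modeled: np.ndarray) -> float:
            """Root-mean-square difference between two equally sampled time series."""
            return float(np.sqrt(np.mean((modeled - observed) ** 2)))

        obs = np.array([12.0, 40.0, 85.0, 60.0, 20.0])  # observed perturbation samples, synthetic
        mod = np.array([10.0, 55.0, 70.0, 65.0, 15.0])  # model output at the same times, synthetic
        print(f"RMS difference = {rms_difference(obs, mod):.1f}")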

  10. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.

  11. Greenroads : a sustainability performance metric for roadway design and construction.

    DOT National Transportation Integrated Search

    2009-11-01

    Greenroads is a performance metric for quantifying sustainable practices associated with roadway design and construction. Sustainability is defined as having seven key components: ecology, equity, economy, extent, expectations, experience and exposur...

  12. Performance metrics used by freight transport providers.

    DOT National Transportation Integrated Search

    2008-09-30

    The newly-established National Cooperative Freight Research Program (NCFRP) has allocated $300,000 in funding to a project entitled Performance Metrics for Freight Transportation (NCFRP 03). The project is scheduled for completion in September ...

  13. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper will investigate energy-efficiency for various real-world industrial computed-tomography reconstruction algorithms, both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches [1] realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O by implementing a parallel MIMD-like modularization of the compute and I/O tasks.
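
    The three efficiency metrics named above follow directly from runtime, average power, and the amount of work done; the sketch below is a worked illustration with hypothetical numbers for one CPU and one GPU reconstruction of the same problem.

        def energy_metrics(runtime_s: float, avg_power_w: float, work_units: float = 1.0):
            """Return (energy in J, performance-per-watt, energy-delay product in J*s)."""
            energy_j = avg_power_w * runtime_s
            perf_per_watt = (work_units / runtime_s) / avg_power_w   # throughput per watt
            energy_delay = energy_j * runtime_s                      # lower is better
            return energy_j, perf_per_watt, energy_delay

        cpu = energy_metrics(runtime_s=3600.0, avg_power_w=250.0)    # hypothetical CPU run
        gpu = energy_metrics(runtime_s=400.0, avg_power_w=300.0)     # hypothetical GPU run
        print(f"CPU: E={cpu[0]:.0f} J, perf/W={cpu[1]:.2e}, EDP={cpu[2]:.2e}")
        print(f"GPU: E={gpu[0]:.0f} J, perf/W={gpu[1]:.2e}, EDP={gpu[2]:.2e}")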

  14. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines.The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.

  15. Measuring strategic success.

    PubMed

    Gish, Ryan

    2002-08-01

    Strategic triggers and metrics help healthcare providers achieve financial success. Metrics help assess progress toward long-term goals. Triggers signal market changes requiring a change in strategy. Not all metrics move in concert. Organizations need to identify indicators and monitor performance.

  16. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom-designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
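
    A hedged sketch of the cell-based computation described above: the screen is divided into a grid and gaze samples are counted per cell, from which dwell-time style metrics can be derived. The grid size and screen resolution below are illustrative assumptions.

        import numpy as np

        def gaze_counts_per_cell(gaze_xy, screen_wh=(1920, 1080), grid=(4, 3)):
            """Count gaze samples falling in each cell of a cols x rows screen grid."""
            counts = np.zeros((grid[1], grid[0]), dtype=int)   # rows x cols
            cell_w, cell_h = screen_wh[0] / grid[0], screen_wh[1] / grid[1]
            for x, y in gaze_xy:
                col = min(int(x // cell_w), grid[0] - 1)
                row = min(int(y // cell_h), grid[1] - 1)
                counts[row, col] += 1
            return counts

        samples = np.array([[100, 80], [500, 400], [520, 420], [1800, 1000]])  # pixel coordinates
        print(gaze_counts_per_cell(samples))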

  17. Photographic assessment of retroreflective film properties

    NASA Astrophysics Data System (ADS)

    Burgess, G.; Shortis, M. R.; Scott, P.

    2011-09-01

    Retroreflective film is used widely for target manufacture in close-range photogrammetry, especially where high precision is required for applications in industrial or engineering metrology. 3M Scotchlite 7610 high gain reflective sheeting is the gold standard for retroreflective targets because of the high level of response for incidence angles up to 60°. Retroreflective film is now widely used in the transport industry for signage and many other types of film have become available. This study reports on the performance of six types of retroreflective sheeting, including 7610, based on published metrics for reflectance. Measurements were made using a camera and flash, so as to be directly applicable to photogrammetry. Analysis of the results from this project and the assessment of previous research indicates that the use of standards is essential to enable a valid comparison of retroreflective performance.

  18. AN ADVANCED SYSTEM FOR POLLUTION PREVENTION IN CHEMICAL COMPLEXES

    EPA Science Inventory

    One important accomplishment is that the system will give process engineers interactive and simultaneous use of programs for total cost analysis, life cycle assessment, and sustainability metrics to provide direction for the optimal chemical complex analysis pro...

  19. Aircraft noise prediction program theoretical manual: Rotorcraft System Noise Prediction System (ROTONET), part 4

    NASA Technical Reports Server (NTRS)

    Weir, Donald S.; Jumper, Stephen J.; Burley, Casey L.; Golub, Robert A.

    1995-01-01

    This document describes the theoretical methods used in the rotorcraft noise prediction system (ROTONET), which is a part of the NASA Aircraft Noise Prediction Program (ANOPP). The ANOPP code consists of an executive, database manager, and prediction modules for jet engine, propeller, and rotor noise. The ROTONET subsystem contains modules for the prediction of rotor airloads and performance with momentum theory and prescribed wake aerodynamics, rotor tone noise with compact chordwise and full-surface solutions to the Ffowcs-Williams-Hawkings equations, semiempirical airfoil broadband noise, and turbulence ingestion broadband noise. Flight dynamics, atmosphere propagation, and noise metric calculations are covered in NASA TM-83199, Parts 1, 2, and 3.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almajali, Anas; Rice, Eric; Viswanathan, Arun

    This paper presents a systems analysis approach to characterizing the risk of a Smart Grid to a load-drop attack. A characterization of the risk is necessary for the design of detection and remediation strategies to address the consequences of such attacks. Using concepts from systems health management and system engineering, this work (a) first identifies metrics that can be used to generate constraints for security features, and (b) lays out an end-to-end integrated methodology using separate network and power simulations to assess system risk. We demonstrate our approach by performing a systems-style analysis of a load-drop attack implemented over the AMI subsystem and targeted at destabilizing the underlying power grid.

  1. Foul tip impact attenuation of baseball catcher masks using head impact metrics

    PubMed Central

    White, Terrance R.; Cutcliffe, Hattie C.; Shridharani, Jay K.; Wood, Garrett W.; Bass, Cameron R.

    2018-01-01

    Currently, no scientific consensus exists on the relative safety of catcher mask styles and materials. Due to differences in mass and material properties, the style and material of a catcher mask influence the impact metrics observed during simulated foul ball impacts. The catcher surrogate was a Hybrid III head and neck equipped with a six degree of freedom sensor package to obtain linear accelerations and angular rates. Four mask styles were impacted using an air cannon for six 30 m/s and six 35 m/s impacts to the nasion. To quantify impact severity, the metrics peak linear acceleration, peak angular acceleration, Head Injury Criterion, Head Impact Power, and Gadd Severity Index were used. An Analysis of Covariance and a Tukey's HSD Test were conducted to compare the least squares mean between masks for each head injury metric. For each injury metric, a P-value less than 0.05 was found, indicating a significant difference in mask performance. Tukey's HSD test found that, for each metric, the traditional-style titanium mask fell in the lowest performance category while the hockey-style mask was in the highest performance category. Limitations of this study prevented a direct correlation of mask testing performance with mild traumatic brain injury.
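
    Of the severity measures listed above, the Head Injury Criterion is the one most often reimplemented; the sketch below evaluates HIC = max over windows (t1, t2) of (t2 - t1)[(1/(t2 - t1)) * integral of a dt]^2.5, with acceleration in g and time in seconds. The 15 ms window cap and the synthetic pulse are assumptions for illustration, not the study's settings.

        import numpy as np

        def hic(time_s: np.ndarray, accel_g: np.ndarray, max_window_s: float = 0.015) -> float:
            """Head Injury Criterion via brute-force search over integration windows."""
            # cumulative trapezoidal integral of a(t)
            cum = np.concatenate(([0.0], np.cumsum(0.5 * (accel_g[1:] + accel_g[:-1]) * np.diff(time_s))))
            best = 0.0
            for i in range(len(time_s) - 1):
                for j in range(i + 1, len(time_s)):
                    dt = time_s[j] - time_s[i]
                    if dt > max_window_s:
                        break
                    avg_a = (cum[j] - cum[i]) / dt
                    if avg_a > 0:
                        best = max(best, dt * avg_a ** 2.5)
            return best

        t = np.linspace(0.0, 0.02, 201)                   # 20 ms record, 0.1 ms steps
        a = 80.0 * np.exp(-((t - 0.01) / 0.003) ** 2)     # synthetic ~80 g impact pulse
        print(f"HIC = {hic(t, a):.0f}")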

  2. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  3. Proposed Performance-Based Metrics for the Future Funding of Graduate Medical Education: Starting the Conversation.

    PubMed

    Caverzagie, Kelly J; Lane, Susan W; Sharma, Niraj; Donnelly, John; Jaeger, Jeffrey R; Laird-Fick, Heather; Moriarty, John P; Moyer, Darilyn V; Wallach, Sara L; Wardrop, Richard M; Steinmann, Alwin F

    2017-12-12

    Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.

  4. The importance of metrics for evaluating scientific performance

    NASA Astrophysics Data System (ADS)

    Miyakawa, Tsuyoshi

    Evaluation of scientific performance is a major factor that determines the behavior of both individual researchers and the academic institutes to which they belong. Because the number of researchers heavily outweighs the number of available research posts, and competitive funding accounts for an ever-increasing proportion of the research budget, some objective indicators of research performance have gained recognition for increasing transparency and openness. It is common practice to use metrics and indices to evaluate a researcher's performance or the quality of their grant applications. Such measures include the number of publications, the number of times these papers are cited and, more recently, the h-index, which measures the number of highly-cited papers the researcher has written. However, academic institutions and funding agencies in Japan have been rather slow to adopt such metrics. In this article, I will outline some of the currently available metrics, and discuss why we need to use such objective indicators of research performance more often in Japan. I will also discuss how to promote the use of metrics and what we should keep in mind when using them, as well as their potential impact on the research community in Japan.
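
    The h-index mentioned above has a compact definition (the largest h such that h of the researcher's papers each have at least h citations), shown here with a hypothetical citation list.

        def h_index(citation_counts):
            """Largest h such that h papers have at least h citations each."""
            h = 0
            for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3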

  5. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.

  6. Variability of patient spine education by Internet search engine.

    PubMed

    Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S

    2014-03-01

    Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine and search term, and changes daily. There are no tools for critical evaluation of spinal surgery websites. To highlight the variability between common search engines for the same search terms. To detect bias by the prevalence of specific kinds of websites for certain spinal disorders. To demonstrate a simple scoring system for spinal disorder websites for patient use, to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational web sites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is a clear variability in the way search engines present information to the patient. Published by Elsevier B.V.

  7. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    PubMed

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first-generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution from these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing best and noise and muscular artifacts being the most confounding factors. In contrast, there is wide variability with regard to initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
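
    A minimal sketch of one of the first-generation metrics evaluated above, Sample Entropy: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance) and A counts the corresponding length-(m+1) matches, self-matches excluded. The parameter choices below (m = 2, r = 0.2 times the standard deviation) are the common defaults, not necessarily those of the study.

        import numpy as np

        def sample_entropy(x, m: int = 2, r_factor: float = 0.2) -> float:
            """SampEn(m, r) of a 1-D signal with r = r_factor * standard deviation."""
            x = np.asarray(x, dtype=float)
            r, n = r_factor * np.std(x), len(x)

            def match_count(length: int) -> int:
                templates = np.array([x[i:i + length] for i in range(n - m)])
                count = 0
                for i in range(len(templates)):
                    for j in range(i + 1, len(templates)):
                        if np.max(np.abs(templates[i] - templates[j])) <= r:
                            count += 1
                return count

            b, a = match_count(m), match_count(m + 1)
            return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

        rng = np.random.default_rng(0)
        signal = np.sin(np.linspace(0, 8 * np.pi, 300)) + 0.1 * rng.standard_normal(300)
        print(sample_entropy(signal))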

  8. Guidelines for evaluating performance of oyster habitat restoration

    USGS Publications Warehouse

    Baggett, Lesley P.; Powers, Sean P.; Brumbaugh, Robert D.; Coen, Loren D.; DeAngelis, Bryan M.; Greene, Jennifer K.; Hancock, Boze T.; Morlock, Summer M.; Allen, Brian L.; Breitburg, Denise L.; Bushek, David; Grabowski, Jonathan H.; Grizzle, Raymond E.; Grosholz, Edwin D.; LaPeyre, Megan K.; Luckenbach, Mark W.; McGraw, Kay A.; Piehler, Michael F.; Westby, Stephanie R.; zu Ermgassen, Philine S. E.

    2015-01-01

    Restoration of degraded ecosystems is an important societal goal, yet inadequate monitoring and the absence of clear performance metrics are common criticisms of many habitat restoration projects. Funding limitations can prevent adequate monitoring, but we suggest that the lack of accepted metrics to address the diversity of restoration objectives also presents a serious challenge to the monitoring of restoration projects. A working group with experience in designing and monitoring oyster reef projects was used to develop standardized monitoring metrics, units, and performance criteria that would allow for comparison among restoration sites and projects of various construction types. A set of four universal metrics (reef areal dimensions, reef height, oyster density, and oyster size–frequency distribution) and a set of three universal environmental variables (water temperature, salinity, and dissolved oxygen) are recommended to be monitored for all oyster habitat restoration projects regardless of their goal(s). In addition, restoration goal-based metrics specific to four commonly cited ecosystem service-based restoration goals are recommended, along with an optional set of seven supplemental ancillary metrics that could provide information useful to the interpretation of prerestoration and postrestoration monitoring data. Widespread adoption of a common set of metrics with standardized techniques and units to assess well-defined goals not only allows practitioners to gauge the performance of their own projects but also allows for comparison among projects, which is both essential to the advancement of the field of oyster restoration and can provide new knowledge about the structure and ecological function of oyster reef ecosystems.

  9. Low bandwidth robust controllers for flight

    NASA Technical Reports Server (NTRS)

    Biezad, Daniel J.; Chou, Hwei-Lan

    1992-01-01

    During the final reporting period (Jun. - Dec. 1992), analyses of the longitudinal and lateral flying qualities were made for propulsive-only flight control (POFC) of a Boeing 720 aircraft model. Performance resulting from compensators developed using Quantitative Feedback Theory (QFT) is documented and analyzed. This report is a first draft of a thesis to be presented by graduate student Hwei-Lan Chou. The final thesis will be presented to NASA when it is completed later this year. The latest landing metrics related to bandwidth criteria and based on the Neal-Smith approach to flying qualities prediction were used in developing performance criteria for the controllers. The compensator designs were tested on the NASA simulator and exhibited adequate performance for piloted flight. There was no significant impact of QFT on performance of the propulsive-only flight controllers in either the longitudinal or lateral modes of flight. This was attributed to the physical limits of thrust available and the engine rate of response, both of which severely limited the available bandwidth of the closed-loop system.

  10. Small difference in carcinogenic potency between GBP nanomaterials and GBP micromaterials.

    PubMed

    Gebel, Thomas

    2012-07-01

    Materials that can be described as respirable granular biodurable particles without known significant specific toxicity (GBP) show a common mode of toxicological action that is characterized by inflammation and carcinogenicity in chronic inhalation studies in the rat. This study was carried out to compare the carcinogenic potency of GBP nanomaterials (primary particle diameter 1-100 nm) to GBP micromaterials (primary particle diameter >100 nm) in a pooled approach. For this purpose, the positive GBP rat inhalation carcinogenicity studies have been evaluated. Inhalation studies on diesel engine emissions have also been included due to the fact that the mode of carcinogenic action is assumed to be the same. As it is currently not clear which dose metrics may best explain carcinogenic potency, different metrics have been considered. Cumulative exposure concentrations related to mass, surface area, and primary particle volume have been included as well as cumulative lung burden metrics related to mass, surface area, and primary particle volume. In total, 36 comparisons have been conducted. Including all dose metrics, GBP nanomaterials were 1.33- to 1.69-fold (mean values) and 1.88- to 3.54-fold (median values) more potent with respect to carcinogenicity than GBP micromaterials, respectively. Nine of these 36 comparisons showed statistical significance (p < 0.05, U test), all of which related to dose metrics based on particle mass. The maximum comparative potency factor obtained for one of these 9 dose metric comparisons based on particle mass was 4.71. The studies with diesel engine emissions did not have a major impact on the potency comparison. The average duration of the carcinogenicity studies with GBP nanomaterials was 4 months longer (median values 30 vs. 26 months) than the studies with GBP micromaterials, respectively. Tumor rates increase with age and lung tumors in the rat induced by GBP materials are known to appear late, that is, mainly after study durations longer than 24 months. Taking the different study durations into account, the real potency differences were estimated to be twofold lower than the relative potency factors identified. In conclusion, the chronic rat inhalation studies with GBP materials indicate that the difference in carcinogenic potency between GBP nanomaterials and GBP micromaterials is low and can be described by a factor of 2-2.5, referring to the dose metric of mass concentration.

  11. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and the limit of detection. We introduce a new framework to compare the performance ...

  12. Teaching Science in Engineering Freshman Class in Private University in Jordan

    NASA Astrophysics Data System (ADS)

    Hawarey, M. M.; Malkawi, M. I.

    2012-04-01

    A United Nations initiative for the Arab region that established and calculated a National Intellectual Capital Index has shown that Jordan is the wealthiest Arab country in its National Human Capital Index (i.e. metrics: literacy rate, number of tertiary schools per capita, percentage of primary teachers with required qualifications, number of tertiary students per capita, cumulative tertiary graduates per capita, percentage of male grade 1 net intake, percentage of female grade 1 net intake) and National Market Capital Index (i.e. metrics: high-technology exports as a percentage of GDP, number of patents granted by USPTO per capita, number of meetings hosted per capita) despite its low ranking when it comes to National Financial Capital (i.e. metric: GDP per capita). The societal fabric in Jordan fully justifies this: the attention paid to education is extreme and sometimes is considered fanatic (e.g. marriage of a lot of couples needs to wait until both graduate from the university). Also, the low financial capital has forced a lot of people to become resourceful in order to provide a decent living standard to their beloved ones. This reality is partially manifested in the sharp increase in the number of universities (i.e. 10 public and 20 private ones) relative to a population of around 6.5 million. Once in an engineering freshman classroom, it is totally up to the lecturers teaching science in private Jordanian universities to excel in their performance and find a way to inject the needed scientific concepts into the students' brains. For that, clips from movies that are relevant to the topics and truthful in their scientific essence have been tested (e.g. to explain the pressure on humans due to rapidly increasing "g" force, a clip from the movie "Armageddon" proved very helpful to Physics 101 students, and entertaining at the same time), plastic toys have also been tested to illustrate simple physical concepts to the same students (e.g. a set called The Junior Engineer covers vast concepts relevant to Newton's Laws and Work-Energy Theorem, while originally aimed at 3-year-old kids), and YouTube has become so rich in its scientific content that it has not been hard to find any experiment or simulation there so that the students connect the dry blackboard and chalk to real life. As freshmen are still immature and sensing their way through, wondering if they will be able to get the title of Engineer or not, the usage of such familiar mediums and tools such as movies, toys, videos and simulations to illustrate basics to them has proved efficient and is regarded as an ideal ice-breaker towards a challenging journey of engineering classes. As long as the scientific content is not compromised, we believe that more mediums should be tested. This paper will highlight these experiences.

  13. Characterizing the impact of spatiotemporal variations in stormwater infrastructure on hydrologic conditions

    NASA Astrophysics Data System (ADS)

    Jovanovic, T.; Mejia, A.; Hale, R. L.; Gironas, J. A.

    2015-12-01

    Urban stormwater infrastructure design has evolved in time, reflecting changes in stormwater policy and regulations, and in engineering design. This evolution makes urban basins heterogeneous socio-ecological-technological systems. We hypothesize that this heterogeneity creates unique impact trajectories in time and impact hotspots in space within and across cities. To explore this, we develop and implement a network hydro-engineering modeling framework based on high-resolution digital elevation and stormwater infrastructure data. The framework also accounts for climatic, soils, land use, and vegetation conditions in an urban basin, thus making it useful to study the impacts of stormwater infrastructure across cities. Here, to evaluate the framework, we apply it to urban basins in the metropolitan areas of Phoenix, Arizona. We use it to estimate different metrics to characterize the storm-event hydrologic response. We estimate both traditional metrics (e.g., peak flow, time to peak, and runoff volume) as well as new metrics (e.g., basin-scale dispersion mechanisms). We also use the dispersion mechanisms to assess the scaling characteristics of urban basins. Ultimately, we find that the proposed framework can be used to understand and characterize the impacts associated with stormwater infrastructure on hydrologic conditions within a basin. Additionally, we find that the scaling approach helps in synthesizing information but it requires further validation using additional urban basins.
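
    As a small illustration of the traditional event metrics listed above, the sketch below extracts peak flow, time to peak, and runoff volume from a synthetic storm hydrograph.

        import numpy as np

        def event_metrics(time_hr: np.ndarray, flow_cms: np.ndarray):
            """Peak flow (m^3/s), time to peak (h), and runoff volume (m^3) of one event."""
            peak_flow = float(np.max(flow_cms))
            time_to_peak = float(time_hr[int(np.argmax(flow_cms))])
            dt_s = np.diff(time_hr) * 3600.0
            volume = float(np.sum(0.5 * (flow_cms[1:] + flow_cms[:-1]) * dt_s))  # trapezoidal rule
            return peak_flow, time_to_peak, volume

        t = np.arange(0.0, 6.0, 0.25)                        # hours
        q = 5.0 * np.exp(-((t - 2.0) / 0.75) ** 2) + 0.2     # synthetic hydrograph, m^3/s
        peak, tp, vol = event_metrics(t, q)
        print(f"peak = {peak:.2f} m^3/s at t = {tp:.2f} h, volume = {vol:.0f} m^3")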

  14. Measuring the Rate of Change in Sea Level and Its Adherence to USACE Sea Level Rise Planning Scenarios Using Timeseries Metrics

    NASA Astrophysics Data System (ADS)

    White, K. D.; Huang, N.; Huber, M.; Veatch, W.; Moritz, H.; Obrien, P. S.; Friedman, D.

    2017-12-01

    In 2013, the United States Army Corps of Engineers (USACE) issued guidance for all Civil Works activities to incorporate the effects of sea level change as described in three distinct planning scenarios.[1] These planning scenarios provided a useful framework to incorporate these effects into Civil Works activities, but required the manual calculation of these scenarios for a given gage and set of datum. To address this need, USACE developed the Sea Level Change Curve Calculator (SLCCC) in 2014 which provided a "simple, web-based tool to provide repeatable analytical results."[2]USACE has been developing a successor to the SLCCC application which retains the same, intuitive functionality to calculate these planning scenarios, but it also allows the comparison of actual sea level change between 1992 and today against the projections, and builds on the user's ability to understand the rate of change using a variety of timeseries metrics (e.g. moving averages, trends) and related visualizations. These new metrics help both illustrate and measure the complexity and nuances of sea level change. [1] ER 1000-2-8162. http://www.publications.usace.army.mil/Portals/76/Publications/EngineerRegulations/ER_1100-2-8162.pdf. [2] SLCC Manual. http://www.corpsclimate.us/docs/SLC_Calculator_Manual_2014_88.pdf.
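
    A hedged sketch of the kind of timeseries metrics mentioned above: a moving average and a fitted linear trend (mm/yr) for a monthly mean sea level record, against which a planning-scenario curve could be compared. The series below is synthetic.

        import numpy as np

        def moving_average(x: np.ndarray, window: int) -> np.ndarray:
            """Simple moving average over the valid portion of the record."""
            return np.convolve(x, np.ones(window) / window, mode="valid")

        def linear_trend_mm_per_yr(years: np.ndarray, msl_mm: np.ndarray) -> float:
            """Slope of a least-squares line through the record, in mm per year."""
            slope, _ = np.polyfit(years, msl_mm, 1)
            return float(slope)

        months = np.arange(360)                                   # 30 years of monthly values
        years = 1992.0 + months / 12.0
        msl_mm = (3.1 * (years - 1992.0)                          # underlying rise, mm
                  + 25.0 * np.sin(2 * np.pi * months / 12.0)      # seasonal cycle
                  + np.random.default_rng(1).normal(0.0, 15.0, months.size))
        print(f"fitted trend ~ {linear_trend_mm_per_yr(years, msl_mm):.2f} mm/yr")
        print(f"5-yr moving average has {moving_average(msl_mm, 60).size} points")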

  15. DOD Rapid Innovation Program: Some Technologies Have Transitioned to Military Users, but Steps Can Be Taken to Improve Program Metrics and Outcomes

    DTIC Science & Technology

    2015-05-01

    Abbreviations ASD /R&E Assistant Secretary of Defense for Research and Engineering BAA Broad Agency Announcement DOD Department of Defense...solicitation of proposals; • merit-based selection of the most promising cost-effective proposals for funding through contracts, cooperative ...representatives appointed by the military service acquisition executives, Assistant Secretary of Defense for Research and Engineering ( ASD /R&E), and

  16. Millimeter wave sensor requirements for maritime small craft identification

    NASA Astrophysics Data System (ADS)

    Krapels, Keith; Driggers, Ronald G.; Garcia, Jose; Boettcher, Evelyn; Prather, Dennis; Schuetz, Christopher; Samluk, Jesse; Stein, Lee; Kiser, William; Visnansky, Andrew; Grata, Jeremy; Wikner, David; Harris, Russ

    2009-09-01

    Passive millimeter wave (mmW) imagers have improved in terms of resolution, sensitivity, and frame rate. Currently, the Office of Naval Research (ONR), along with the US Army Research, Development and Engineering Command, Communications Electronics Research Development and Engineering Center (RDECOM CERDEC) Night Vision and Electronic Sensor Directorate (NVESD), is investigating the current state of the art of mmW imaging systems. The focus of this study was the field performance of mmW imaging systems for the task of small watercraft/boat identification. First, mmW signatures were collected for a set of eight small watercraft at five different aspects during daylight hours over a 48-hour period in the spring of 2008. Target characteristics were measured, and the characteristic dimension, signatures, and Root Sum Squared of Target's Temperature (RSSΔT) were tabulated. Then an eight-alternative forced-choice (8AFC) human perception experiment was developed and conducted at NVESD, and the ability of observers to discriminate between small watercraft was quantified. Next, the task difficulty criterion, V50, was quantified by applying these data to NVESD's target acquisition models using the Targeting Task Performance (TTP) metric. These parameters can be used to evaluate sensor field performance for Anti-Terrorism / Force Protection (AT/FP) and navigation tasks for the U.S. Navy, as well as for the design and evaluation of passive mmW imaging sensors for both the U.S. Navy and U.S. Coast Guard.

  17. Comparison of measured electron energy spectra for six matched, radiotherapy accelerators.

    PubMed

    McLaughlin, David J; Hogstrom, Kenneth R; Neck, Daniel W; Gibbons, John P

    2018-05-01

    This study compares energy spectra of the multiple electron beams of individual radiotherapy machines, as well as the sets of spectra across multiple matched machines. Also, energy spectrum metrics are compared with central-axis percent depth-dose (PDD) metrics. A lightweight, permanent magnet spectrometer was used to measure energy spectra for seven electron beams (7-20 MeV) on six matched Elekta Infinity accelerators with the MLCi2 treatment head. PDD measurements in the distal falloff region provided R50 and R80-20 metrics in Plastic Water®, which correlated with energy spectrum metrics, peak mean energy (PME) and full-width at half maximum (FWHM). Visual inspection of energy spectra and their metrics showed whether beams on single machines were properly tuned, i.e., FWHM is expected to increase and peak height decrease monotonically with increased PME. Also, PME spacings are expected to be approximately equal for 7-13 MeV beams (0.5-cm R90 spacing) and for 13-16 MeV beams (1.0-cm R90 spacing). Most machines failed these expectations, presumably due to tolerances for initial beam matching (0.05 cm in R90; 0.10 cm in R80-20) and ongoing quality assurance (0.2 cm in R50). Also, comparison of energy spectra or metrics for a single beam energy (six machines) showed outlying spectra. These variations in energy spectra provided ample data spread for correlating PME and FWHM with PDD metrics. Least-squares fits showed that R50 and R80-20 varied linearly and supralinearly with PME, respectively; however, both suggested a secondary dependence on FWHM. Hence, PME and FWHM could serve as surrogates for R50 and R80-20 for beam tuning by the accelerator engineer, possibly being more sensitive (e.g., 0.1 cm in R80-20 corresponded to 2.0 MeV in FWHM). Results of this study suggest a lightweight, permanent magnet spectrometer could be a useful beam-tuning instrument for the accelerator engineer to (a) match electron beams prior to beam commissioning, (b) tune electron beams for the duration of their clinical use, and (c) provide estimates of PDD metrics following machine maintenance. However, a real-time version of the spectrometer is needed to be practical. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
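
    The least-squares correlations reported above (e.g., R50 varying linearly with PME) amount to simple regression on paired metric values; a minimal sketch follows. The numbers are hypothetical placeholders, not data from this study.

      import numpy as np

      # Hypothetical paired measurements (not data from the study):
      pme_mev = np.array([7.1, 9.0, 10.9, 12.8, 15.9, 18.1, 20.2])   # peak mean energy (MeV)
      r50_cm  = np.array([2.9, 3.6, 4.4, 5.1, 6.4, 7.3, 8.1])        # R50 in water-equivalent plastic (cm)

      # Linear least-squares fit R50 = a * PME + b
      a, b = np.polyfit(pme_mev, r50_cm, 1)
      residuals = r50_cm - (a * pme_mev + b)
      print(f"R50 ~ {a:.3f}*PME + {b:.3f}  (RMS residual {np.sqrt(np.mean(residuals**2)):.3f} cm)")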

  18. A web-based rapid assessment tool for production publishing solutions

    NASA Astrophysics Data System (ADS)

    Sun, Tong

    2010-02-01

    Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task that involves substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.

  19. Experimental Investigation of Actuators for Flow Control in Inlet Ducts

    NASA Astrophysics Data System (ADS)

    Vaccaro, John; Elimelech, Yossef; Amitay, Michael

    2010-11-01

    Compact inlets, which implement curved flow paths to the compressor face, are attractive to aircraft designers. These curved flow paths can be employed for multiple reasons, one of which is to connect the air intake to an engine embedded in the aircraft body. A compromise must be made between the compactness of the inlet and its aerodynamic performance. The aerodynamic purpose of an inlet is to decelerate the oncoming flow before it reaches the engine while minimizing total pressure loss, unsteadiness, and distortion. Low length-to-diameter ratio inlets have a high degree of curvature, which inevitably causes flow separation and secondary flows. Currently, the length of the propulsion system constrains the overall size of Unmanned Air Vehicles (UAVs); thus, smaller, more efficient aircraft could be realized if the propulsion system could be shortened. Therefore, active flow control is studied in a compact (L/D=1.5) inlet to improve performance metrics. Actuation from a spanwise-varying Coanda-type ejector actuator and a hybrid Coanda-type ejector / vortex generator jet actuator is investigated. Special attention will be given to the pressure recovery at the AIP along with unsteady pressure signatures along the inlet surface and at the AIP.

  20. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  1. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
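
    For reference, SPAEF is commonly defined as 1 minus the Euclidean distance of its three components from their ideal value of 1. The sketch below follows that published formulation, but the z-scoring details, bin count, and test fields are our own simplifying assumptions rather than the exact implementation used in the study.

      import numpy as np

      def spaef(obs, sim, bins=100):
          # SPAEF = 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2), where
          # A: Pearson correlation, B: ratio of coefficients of variation,
          # C: histogram overlap of the z-scored fields. Bin count is an assumption.
          obs = np.asarray(obs, float).ravel()
          sim = np.asarray(sim, float).ravel()
          a = np.corrcoef(obs, sim)[0, 1]
          b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
          zo = (obs - obs.mean()) / obs.std()
          zs = (sim - sim.mean()) / sim.std()
          lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
          ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
          hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
          c = np.sum(np.minimum(ho, hs)) / np.sum(ho)
          return 1.0 - np.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (c - 1) ** 2)

      # Perfect agreement returns 1; a noisy copy of the same field scores lower.
      rng = np.random.default_rng(0)
      field = rng.gamma(2.0, 1.0, size=(50, 50))
      print(spaef(field, field), spaef(field, field + rng.normal(0, 0.5, field.shape)))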

  2. Light-optimized growth of cyanobacterial cultures: Growth phases and productivity of biomass and secreted molecules in light-limited batch growth.

    PubMed

    Clark, Ryan L; McGinley, Laura L; Purdy, Hugh M; Korosh, Travis C; Reed, Jennifer L; Root, Thatcher W; Pfleger, Brian F

    2018-03-27

    Cyanobacteria are photosynthetic microorganisms whose metabolism can be modified through genetic engineering for production of a wide variety of molecules directly from CO2, light, and nutrients. Diverse molecules have been produced in small quantities by engineered cyanobacteria to demonstrate the feasibility of photosynthetic biorefineries. Consequently, there is interest in engineering these microorganisms to increase titer and productivity to meet industrial metrics. Unfortunately, differing experimental conditions and cultivation techniques confound comparisons of strains and metabolic engineering strategies. In this work, we discuss the factors governing photoautotrophic growth and demonstrate nutritionally replete conditions in which a model cyanobacterium can be grown to stationary phase with light as the sole limiting substrate. We introduce a mathematical framework for understanding the dynamics of growth and product secretion in light-limited cyanobacterial cultures. Using this framework, we demonstrate how cyanobacterial growth in differing experimental systems can be easily scaled by the volumetric photon delivery rate using the model organisms Synechococcus sp. strain PCC7002 and Synechococcus elongatus strain UTEX2973. We use this framework to predict scaled-up growth and product secretion in 1-L photobioreactors of two strains of Synechococcus PCC7002 engineered for production of L-lactate or L-lysine. The analytical framework developed in this work serves as a guide for future metabolic engineering studies of cyanobacteria to allow better comparison of experiments performed in different experimental systems and to further investigate the dynamics of growth and product secretion. Copyright © 2018 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
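
    The abstract's framework relates light-limited growth to the volumetric photon delivery rate. The toy model below captures only the general idea, an exponential phase that transitions to linear biomass accumulation once essentially all delivered photons are absorbed; the functional form, parameter names, and values are illustrative assumptions, not the authors' equations.

      import numpy as np

      def light_limited_batch(x0, i_del, y_xe, mu_max, t_hours):
          # x0     : initial biomass (g/L)
          # i_del  : volumetric photon delivery rate (mol photons / L / h), assumed constant
          # y_xe   : biomass yield on absorbed photons (g biomass / mol photons)
          # mu_max : maximum specific growth rate (1/h) before light limitation
          # The switch from exponential to linear growth is an illustrative simplification.
          x, dt = [x0], 0.1
          for _ in np.arange(0, t_hours, dt):
              exp_rate = mu_max * x[-1]    # growth if light were not yet limiting
              lin_rate = y_xe * i_del      # growth once all delivered photons are absorbed
              x.append(x[-1] + dt * min(exp_rate, lin_rate))
          return np.array(x)

      traj = light_limited_batch(x0=0.01, i_del=0.005, y_xe=1.0, mu_max=0.2, t_hours=120)
      print(f"final biomass ~ {traj[-1]:.2f} g/L")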

  3. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population

    PubMed Central

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

    Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. The fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity has been recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. EMG signals have been processed to obtain muscle activation patterns. The mean muscular activation pattern derived from the control cohort has been set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. The Gait Metric index is calculated for each trial, where the closer the performance is to the normative muscular activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. It has to be noted that the automatic computational calibration procedure is based on the same number of overground walking trials and the same experimental set-up as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. It has been demonstrated to be robust and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context. PMID:29615890

  4. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population.

    PubMed

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

    Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. The fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity has been recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. EMG signals have been processed to obtain muscle activation patterns. The mean muscular activation pattern derived from the control cohort has been set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. The Gait Metric index is calculated for each trial, where the closer the performance is to the normative muscular activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. It has to be noted that the automatic computational calibration procedure is based on the same number of overground walking trials and the same experimental set-up as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. It has been demonstrated to be robust and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context.

  5. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    NASA Astrophysics Data System (ADS)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then building up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of this work is the structure of the framework and what it tells future researchers in terms of where the gaps and limitations exist for developing a better framework. It also identifies metrics that can now be collected as part of future validation efforts for the model.

  6. GPS Device Testing Based on User Performance Metrics

    DOT National Transportation Integrated Search

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  7. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE PAGES

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.; ...

    2017-09-20

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is significantly important. We investigate the application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic application behavior on parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction. Also, on average a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, that illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37–42, 2011). Finally, our results suggest that variable cache line size can result in better performance and can also conserve power.
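
    The utilization statistic quoted above (the fraction of bytes in a cache line touched before eviction) can be illustrated with a toy tracker. The tiny direct-mapped geometry, 4-byte accesses, and synthetic strided trace below are assumptions for demonstration and are unrelated to the authors' Structural Simulation Toolkit component.

      LINE_SIZE = 64   # bytes per cache line (assumed)
      NUM_LINES = 4    # tiny direct-mapped cache, for illustration only

      class LineUtilizationTracker:
          # Track which bytes of each resident cache line are touched before eviction.
          def __init__(self):
              self.resident = {}              # set index -> [tag, set of touched byte offsets]
              self.evicted_utilization = []   # fraction of line bytes touched, per evicted line

          def access(self, addr, size=4):
              line_addr = addr // LINE_SIZE
              index, tag = line_addr % NUM_LINES, line_addr // NUM_LINES
              if index in self.resident and self.resident[index][0] != tag:
                  _, touched = self.resident.pop(index)          # conflict eviction
                  self.evicted_utilization.append(len(touched) / LINE_SIZE)
              entry = self.resident.setdefault(index, [tag, set()])
              offset = addr % LINE_SIZE
              entry[1].update(range(offset, min(offset + size, LINE_SIZE)))

      # A strided trace touches only part of each line before conflict evictions occur
      tracker = LineUtilizationTracker()
      for a in range(0, 64 * 64, 16):
          tracker.access(a)
      evicted = tracker.evicted_utilization
      print(f"average fraction of line bytes touched before eviction: {sum(evicted) / len(evicted):.2f}")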

  8. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is significantly important. We investigate the application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic application behavior on parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction. Also, on average a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, that illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37–42, 2011). Finally, our results suggest that variable cache line size can result in better performance and can also conserve power.

  9. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation of basic psychomotor laparoscopic skills and their correlation with the abilities they are intended to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with a focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight into the relevance of the results shown in this study.
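
    Several of the motion-derived measures named here (path length, average speed, idle time, motion smoothness) can be computed from a tracked instrument-tip trajectory. The sketch below uses common choices, an RMS-jerk smoothness and a fixed idle-speed threshold, as assumptions; it does not reproduce the exact TrEndo processing.

      import numpy as np

      def motion_metrics(xyz_mm, dt_s, idle_speed_mm_s=5.0):
          # Basic motion analysis of an instrument-tip trajectory.
          # xyz_mm: (N, 3) tip positions in millimeters, sampled every dt_s seconds.
          # The idle threshold and jerk-based smoothness definition are assumptions.
          xyz = np.asarray(xyz_mm, float)
          vel = np.gradient(xyz, dt_s, axis=0)
          speed = np.linalg.norm(vel, axis=1)
          path_length = np.sum(np.linalg.norm(np.diff(xyz, axis=0), axis=1))
          avg_speed = speed.mean()
          idle_time = np.sum(speed < idle_speed_mm_s) * dt_s
          jerk = np.gradient(np.gradient(vel, dt_s, axis=0), dt_s, axis=0)
          rms_jerk = np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))  # lower = smoother motion
          return dict(path_length=path_length, avg_speed=avg_speed,
                      idle_time=idle_time, rms_jerk=rms_jerk)

      # Usage with a synthetic noisy spiral trajectory sampled at 50 Hz
      t = np.arange(0, 10, 0.02)
      traj = np.c_[30 * np.cos(t), 30 * np.sin(t), 2 * t]
      traj += np.random.default_rng(1).normal(0, 0.2, traj.shape)
      print(motion_metrics(traj, dt_s=0.02))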

  10. Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. Research Report. ETS RR-17-20

    ERIC Educational Resources Information Center

    Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David

    2017-01-01

    Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among other metrics, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…

  11. JPDO Portfolio Analysis of NextGen

    DTIC Science & Technology

    2009-09-01

    runways. C. Metrics The JPDO Interagency Portfolio & Systems Analysis (IPSA) division continues to coordinate, develop, and refine the metrics and...targets associated with the NextGen initiatives with the partner agencies & stakeholder communities. IPSA has formulated a set of top-level metrics as...metrics are calculated from system performance measures that constitute outputs of the IPSA

  12. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Boring; Roger Lew; Thomas Ulrich

    2014-03-01

    As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate the operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what is satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  13. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and differing features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
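
    The three coverage metrics listed (area-weighted average gap time and the global maxima of the local maximum and minimum gap times) reduce to weighted reductions over a grid of per-location gap statistics. In the sketch below, the cosine-latitude area weighting, the use of the local maximum gap in the average, and the random test grid are assumptions, not details of TOAST.

      import numpy as np

      def gap_metrics(lat_deg, max_gap_hr, min_gap_hr):
          # lat_deg    : (Nlat,) grid latitudes in degrees
          # max_gap_hr : (Nlat, Nlon) local maximum gap time per grid cell (hours)
          # min_gap_hr : (Nlat, Nlon) local minimum gap time per grid cell (hours)
          w = np.cos(np.radians(lat_deg))[:, None] * np.ones_like(max_gap_hr)  # area weights
          area_weighted_avg_gap = np.sum(w * max_gap_hr) / np.sum(w)
          global_max_of_local_max = max_gap_hr.max()
          global_max_of_local_min = min_gap_hr.max()
          return area_weighted_avg_gap, global_max_of_local_max, global_max_of_local_min

      # Toy example on a coarse 19 x 36 latitude/longitude grid
      lats = np.linspace(-90, 90, 19)
      rng = np.random.default_rng(2)
      max_gap = rng.uniform(0.5, 4.0, (19, 36))
      min_gap = rng.uniform(0.0, 0.5, (19, 36))
      print(gap_metrics(lats, max_gap, min_gap))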

  14. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
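
    As an illustration of the simple-deviation metrics favored above, the sketch below computes a multiplicative bias and mean absolute error in log10 space, a common convention for chlorophyll-a match-ups; the log-space formulation and the sample values are assumptions for demonstration, not the SeaWiFS validation data.

      import numpy as np

      def error_metrics(obs, sat):
          # Pair-wise error metrics for a satellite-vs-in situ match-up set,
          # computed in log10 space and back-transformed so they read as
          # multiplicative factors (e.g., bias of 1.2 means 20% high on average).
          lo, ls = np.log10(obs), np.log10(sat)
          bias = 10 ** np.mean(ls - lo)           # multiplicative bias
          mae = 10 ** np.mean(np.abs(ls - lo))    # multiplicative mean absolute error
          return bias, mae

      # Hypothetical chlorophyll-a match-ups (mg m^-3), illustrative only
      obs = np.array([0.08, 0.15, 0.32, 0.75, 1.8, 4.2])
      sat = np.array([0.10, 0.13, 0.40, 0.60, 2.1, 5.0])
      print("bias = %.2f, MAE = %.2f" % error_metrics(obs, sat))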

  15. Alternative Fuels DISI Engine Research - Autoignition Metrics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjoberg, Carl Magnus Goran; Vuilleumier, David

    Improved engine efficiency is required to comply with future fuel economy standards. Alternative fuels have the potential to enable more efficient engines while addressing concerns about energy security. This project contributes to the science base needed by industry to develop highly efficient direct injection spark ignition (DISI) engines that also beneficially exploit the different properties of alternative fuels. Here, the emphasis is on quantifying autoignition behavior for a range of spark-ignited engine conditions, including directly injected boosted conditions. The efficiency of stoichiometrically operated spark ignition engines is often limited by fuel-oxidizer end-gas autoignition, which can result in engine knock. A fuel's knock resistance is assessed empirically by the Research Octane Number (RON) and Motor Octane Number (MON) tests. By clarifying how these two tests relate to the autoignition behavior of conventional and alternative fuel formulations, fuel design guidelines for enhanced engine efficiency can be developed.

  16. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.

  17. New Challenges for Intervertebral Disc Treatment Using Regenerative Medicine

    PubMed Central

    Masuda, Koichi

    2010-01-01

    The development of tissue engineering therapies for the intervertebral disc is challenging due to ambiguities of disease and pain mechanisms in patients, and lack of consensus on preclinical models for safety and efficacy testing. Although the issues associated with model selection for studying orthopedic diseases or treatments have been discussed often, the multifaceted challenges associated with developing intervertebral disc tissue engineering therapies require special discussion. This review covers topics relevant to the clinical translation of tissue-engineered technologies: (1) the unmet clinical need, (2) appropriate models for safety and efficacy testing, (3) the need for standardized model systems, and (4) the translational pathways leading to a clinical trial. For preclinical evaluation of new therapies, we recommend establishing biologic plausibility of efficacy and safety using models of increasing complexity, starting with cell culture, small animals (rats and rabbits), and then large animals (goat and minipig) that more closely mimic nutritional, biomechanical, and surgical realities of human application. The use of standardized and reproducible experimental procedures and outcome measures is critical for judging relative efficacy. Finally, success will hinge on carefully designed clinical trials with well-defined patient selection criteria, gold-standard controls, and objective outcome metrics to assess performance in the early postoperative period. PMID:19903086

  18. Using Quality Attributes to Bridge Systems Engineering Gaps : A Juno Ground Data Systems Case Study

    NASA Technical Reports Server (NTRS)

    Dubon, Lydia P.; Jackson, Maddalena M.; Thornton, Marla S.

    2012-01-01

    The Juno Mission to Jupiter is the second mission selected by the NASA New Frontiers Program. Juno launched August 2011 and will reach Jupiter July 2016. Juno's payload system is composed of nine instruments plus a gravity science experiment. One of the primary functions of the Juno Ground Data System (GDS) is the assembly and distribution of the CFDP (CCSDS File Delivery Protocol) product telemetry, also referred to as raw science data, for eight out of the nine instruments. The GDS accomplishes this with the Instrument Data Pipeline (IDP). During payload integration, the first attempt to exercise the IDP in a flight like manner revealed that although the functional requirements were well understood, the system was unable to meet latency requirements with the as-is heritage design. A systems engineering gap emerged between Juno instrument data delivery requirements and the assumptions behind the heritage flight-ground interactions. This paper describes the use of quality attributes to measure and overcome this gap by introducing a new systems engineering activity, and a new monitoring service architecture that successfully delivered the performance metrics needed to validate Juno IDP.

  19. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting index demonstrated insignificant differences between the spatial patterns of the ground truth and simulated layers, there was considerable inconsistency between the simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of the observed and simulated layers for the performance evaluation procedure, provide a basis for more robust interpretation of the calibration process, and also deepen the modeler's insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
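
    Two of the composition/configuration metrics used above, number of patches and edge density, can be computed for a single class from a binary raster as sketched below. The 4-connectivity rule, the 30 m cell size, and the decision to count edges along the map border are assumptions; FRAGSTATS-style conventions vary.

      import numpy as np
      from scipy import ndimage

      def patch_metrics(binary_map, cell_size_m=30.0):
          # Number of patches and edge density for one land-use class.
          # binary_map: 2-D array of 0/1 (1 = cells of the class of interest).
          # Edge density is total class edge length per landscape area (m per ha).
          arr = np.asarray(binary_map, int)
          _, num_patches = ndimage.label(arr)   # 4-connected patches
          # Class/background boundaries along rows and columns, plus map-border edges
          edges = np.sum(arr[:, 1:] != arr[:, :-1]) + np.sum(arr[1:, :] != arr[:-1, :])
          edges += np.sum(arr[0, :]) + np.sum(arr[-1, :]) + np.sum(arr[:, 0]) + np.sum(arr[:, -1])
          edge_length_m = edges * cell_size_m
          landscape_ha = arr.size * cell_size_m ** 2 / 10_000.0
          return num_patches, edge_length_m / landscape_ha

      # Toy example: a random 100 x 100 raster with ~30% developed cells
      rng = np.random.default_rng(3)
      demo = (rng.random((100, 100)) > 0.7).astype(int)
      print(patch_metrics(demo))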

  20. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
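
    The three evaluation criteria named at the end (SROCC for monotonicity, PLCC for linearity, RMSE for accuracy) can be computed as in the sketch below; the MOS values shown are hypothetical, not results from the paper.

      import numpy as np
      from scipy import stats

      def qoe_model_performance(subjective_mos, predicted_mos):
          # Compare predicted against subjective MOS with three standard criteria:
          # SROCC (monotonicity), PLCC (linearity), and RMSE (absolute accuracy).
          srocc, _ = stats.spearmanr(subjective_mos, predicted_mos)
          plcc, _ = stats.pearsonr(subjective_mos, predicted_mos)
          rmse = np.sqrt(np.mean((np.asarray(subjective_mos) - np.asarray(predicted_mos)) ** 2))
          return srocc, plcc, rmse

      # Hypothetical MOS values on a 1-5 scale (illustrative only)
      subjective = [4.2, 3.1, 2.5, 4.8, 3.9, 1.8]
      predicted  = [4.0, 3.3, 2.2, 4.6, 3.5, 2.1]
      print("SROCC=%.3f  PLCC=%.3f  RMSE=%.3f" % qoe_model_performance(subjective, predicted))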

  1. Defining Exercise Performance Metrics for Flight Hardware Development

    NASA Technical Reports Server (NTRS)

    Beyene, Nahon M.

    2004-01-01

    The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA Astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space exploration.

  2. Snow removal performance metrics : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This document is the final report for the Clear Roads project entitled Snow Removal Performance Metrics. The project team was led by researchers at Washington State University on behalf of Clear Roads, an ongoing pooled fund research effort focused o...

  3. Single-Point Mutation with a Rotamer Library Toolkit: Toward Protein Engineering.

    PubMed

    Pottel, Joshua; Moitessier, Nicolas

    2015-12-28

    Protein engineers have long been hard at work to harness biocatalysts as a natural source of regio-, stereo-, and chemoselectivity in order to carry out chemistry (reactions and/or substrates) not previously achieved with these enzymes. The extreme labor demands and exponential number of mutation combinations have induced computational advances in this domain. The first step in our virtual approach is to predict the correct conformations upon mutation of residues (i.e., rebuilding side chains). For this purpose, we opted for a combination of molecular mechanics and statistical data. In this work, we have developed automated computational tools to extract protein structural information and created conformational libraries for each amino acid dependent on a variable number of parameters (e.g., resolution, flexibility, secondary structure). We have also developed the necessary tool to apply the mutation and optimize the conformation accordingly. For side-chain conformation prediction, we obtained overall average root-mean-square deviations (RMSDs) of 0.91 and 1.01 Å for the 18 flexible natural amino acids within two distinct sets of over 3000 and 1500 side-chain residues, respectively. The commonly used dihedral angle differences were also evaluated and performed worse than the state of the art. These two metrics are also compared. Furthermore, we generated a family-specific library for kinases that produced an average 2% lower RMSD upon side-chain reconstruction and a residue-specific library that yielded a 17% improvement. Ultimately, since our protein engineering outlook involves using our docking software, Fitted/Impacts, we applied our mutation protocol to a benchmarked data set for self- and cross-docking. Our side-chain reconstruction does not hinder our docking software, demonstrating differences in pose prediction accuracy of approximately 2% (RMSD cutoff metric) for a set of over 200 protein/ligand structures. Similarly, when docking to a set of over 100 kinases, side-chain reconstruction (using both general and biased conformation libraries) had minimal detriment to the docking accuracy.
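
    The heavy-atom RMSD used above as the main side-chain reconstruction metric is a straightforward quantity once predicted and reference atoms are matched; a minimal sketch follows, with hypothetical coordinates and the assumption that backbone superposition has already been done.

      import numpy as np

      def sidechain_rmsd(pred_coords, ref_coords):
          # Heavy-atom RMSD between a predicted and a reference side-chain conformation.
          # Both inputs are (N, 3) arrays of matched atom coordinates in angstroms.
          d = np.asarray(pred_coords, float) - np.asarray(ref_coords, float)
          return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

      # Toy example: four matched side-chain atoms (hypothetical coordinates)
      ref = np.array([[1.2, 0.0, 0.3], [2.4, 0.6, 0.9], [3.1, 1.8, 0.7], [4.0, 2.2, 1.5]])
      pred = ref + np.array([[0.3, -0.2, 0.1], [0.5, 0.4, -0.3], [-0.4, 0.6, 0.2], [0.7, -0.5, 0.4]])
      print(f"side-chain RMSD = {sidechain_rmsd(pred, ref):.2f} A")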

  4. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  5. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  6. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  7. Identify and Quantify the Mechanistic Sources of Sensor Performance Variation Between Individual Sensors SN1 and SN2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Aaron A.; Baldwin, David L.; Cinson, Anthony D.

    2014-08-06

    This Technical Letter Report satisfies the M3AR-14PN2301022 milestone, and is focused on identifying and quantifying the mechanistic sources of sensor performance variation between individual 22-element, linear phased-array sensor prototypes, SN1 and SN2. This effort constitutes an iterative evolution that supports the longer term goal of producing and demonstrating a pre-manufacturing prototype ultrasonic probe that possesses the fundamental performance characteristics necessary to enable the development of a high-temperature sodium-cooled fast reactor inspection system. The scope of the work for this portion of the PNNL effort conducted in FY14 includes performing a comparative evaluation and assessment of the performance characteristics of the SN1 and SN2 22-element PA-UT probes manufactured at PNNL. Key transducer performance parameters, such as sound field dimensions, resolution capabilities, frequency response, and bandwidth are used as a metric for the comparative evaluation and assessment of the SN1 and SN2 engineering test units.

  8. Artificial General Intelligence: Concept, State of the Art, and Future Prospects

    NASA Astrophysics Data System (ADS)

    Goertzel, Ben

    2014-12-01

    In recent years a broad community of researchers has emerged, focusing on the original ambitious goals of the AI field - the creation and study of software or hardware systems with general intelligence comparable to, and ultimately perhaps greater than, that of human beings. This paper surveys this diverse community and its progress. Approaches to defining the concept of Artificial General Intelligence (AGI) are reviewed, including mathematical formalisms and engineering- and biology-inspired perspectives. The spectrum of designs for AGI systems includes systems with symbolic, emergentist, hybrid and universalist characteristics. Metrics for general intelligence are evaluated, with the conclusion that, although metrics for assessing the achievement of human-level AGI may be relatively straightforward (e.g. the Turing Test, or a robot that can graduate from elementary school or university), metrics for assessing partial progress remain more controversial and problematic.

  9. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.

  10. Multi-topic assignment for exploratory navigation of consumer health information in NetWellness using formal concept analysis.

    PubMed

    Cui, Licong; Xu, Rong; Luo, Zhihui; Wentz, Susan; Scarberry, Kyle; Zhang, Guo-Qiang

    2014-08-03

    Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F₁ measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness' original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics.
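
    For readers unfamiliar with example-based evaluation of multi-label assignment, the sketch below shows how per-question precision, recall, and F1 are computed and then averaged over questions; the toy topic sets are invented and are not NetWellness data.

```python
def example_based_metrics(true_topics, predicted_topics):
    """true_topics, predicted_topics: lists of sets, one set per question."""
    p_sum = r_sum = f_sum = 0.0
    n = len(true_topics)
    for truth, pred in zip(true_topics, predicted_topics):
        overlap = len(truth & pred)
        p = overlap / len(pred) if pred else 0.0
        r = overlap / len(truth) if truth else 0.0
        f = 2 * overlap / (len(truth) + len(pred)) if (truth or pred) else 0.0
        p_sum, r_sum, f_sum = p_sum + p, r_sum + r, f_sum + f
    return p_sum / n, r_sum / n, f_sum / n

# Toy reference standard and predictions for two questions
truth = [{"diabetes", "nutrition"}, {"asthma"}]
pred = [{"diabetes"}, {"asthma", "allergies"}]
precision, recall, f1 = example_based_metrics(truth, pred)
print(f"P={precision:.3f} R={recall:.3f} F1={f1:.3f}")  # P=0.750 R=0.750 F1=0.667
```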

  11. Multi-topic assignment for exploratory navigation of consumer health information in NetWellness using formal concept analysis

    PubMed Central

    2014-01-01

    Background Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. Methods We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Results Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F1 measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness’ original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Conclusion Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics. PMID:25086916

  12. The CREST Simulation Development Process: Training the Next Generation.

    PubMed

    Sweet, Robert M

    2017-04-01

    The challenges of training and assessing endourologic skill have driven the development of new training systems. The Center for Research in Education and Simulation Technologies (CREST) has developed a team and a methodology to facilitate this development process. Backwards design principles were applied. A panel of experts first defined desired clinical and educational outcomes. Outcomes were subsequently linked to learning objectives. Gross task deconstruction was performed, and the primary domain was classified as primarily involving decision-making, psychomotor skill, or communication. A more detailed cognitive task analysis was performed to elicit and prioritize relevant anatomy/tissues, metrics, and errors. Reference anatomy was created using a digital anatomist and clinician working off of a clinical data set. Three dimensional printing can facilitate this process. When possible, synthetic or virtual tissue behavior and textures were recreated using data derived from human tissue. Embedded sensors/markers and/or computer-based systems were used to facilitate the collection of objective metrics. A learning Verification and validation occurred throughout the engineering development process. Nine endourology-relevant training systems were created by CREST with this approach. Systems include basic laparoscopic skills (BLUS), vesicourethral anastomosis, pyeloplasty, cystoscopic procedures, stent placement, rigid and flexible ureteroscopy, GreenLight PVP (GL Sim), Percutaneous access with C-arm (CAT), Nephrolithotomy (NLM), and a vascular injury model. Mixed modalities have been used, including "smart" physical models, virtual reality, augmented reality, and video. Substantial validity evidence for training and assessment has been collected on systems. An open source manikin-based modular platform is under development by CREST with the Department of Defense that will unify these and other commercial task trainers through the common physiology engine, learning management system, standard data connectors, and standards. Using the CREST process has and will ensure that the systems we create meet the needs of training and assessing endourologic skills.

  13. Evaluation of BLAST-based edge-weighting metrics used for homology inference with the Markov Clustering algorithm.

    PubMed

    Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F

    2015-07-10

    Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25M+ edges connecting the 60k+ KOG sequences in half a minute using less than half a gigabyte of memory.
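
    A minimal sketch of the four edge weights compared in the study, computed from typical BLAST output fields; the exact normalizations (for example, which self-alignment score the BSR uses) follow common usage and may differ in detail from the paper's definitions, and the cap applied when BLAST reports an E-value of zero is an assumption.

```python
import math

def edge_weights(bit_score, evalue, self_score_query, self_score_subject,
                 alignment_length):
    """Return the four BLAST-based edge weights for one pairwise hit."""
    bsr = bit_score / min(self_score_query, self_score_subject)  # bit score ratio
    bal = bit_score / alignment_length                           # bit score over anchored length
    # guard against E-values reported as 0.0 by BLAST (assumed cap)
    nle = -math.log10(evalue) if evalue > 0 else 300.0
    return {"bit": bit_score, "BSR": bsr, "BAL": bal, "NLE": nle}

print(edge_weights(bit_score=210.0, evalue=1e-55,
                   self_score_query=450.0, self_score_subject=430.0,
                   alignment_length=180))
```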

  14. Accounting for the phase, spatial frequency and orientation demands of the task improves metrics based on the visual Strehl ratio.

    PubMed

    Young, Laura K; Love, Gordon D; Smithson, Hannah E

    2013-09-20

    Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.
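
    A rough sketch of a visual-Strehl-style computation: the real part of the aberrated OTF is weighted by a neural contrast sensitivity function (CSF) and normalized by the same weighting of the diffraction-limited OTF. The CSF model and the stand-in OTFs below are placeholders, and the task-specific spatial-frequency and orientation weightings proposed in the paper are not reproduced here.

```python
import numpy as np

def vs_otf(otf_aberrated, otf_diffraction_limited, csf_weight):
    """All inputs are 2-D arrays sampled on the same spatial-frequency grid."""
    num = np.sum(csf_weight * np.real(otf_aberrated))
    den = np.sum(csf_weight * np.real(otf_diffraction_limited))
    return num / den

# Toy example on a small spatial-frequency grid (cycles/degree)
fx, fy = np.meshgrid(np.linspace(-30, 30, 64), np.linspace(-30, 30, 64))
f = np.hypot(fx, fy)
csf = f * np.exp(-0.2 * f)            # crude CSF-like weighting (assumed)
otf_dl = np.exp(-(f / 40.0) ** 2)     # stand-in diffraction-limited OTF
otf_ab = 0.7 * otf_dl                 # stand-in aberrated OTF
print(f"VSOTF ~ {vs_otf(otf_ab, otf_dl, csf):.2f}")  # ~0.70 for this toy case
```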

  15. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang

    2015-01-01

    PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of LOM, LOV and LOC for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict the overall launch vehicles risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA conducted a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the integrated system risk due to ET foam loss in flight. Most recently, a PRA for Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, for meeting product reliability requirements or goals. It is a very broad design-support discipline. It has important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e. the metric) is the probability that an item will perform its intended function(s) for a specified mission profile. In general, the reliability metric can be calculated through the analyses using reliability demonstration and reliability prediction methodologies. Reliability analysis is very critical for understanding component failure mechanisms and in identifying reliability critical design and process drivers. The following sections discuss the PRA process and reliability engineering in detail and provide an application where reliability analysis and PRA were jointly used in a complementary manner to support a Space Shuttle flight risk assessment.
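
    As a hedged illustration of reliability as a figure of merit (the probability that an item performs its intended function over a mission profile), the sketch below assumes a constant-failure-rate exponential model and a simple series system; real launch vehicle reliability models are far more detailed, and the failure rates shown are invented.

```python
import math

def component_reliability(failure_rate_per_hr, mission_hours):
    """Exponential reliability model: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hr * mission_hours)

def series_system_reliability(reliabilities):
    """A series system works only if every component works."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

# Hypothetical mission: three engines plus an avionics string, 500 hours
engines = [component_reliability(2e-5, 500) for _ in range(3)]
avionics = component_reliability(5e-6, 500)
print(f"system reliability: {series_system_reliability(engines + [avionics]):.4f}")
```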

  16. Engineering technology for networks

    NASA Technical Reports Server (NTRS)

    Paul, Arthur S.; Benjamin, Norman

    1991-01-01

    Space Network (SN) modeling and evaluation are presented. The following tasks are included: Network Modeling (developing measures and metrics for SN, modeling of the Network Control Center (NCC), using knowledge acquired from the NCC to model the SNC, and modeling the SN); and Space Network Resource scheduling.

  17. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to performance metrics definition of the important class of recent Homeland Security and defense systems called binary sensors, including both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metrics parameter of the binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach, within the broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
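
    A minimal sketch of the Bayesian PPV calculation for a binary sensor, combining sensitivity, specificity, and the prior probability of a threat under the assumed CONOPS; the numeric values are illustrative only.

```python
def positive_predictive_value(sensitivity, specificity, prior):
    """PPV via Bayes' rule: P(threat | alarm)."""
    true_pos = sensitivity * prior
    false_pos = (1.0 - specificity) * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# A highly sensitive and specific sensor still yields a modest PPV when the
# threat prior is very low -- the core point of the Bayesian metric.
print(f"PPV = {positive_predictive_value(0.99, 0.99, prior=1e-3):.3f}")  # ~0.090
```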

  18. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d⁻¹. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.
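
    For context, the sketch below shows how satellite-based ET estimates are commonly evaluated against eddy covariance measurements using RMSE and relative bias; the numbers are made up and do not reproduce the study's results.

```python
import numpy as np

def rmse(estimated, observed):
    estimated, observed = np.asarray(estimated), np.asarray(observed)
    return float(np.sqrt(np.mean((estimated - observed) ** 2)))

def mean_bias_percent(estimated, observed):
    estimated, observed = np.asarray(estimated), np.asarray(observed)
    return float(100.0 * np.mean(estimated - observed) / np.mean(observed))

metric_et = [6.8, 7.4, 8.1, 7.9]   # hypothetical daily ET from METRIC, mm/d
ec_et     = [7.1, 7.7, 8.4, 8.3]   # hypothetical EC tower ET, mm/d
print(f"RMSE = {rmse(metric_et, ec_et):.2f} mm/d, "
      f"bias = {mean_bias_percent(metric_et, ec_et):+.1f}%")
```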

  19. Objective measurement of complex multimodal and multidimensional display formats: a common metric for predicting format effectiveness

    NASA Astrophysics Data System (ADS)

    Marshak, William P.; Darkow, David J.; Wesler, Mary M.; Fix, Edward L.

    2000-08-01

    Computer-based display designers have more sensory modes and more dimensions within each sensory modality with which to encode information in a user interface than ever before. This elaboration of information presentation has made measuring display/format effectiveness and predicting display/format performance extremely difficult. A multivariate method has been devised which isolates critical information, physically measures its signal strength, and compares it with other elements of the display, which act like background noise. This Common Metric relates signal-to-noise ratios (SNRs) within each stimulus dimension, then combines SNRs across display modes, dimensions, and cognitive factors to predict display format effectiveness. Examples with their Common Metric assessment and performance validation will be presented, along with the derivation of the metric. Implications of the Common Metric in display design and evaluation will be discussed.

  20. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    NASA Astrophysics Data System (ADS)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
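
    As an illustration of the network-layer metrics the active probes would track, the sketch below computes an RFC 3550-style interarrival jitter estimate and a loss rate from synthetic per-packet transit times; correlating these with RTCP-derived application metrics and host CPU/memory readings (the passive side of the hybrid scheme) is not shown.

```python
def interarrival_jitter(transit_times):
    """RFC 3550 running jitter estimate from per-packet transit times (seconds)."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return jitter

def loss_rate(expected, received):
    return (expected - received) / expected

# Synthetic transit times for consecutive probe packets
transits = [0.052, 0.050, 0.061, 0.049, 0.055, 0.074, 0.051]
print(f"jitter ~ {interarrival_jitter(transits) * 1000:.2f} ms, "
      f"loss = {loss_rate(expected=100, received=97):.1%}")
```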

  1. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    NASA Astrophysics Data System (ADS)

    Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey

    2015-06-01

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure-activity relationships.
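
    The published EZ Metric scoring rules are not reproduced here; the sketch below only illustrates the general idea of combining morbidity and mortality fractions into a single score and ranking materials by it, with an invented weighting and invented data.

```python
def ez_style_score(mortality_fraction, morbidity_fraction, mortality_weight=2.0):
    """Weighted combination of mortality and morbidity (weighting assumed)."""
    return mortality_weight * mortality_fraction + morbidity_fraction

exposures = {
    # nanomaterial: (mortality fraction, morbidity fraction) -- hypothetical
    "Au-NP, surface A": (0.05, 0.10),
    "Au-NP, surface B": (0.30, 0.25),
    "metal oxide NP":   (0.10, 0.40),
}
ranking = sorted(exposures, key=lambda k: ez_style_score(*exposures[k]), reverse=True)
for name in ranking:
    print(f"{name}: score = {ez_style_score(*exposures[name]):.2f}")
```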

  2. Individual reactions to stress predict performance during a critical aviation incident.

    PubMed

    Vine, Samuel J; Uiga, Liis; Lavric, Aureliu; Moore, Lee J; Tsaneva-Atanasova, Krasimira; Wilson, Mark R

    2015-01-01

    Understanding the influence of stress on human performance is of theoretical and practical importance. An individual's reaction to stress predicts their subsequent performance; with a "challenge" response to stress leading to better performance than a "threat" response. However, this contention has not been tested in truly stressful environments with highly skilled individuals. Furthermore, the effect of challenge and threat responses on attentional control during visuomotor tasks is poorly understood. Thus, this study aimed to examine individual reactions to stress and their influence on attentional control, among a cohort of commercial pilots performing a stressful flight assessment. Sixteen pilots performed an "engine failure on take-off" scenario, in a high-fidelity flight simulator. Reactions to stress were indexed via self-report; performance was assessed subjectively (flight instructor assessment) and objectively (simulator metrics); gaze behavior data were captured using a mobile eye tracker, and measures of attentional control were subsequently calculated (search rate, stimulus driven attention, and entropy). Hierarchical regression analyses revealed that a threat response was associated with poorer performance and disrupted attentional control. The findings add to previous research showing that individual reactions to stress influence performance and shed light on the processes through which stress influences performance.

  3. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    PubMed

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in the virtual reality simulator correlated with experience or with the basic laparoscopic skills score. All augmented reality metrics (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.

  4. Are we allowing impact factor to have too much impact: The need to reassess the process of academic advancement in pediatric cardiology?

    PubMed

    Loomba, Rohit S; Anderson, Robert H

    2018-03-01

    Impact factor has been used as a metric by which to gauge scientific journals for several years. Although meant to describe the overall performance of a journal, impact factor has also become a metric used to gauge individual performance. This holds true in pediatric cardiology, where many divisions use the impact factor of the journals in which an individual has published to help determine that individual's academic achievement. This, in turn, can affect the individual's promotion through the academic ranks. We review the purpose of impact factor, its strengths and weaknesses, discuss why impact factor is not a fair metric to apply to individuals, and offer alternative means by which to gauge individual performance for academic promotion. © 2018 Wiley Periodicals, Inc.

  5. Sustainability Metrics of a Small Scale Turbojet Engine

    NASA Astrophysics Data System (ADS)

    Ekici, Selcuk; Sohret, Yasin; Coban, Kahraman; Altuntas, Onder; Karakoc, T. Hikmet

    2018-05-01

    Over the last decade, sustainable energy consumption has attracted the attention of scientists and researchers. The current paper presents sustainability indicators of a small scale turbojet engine, operated on micro-aerial vehicles, for discussion of the sustainable development of the aviation industry from a different perspective. Experimental data was obtained from an engine at full power load and utilized to conduct an exergy-based sustainability analysis. Exergy efficiency, waste exergy ratio, recoverable exergy ratio, environmental effect factor, exergy destruction factor and exergetic sustainability index are evaluated as exergetic sustainability indicators of the turbojet engine under investigation in the current study. The exergy efficiency of the small scale turbojet engine is calculated as 27.25 % whereas the waste exergy ratio, the exergy destruction factor and the sustainability index of the engine are found to be 0.9756, 0.5466 and 0.2793, respectively.

  6. Sharpening the focus on occupational safety and health in nanotechnology.

    PubMed

    Schulte, Paul; Geraci, Charles; Zumwalde, Ralph; Hoover, Mark; Castranova, Vincent; Kuempel, Eileen; Murashov, Vladimir; Vainio, Harri; Savolainen, Kai

    2008-12-01

    Increasing numbers of workers are involved with the production, use, distribution, and disposal of nanomaterials. At the same time, there is a growing number of reports of adverse biological effects of engineered nanoparticles in test systems. It is useful, at this juncture, to identify critical questions that will help address knowledge gaps concerning the potential occupational hazards of these materials. The questions address (i) hazard classification of engineered nanoparticles, (ii) exposure metrics, (iii) the actual exposures to the different engineered nanoparticles in the workplace, (iv) the limits of engineering controls and personal protective equipment with respect to engineered nanoparticles, (v) the kinds of surveillance programs that may be required at workplaces to protect potentially exposed workers, (vi) whether exposure registers should be established for workers potentially exposed to engineered nanoparticles, and (vii) whether engineered nanoparticles should be treated as "new" substances and evaluated for safety and hazards.

  7. Evaluating true BCI communication rate through mutual information and language models.

    PubMed

    Speier, William; Arnold, Corey; Pouratian, Nader

    2013-01-01

    Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to making progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
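
    For contrast with the proposed mutual-information metric, the sketch below implements the conventional bits-per-selection formula that embodies exactly the assumptions the abstract questions (equiprobable characters, memoryless selection, uniformly random errors); the 36-character matrix and 15-second selection time are illustrative assumptions, and the paper's language-model-based metric is not reproduced.

```python
import math

def conventional_bits_per_selection(n_choices, accuracy):
    """Classic BCI bit-rate formula assuming equiprobable characters and
    uniformly distributed errors."""
    n, p = n_choices, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = conventional_bits_per_selection(n_choices=36, accuracy=0.85)
print(f"{bits:.2f} bits/selection, "
      f"{bits * 60 / 15:.1f} bits/min at 15 s per selection")
```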

  8. On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor...

  9. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    PubMed

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

    Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy to improve laboratory processes by addressing errors after they are identified. The aim of this study is to evaluate errors in the quality control of the analytical phase of the laboratory system using the sigma metric. For this purpose, sigma metric analysis was performed for analytes using internal and external quality control as quality indicators. Results of the sigma metric analysis were used to identify gaps and the need for modification in the laboratory quality control strategy. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered as 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes, the sigma metric was <3. The lowest value for sigma was found for chloride (1.1) at L2. The highest value of sigma was found for creatinine (10.1) at L3. HDL was found with the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, the application of Westgard sigma rules provided a practical solution for an improved and focused design of the QC procedure.
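
    The underlying calculation is the standard clinical laboratory sigma metric, sigma = (TEa − |bias|) / CV, with all terms in percent. The analyte values in the sketch below are hypothetical and are not the study's data.

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    """sigma = (allowable total error - |bias|) / imprecision (CV)."""
    return (tea_percent - abs(bias_percent)) / cv_percent

analytes = {
    # analyte: (allowable total error %, bias %, CV %) -- hypothetical values
    "glucose":  (10.0, 1.5, 2.0),
    "chloride": (5.0, 1.2, 3.5),
}
for name, (tea, bias, cv) in analytes.items():
    s = sigma_metric(tea, bias, cv)
    flag = "acceptable (>=3 sigma)" if s >= 3 else "needs stricter QC"
    print(f"{name}: sigma = {s:.1f} -> {flag}")
```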

  10. Validation of a Quality Management Metric

    DTIC Science & Technology

    2000-09-01

    A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM, applying the QMM scores to provide feedback

  11. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  12. Modeling student success in engineering education

    NASA Astrophysics Data System (ADS)

    Jin, Qu

    In order for the United States to maintain its global competitiveness, the long-term success of our engineering students in specific courses, programs, and colleges is now, more than ever, an extremely high priority. Numerous studies have focused on factors that impact student success, namely academic performance, retention, and/or graduation. However, only a limited number of works have systematically developed models to investigate important factors and to predict student success in engineering. Therefore, this research presents three separate but highly connected investigations to address this gap. The first investigation involves explaining and predicting engineering students' success in Calculus I courses using statistical models. The participants were more than 4000 first-year engineering students (cohort years 2004 - 2008) who enrolled in Calculus I courses during the first semester in a large Midwestern university. Predictions from the statistical models were proposed for placing engineering students into calculus courses. Success rates in Calculus IA improved by 12% when placement was based on predictions from the developed models rather than on the traditional placement method. The results showed that these statistical models provided a more accurate calculus placement method than traditional placement methods and helped improve success rates in those courses. In the second investigation, multi-outcome and single-outcome neural network models were designed to understand and to predict first-year retention and first-year GPA of engineering students. The participants were more than 3000 first-year engineering students (cohort years 2004 - 2005) enrolled in a large Midwestern university. The independent variables included both high school academic performance factors and affective factors measured prior to entry. The prediction performances of the multi-outcome and single-outcome models were comparable. The ability to predict cumulative GPA at the end of an engineering student's first year of college was about a half of a grade point for both models. The predictors of retention and cumulative GPA, while similar, differ in that high school academic metrics play a more important role in predicting cumulative GPA, whereas the affective measures play a more important role in predicting retention. In the last investigation, multi-outcome neural network models were used to understand and to predict engineering students' retention, GPA, and graduation from entry to departure. The participants were more than 4000 engineering students (cohort years 2004 - 2006) enrolled in a large Midwestern university. Different patterns of important predictors were identified for GPA, retention, and graduation. Overall, this research explores the feasibility of using modeling to enhance a student's educational experience in engineering. Student success modeling was used to identify the most important cognitive and affective predictors for a student's first calculus course, retention, GPA, and graduation. The results suggest that the statistical modeling methods have great potential to assist decision making and help ensure student success in engineering education.
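
    A hedged sketch of the general modeling idea: predicting first-year retention from pre-entry academic and affective measures with a logistic regression and a small neural network. The synthetic data, feature choices, and use of scikit-learn are assumptions for illustration and do not reproduce the study's models or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(3.4, 0.4, n),   # high school GPA (assumed feature)
    rng.normal(27, 4, n),      # standardized test score (assumed feature)
    rng.normal(0, 1, n),       # affective measure, e.g. motivation scale (assumed)
])
# Synthetic retention outcome loosely driven by the features
logit = 0.8 * (X[:, 0] - 3.4) + 0.05 * (X[:, 1] - 27) + 0.4 * X[:, 2] + 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit both model families on the first 400 students, test on the rest
for model in (LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)):
    model.fit(X[:400], y[:400])
    print(type(model).__name__, "holdout accuracy:",
          round(model.score(X[400:], y[400:]), 2))
```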

  13. Control Limits for Building Energy End Use Based on Engineering Judgment, Frequency Analysis, and Quantile Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henze, Gregor P.; Pless, Shanti; Petersen, Anya

    2014-02-01

    Approaches are needed to continuously characterize the energy performance of commercial buildings to allow for (1) timely response to excess energy use by building operators; and (2) building occupants to develop energy awareness and to actively engage in reducing energy use. Energy information systems, often involving graphical dashboards, are gaining popularity in presenting energy performance metrics to occupants and operators in a (near) real-time fashion. Such an energy information system, called Building Agent, has been developed at NREL and incorporates a dashboard for public display. Each building is, by virtue of its purpose, location, and construction, unique. Thus, assessing building energy performance is possible only in a relative sense, as comparison of absolute energy use out of context is not meaningful. In some cases, performance can be judged relative to average performance of comparable buildings. However, in cases of high-performance building designs, such as NREL's Research Support Facility (RSF) discussed in this report, relative performance is meaningful only when compared to historical performance of the facility or to a theoretical maximum performance of the facility as estimated through detailed building energy modeling.

  14. Curvature, metric and parametrization of origami tessellations: theory and application to the eggbox pattern.

    PubMed

    Nassar, H; Lebée, A; Monasse, L

    2017-01-01

    Origami tessellations are particular textured morphing shell structures. Their unique folding and unfolding mechanisms on a local scale aggregate and bring on large changes in shape, curvature and elongation on a global scale. The existence of these global deformation modes allows for origami tessellations to fit non-trivial surfaces thus inspiring applications across a wide range of domains including structural engineering, architectural design and aerospace engineering. The present paper suggests a homogenization-type two-scale asymptotic method which, combined with standard tools from differential geometry of surfaces, yields a macroscopic continuous characterization of the global deformation modes of origami tessellations and other similar periodic pin-jointed trusses. The outcome of the method is a set of nonlinear differential equations governing the parametrization, metric and curvature of surfaces that the initially discrete structure can fit. The theory is presented through a case study of a fairly generic example: the eggbox pattern. The proposed continuous model predicts correctly the existence of various fittings that are subsequently constructed and illustrated.

  15. Curvature, metric and parametrization of origami tessellations: theory and application to the eggbox pattern

    NASA Astrophysics Data System (ADS)

    Nassar, H.; Lebée, A.; Monasse, L.

    2017-01-01

    Origami tessellations are particular textured morphing shell structures. Their unique folding and unfolding mechanisms on a local scale aggregate and bring on large changes in shape, curvature and elongation on a global scale. The existence of these global deformation modes allows for origami tessellations to fit non-trivial surfaces thus inspiring applications across a wide range of domains including structural engineering, architectural design and aerospace engineering. The present paper suggests a homogenization-type two-scale asymptotic method which, combined with standard tools from differential geometry of surfaces, yields a macroscopic continuous characterization of the global deformation modes of origami tessellations and other similar periodic pin-jointed trusses. The outcome of the method is a set of nonlinear differential equations governing the parametrization, metric and curvature of surfaces that the initially discrete structure can fit. The theory is presented through a case study of a fairly generic example: the eggbox pattern. The proposed continuous model predicts correctly the existence of various fittings that are subsequently constructed and illustrated.

  16. Organized DFM

    NASA Astrophysics Data System (ADS)

    Sato, Takashi; Honma, Michio; Itoh, Hiroyuki; Iriki, Nobuyuki; Kobayashi, Sachiko; Miyazaki, Norihiko; Onodera, Toshio; Suzuki, Hiroyuki; Yoshioka, Nobuyuki; Arima, Sumika; Kadota, Kazuya

    2009-04-01

    The category and objective of DFM production management are shown. DFM is not limited to an activity within a particular unit process in design or processing. A new framework for DFM is required. DFM should be a total solution for the common problems of all processes. These solutions must be linked to one another organically. After passing through each process on the manufacturing platform, the quality of the final products is guaranteed and the products are shipped to market. The information platform is layered with DFM, APC, and AEC. Advanced DFM is not DFM for partial optimization of the lithography process and the design; it should be Organized DFM. It is managed with a high level of organizational IQ. The interim quality between each step of the flow should be visualized. DFM becomes quality engineering when it is Organized DFM and common quality metrics are provided. DFM becomes quality engineering through effective implementation of common industrial metrics and standardized technology. DFM is differential technology, but can leverage standards for efficient development.

  17. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    ERIC Educational Resources Information Center

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  18. Engine-Out Capabilities Assessment of Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holladay, Jon; Baggett, Keithe; Thrasher, Chad; Bellamy, K. Scott; Feldman, Stuart

    2012-01-01

    Engine-out (EO) is a condition that might occur during flight due to the failure of one or more engines. Protection against this occurrence can be called engine-out capability (EOC), which can significantly improve loss-of-mission probability at the expense of reduced performance and increased cost. A standardized engine-out capability has not been studied exhaustively as it pertains to space launch systems. This work presents results for a specific vehicle design with specific engines, but also uniquely provides an approach to realizing the necessity of EOC for any launch vehicle system design. A derived top-level approach to engine-out philosophy for a heavy lift launch vehicle is given herein, based on an historical assessment of launch vehicle capabilities. The methodology itself is not intended to present a best path forward, but instead provides three parameters for assessment of a particular vehicle. Of the several parameters affected by this EOC, the three parameters of interest in this research are reliability (Loss of Mission (LOM) and Loss of Crew (LOC)), vehicle performance, and cost. The intent of this effort is to provide insight into the impacts of EO capability on these parameters. The effects of EOC on reliability, performance and cost are detailed, including how these important launch vehicle metrics can be combined to assess what could be considered overall launch vehicle affordability. In support of achieving the first critical milestone (Mission Concept Review) in the development of the Space Launch System (SLS), a team assessed two-stage, large-diameter vehicles that utilized liquid oxygen (LOX)-RP propellants in the First Stage and LOX/LH2 propellant in the Upper Stage. With multiple large thrust-class engines employed on the stages, engine-out capability could be a significant driver of mission success. It was determined that LOM results improve by a factor of five when assuming EOC for both the Core Stage (CS, first stage) and the Upper Stage (US), for a reference launch vehicle with 5 RP engines on the CS and 3 LOX/LH2 engines on the US. The benefit of adding both CS and US engine-out capability is significant. When adding EOC for only the first or the second stage, there is less than a 20% benefit. Performance analysis has shown that if the vehicle is not protected for EO during the first part of the flight and only protected in the later part of the flight, there is a diminishing performance penalty, as indicated by failures occurring in the first stage at different times. This work did not consider any options to abort. While adding an engine for EOC drives cost upward, the impact depends on the number of needed engines manufactured per year and the launch manifest. There are significant cost savings if multiple flights occur within one year. Flying two flights per year would cost approximately $4,000 per pound less than the same configuration with one flight per year, assuming both CS and US EOC. The cost is within 15% of the cost of one flight per year with no engine-out capability for the same vehicle. This study can be extended to other launch vehicles. While the numbers given in this paper are specific to a certain vehicle configuration, the process requires only a high level of data to allow an analyst to draw conclusions. The weighting of each of the identified parameters will determine the optimization of each launch vehicle. The results of this engine-out assessment provide a means to understand this optimization while maintaining an unbiased perspective.

  19. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high-quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.

  20. Highway safety performance metrics and emergency response in an advanced transportation environment : final report.

    DOT National Transportation Integrated Search

    2016-06-01

    Traditional highway safety performance metrics have been largely based on fatal crashes and more recently serious injury crashes. In the near future however, there may be less severe motor vehicle crashes due to advances in driver assistance systems,...

  1. Optimization of planar self-collimating photonic crystals.

    PubMed

    Rumpf, Raymond C; Pazos, Javier J

    2013-07-01

    Self-collimation in photonic crystals has received considerable attention in the literature, partly due to recent interest in silicon photonics, yet no performance metrics have been proposed. This paper proposes a figure of merit (FOM) for self-collimation and outlines a methodical approach for calculating it. Performance metrics include bandwidth, angular acceptance, strength, and an overall FOM. Two key contributions of this work are the performance metrics themselves and the finding that the optimum frequency for self-collimation is not at the inflection point. The FOM is used to optimize a planar photonic crystal composed of a square array of cylinders. Conclusions are drawn about how the refractive indices and fill fraction of the lattice impact each of the performance metrics. The optimization is demonstrated by simulating two spatially variant self-collimating photonic crystals, where one has a high FOM and the other has a low FOM. This work gives optical designers tremendous insight into how to design and optimize robust self-collimating photonic crystals, which promises many applications in silicon photonics and integrated optics.
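
    The paper's FOM definition is not given in the abstract, so the sketch below only illustrates one plausible way to fold bandwidth, angular acceptance, and strength into a single normalized figure; the combination rule and reference values are assumptions for illustration.

```python
def self_collimation_fom(bandwidth, angular_acceptance_deg, strength,
                         ref_bandwidth=0.1, ref_angle_deg=30.0, ref_strength=1.0):
    """Product of normalized performance terms (combination rule assumed)."""
    return ((bandwidth / ref_bandwidth)
            * (angular_acceptance_deg / ref_angle_deg)
            * (strength / ref_strength))

# Compare two hypothetical lattice designs
design_a = self_collimation_fom(bandwidth=0.08, angular_acceptance_deg=25.0, strength=1.2)
design_b = self_collimation_fom(bandwidth=0.05, angular_acceptance_deg=12.0, strength=0.9)
print(f"FOM A = {design_a:.2f}, FOM B = {design_b:.2f}")
```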

  2. Solar Electric Propulsion Vehicle Design Study for Cargo Transfer to Earth-moon L1

    NASA Technical Reports Server (NTRS)

    Sarver-Verhey, Timothy R.; Kerslake, Thomas W.; Rawlin, Vincent K.; Falck, Robert D.; Dudzinski, Leonard J.; Oleson, Steven R.

    2002-01-01

    A design study for a cargo transfer vehicle using solar electric propulsion was performed for NASA's Revolutionary Aerospace Systems Concepts program. Targeted for 2016, the solar electric propulsion (SEP) transfer vehicle is required to deliver a propellant supply module with a mass of approximately 36 metric tons from Low Earth Orbit to the first Earth-Moon libration point (LL1) within 270 days. Following an examination of propulsion and power technology options, a SEP transfer vehicle design was selected that incorporated large-area (approx. 2700 sq m) thin film solar arrays and a clustered engine configuration of eight 50 kW gridded ion thrusters mounted on an articulated boom. Refinement of the SEP vehicle design was performed iteratively to properly estimate the required xenon propellant load for the out-bound orbit transfer. The SEP vehicle performance, including the xenon propellant estimation, was verified via the SNAP trajectory code. Further efforts are underway to extend this system model to other orbit transfer missions.

  3. Performance evaluation of objective quality metrics for HDR image compression

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much wider luminance and contrast range of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the quality-prediction performance of this approach has not been clearly studied. In this paper, we aim to provide a better understanding of the limits and potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
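
    A minimal sketch of the "simpler approach" described above: compute PSNR on perceptually encoded rather than linear luminance. A logarithmic encoding stands in for a perceptual transfer function such as PU; it is an illustrative placeholder, not the encoding used in the paper, and the synthetic images below are not real HDR content.

```python
import numpy as np

def perceptual_encode(luminance_cd_m2):
    """Placeholder perceptual encoding: log10 of clipped linear luminance."""
    return np.log10(np.clip(luminance_cd_m2, 1e-4, None))

def psnr(reference, test, peak):
    mse = np.mean((reference - test) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(0)
hdr_ref = rng.uniform(0.01, 4000.0, size=(64, 64))                 # linear luminance
hdr_cmp = hdr_ref * (1.0 + 0.02 * rng.standard_normal((64, 64)))   # "compressed" copy

enc_ref, enc_cmp = perceptual_encode(hdr_ref), perceptual_encode(hdr_cmp)
peak = enc_ref.max() - enc_ref.min()   # dynamic range of the encoded signal
print(f"PSNR on encoded luminance: {psnr(enc_ref, enc_cmp, peak):.1f} dB")
```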

  4. Traveler oriented traffic performance metrics using real time traffic data from the Midtown-in-Motion (MIM) project in Manhattan, NY.

    DOT National Transportation Integrated Search

    2013-10-01

    In a congested urban street network the average traffic speed is an inadequate metric for measuring : speed changes that drivers can perceive from changes in traffic control strategies. : A driver oriented metric is needed. Stop frequency distrib...

  5. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests showing that even the primitive rule-based test data generation prototype is significantly better than random data generation are performed. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  6. Cloud computing approaches for prediction of ligand binding poses and pathways.

    PubMed

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S

    2015-01-22

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design.

  7. Vibration control in smart coupled beams subjected to pulse excitations

    NASA Astrophysics Data System (ADS)

    Pisarski, Dominik; Bajer, Czesław I.; Dyniewicz, Bartłomiej; Bajkowski, Jacek M.

    2016-10-01

    In this paper, a control method to stabilize the vibration of adjacent structures is presented. The control is realized by changes of the stiffness parameters of the structure's couplers. A pulse excitation applied to the coupled adjacent beams is imposed as the kinematic excitation. For such a representation, the designed control law provides the best rate of energy dissipation. By means of a stability analysis, the performance in different structural settings is studied. The efficiency of the proposed strategy is examined via numerical simulations. In terms of the assumed energy metric, the controlled structure outperforms its passively damped equivalent by over 50 percent. The functionality of the proposed control strategy should attract the attention of practising engineers who seek solutions to upgrade existing damping systems.

  8. Supporting the analysis of ontology evolution processes through the combination of static and dynamic scaling functions in OQuaRE.

    PubMed

    Duque-Ramos, Astrid; Quesada-Martínez, Manuel; Iniesta-Moreno, Miguela; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2016-10-17

    The biomedical community has now developed a significant number of ontologies. The curation of biomedical ontologies is a complex task and biomedical ontologies evolve rapidly, so new versions are regularly and frequently published in ontology repositories. The implication is a high number of ontology versions over a short time span. Given this level of activity, ontology designers need to be supported in the effective management of the evolution of biomedical ontologies as the different changes may affect the engineering and quality of the ontology. This is why there is a need for methods that contribute to the analysis of the effects of changes and evolution of ontologies. In this paper we approach this issue from the ontology quality perspective. In previous work we have developed an ontology evaluation framework based on quantitative metrics, called OQuaRE. Here, OQuaRE is used as a core component in a method that enables the analysis of the different versions of biomedical ontologies using the quality dimensions included in OQuaRE. Moreover, we describe and use two scales for evaluating the changes between the versions of a given ontology. The first one is the static scale used in OQuaRE and the second one is a new, dynamic scale, based on the observed values of the quality metrics of a corpus defined by all the versions of a given ontology (life-cycle). In this work we explain how OQuaRE can be adapted for understanding the evolution of ontologies. Its use has been illustrated with the ontology of bioinformatics operations, types of data, formats, and topics (EDAM). The two scales included in OQuaRE provide complementary information about the evolution of the ontologies. The application of the static scale, which is the original OQuaRE scale, to the versions of the EDAM ontology reveals a design based on good ontological engineering principles. The application of the dynamic scale has enabled a more detailed analysis of the evolution of the ontology, measured through differences between versions. The statistics of change based on the OQuaRE quality scores make it possible to identify key versions where some changes in the engineering of the ontology triggered a change from the OQuaRE quality perspective. In the case of EDAM, this study led us to identify the fifth version of the ontology as having the largest impact on its quality metrics when comparative analyses between pairs of consecutive versions are performed.
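    A hedged sketch of the static versus dynamic scaling idea: a static scale maps a metric value to a 1-5 score using fixed thresholds, while a dynamic scale rescales against the values observed across all versions of one ontology (its life-cycle corpus). The metric values and thresholds below are hypothetical, not OQuaRE's actual tables.

      # Hedged sketch: static (fixed thresholds) vs. dynamic (life-cycle range)
      # scaling of a single quality metric across ontology versions.
      # Metric names, values and thresholds are hypothetical.
      versions = ["v1", "v2", "v3", "v4", "v5"]
      metric_by_version = {"v1": 0.42, "v2": 0.44, "v3": 0.47, "v4": 0.61, "v5": 0.83}

      def static_score(value, thresholds=(0.2, 0.4, 0.6, 0.8)):
          """Fixed thresholds -> score in 1..5 (illustrative, not OQuaRE's tables)."""
          return 1 + sum(value > t for t in thresholds)

      def dynamic_score(value, observed):
          """Rescale against the observed range of the ontology's life-cycle."""
          lo, hi = min(observed), max(observed)
          return 1 + 4 * (value - lo) / (hi - lo) if hi > lo else 3

      observed = list(metric_by_version.values())
      for v in versions:
          val = metric_by_version[v]
          print(f"{v}: metric={val:.2f}  static={static_score(val)}  "
                f"dynamic={dynamic_score(val, observed):.2f}")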

  9. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty

    PubMed Central

    Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance. PMID:27152838
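    A hedged sketch of the benchmarking idea described above: regress a log-transformed performance metric on covariates such as academic age and research appointment, then compare individuals by their standardized residuals. The data, covariates, and coefficients below are synthetic illustrations, not the study's models.

      # Hedged sketch: benchmark individuals by standardized residuals from a
      # regression of a (log) performance metric on covariates.  Synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      age = rng.uniform(1, 35, n)            # years since Ph.D.
      research_pct = rng.uniform(20, 90, n)  # % appointment devoted to research
      citations = np.exp(0.08 * age + 0.01 * research_pct + rng.normal(0, 0.6, n))

      # Ordinary least squares on log(citations).
      X = np.column_stack([np.ones(n), age, research_pct])
      y = np.log(citations)
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ beta

      # Standardize residuals; positive values indicate above-expected
      # performance after accounting for the covariates.
      std_resid = (resid - resid.mean()) / resid.std(ddof=X.shape[1])
      print("Top 5 residuals (indices):", np.argsort(std_resid)[-5:][::-1])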

  10. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.

    PubMed

    Swihart, Robert K; Sundaram, Mekala; Höök, Tomas O; DeWoody, J Andrew; Kellner, Kenneth F

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the "law of constant ratios", used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance.

  11. Silicon production process evaluations

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Chemical engineering analyses involving the preliminary process design of a plant (1,000 metric tons/year capacity) to produce silicon via the technology under consideration were accomplished. Major activities in the chemical engineering analyses included base case conditions, reaction chemistry, process flowsheet, material balance, energy balance, property data, equipment design, major equipment list, and production labor, with the results forwarded for economic analysis. The process design package provided detailed data for raw materials, utilities, major process equipment and production labor requirements necessary for polysilicon production in each process.

  12. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.

  13. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  14. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  15. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken a center stage in Condition Based Maintenance (CBM) where it is desired to estimate Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently and that, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
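    A hedged sketch of a prognostics-style check in the spirit described above: test whether remaining-useful-life (RUL) predictions fall within an alpha band around the true RUL at several prediction times and report a simple relative accuracy. These are illustrative formulas, not the paper's exact metric definitions.

      # Hedged sketch: alpha-band check and relative accuracy of RUL predictions
      # at a few prediction times.  Values and formulas are illustrative only.
      end_of_life = 100.0
      prediction_times = [40, 60, 80, 90]
      predicted_rul = {40: 70.0, 60: 45.0, 80: 22.0, 90: 8.0}
      alpha = 0.2  # +/- 20% band around the true RUL

      for t in prediction_times:
          true_rul = end_of_life - t
          pred = predicted_rul[t]
          within_band = abs(pred - true_rul) <= alpha * true_rul
          rel_accuracy = 1.0 - abs(true_rul - pred) / true_rul
          print(f"t={t}: true RUL={true_rul:.0f}, predicted={pred:.0f}, "
                f"within alpha band={within_band}, relative accuracy={rel_accuracy:.2f}")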

  16. Determination of a Screening Metric for High Diversity DNA Libraries.

    PubMed

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and, therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
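    A minimal sketch of a coupon-collector style estimate for an idealized library of n equally frequent variants (real libraries are skewed, which the library-specific metric accounts for): the expected number of clones screened to observe a fraction f of the variants is n(H_n - H_(1-f)n), where H_k is the k-th harmonic number.

      # Coupon-collector style oversampling estimate for an idealized,
      # equally-frequent variant library (illustrative; real libraries are skewed).
      import math

      def harmonic(k):
          return sum(1.0 / i for i in range(1, int(k) + 1))

      def expected_screens(n_variants, coverage_fraction):
          remaining = int(round((1.0 - coverage_fraction) * n_variants))
          return n_variants * (harmonic(n_variants) - harmonic(remaining))

      n = 10_000
      for f in (0.50, 0.90, 0.95, 0.99):
          screens = expected_screens(n, f)
          print(f"coverage {f:.0%}: ~{screens:,.0f} clones (oversampling {screens / n:.1f}x)")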

  17. Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R; Weber, Jochem W; Jenne, Dale S

    The U.S. Department of Energy's Wave Energy Prize Competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE' which is short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.
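    A small sketch of the ACE ratio defined above: the average climate capture width divided by the characteristic capital expenditure. The numbers are hypothetical; the report defines how each quantity is actually derived for a WEC concept.

      # Hedged sketch of the ACE ratio: average climate capture width (m) divided
      # by a characteristic capital expenditure (here in millions of dollars).
      # All numbers are hypothetical.
      average_climate_capture_width_m = 9.0   # m, averaged over wave climates
      characteristic_capex_musd = 4.5         # $M, structure-based proxy for cost

      ace = average_climate_capture_width_m / characteristic_capex_musd
      print(f"ACE = {ace:.2f} m/$M")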

  18. Bootstrapping Process Improvement Metrics: CMMI Level 4 Process Improvement Metrics in a Level 3 World

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Lewicki, Scott; Morgan, Scott

    2011-01-01

    The measurement techniques for organizations that have achieved the Software Engineering Institute's CMMI Maturity Levels 4 and 5 are well documented. On the other hand, how to effectively measure when an organization is Maturity Level 3 is less well understood, especially when there is no consistency in tool use and there is extensive tailoring of the organizational software processes. Most organizations fail in their attempts to generate, collect, and analyze standard process improvement metrics under these conditions. But at JPL, NASA's prime center for deep space robotic exploration, we have a long history of proving there is always a solution: It just may not be what you expected. In this paper we describe the wide variety of qualitative and quantitative techniques we have been implementing over the last few years, including the various approaches used to communicate the results to both software technical managers and senior managers.

  19. What are the Ingredients of a Scientifically and Policy-Relevant Hydrologic Connectivity Metric?

    NASA Astrophysics Data System (ADS)

    Ali, G.; English, C.; McCullough, G.; Stainton, M.

    2014-12-01

    While the concept of hydrologic connectivity is of significant importance to both researchers and policy makers, there is no consensus on how to express it in quantitative terms. This lack of consensus was further exacerbated by recent rulings of the U.S. Supreme Court that rely on the idea of "significant nexuses": critical degrees of landscape connectivity now have to be demonstrated to warrant environmental protection under the Clean Water Act. Several indicators of connectivity have been suggested in the literature, but they are often computationally intensive and require soil water content information, a requirement that makes them inapplicable over large, data-poor areas for which management decisions are needed. Here our objective was to assess the extent to which the concept of connectivity could become more operational by: 1) drafting a list of potential, watershed-scale connectivity metrics; 2) establishing a list of criteria for ranking the performance of those metrics; 3) testing them in various landscapes. Our focus was on a dozen agricultural Prairie watersheds where the interaction between near-level topography, perennial and intermittent streams, pothole wetlands and man-made drains renders the estimation of connectivity difficult. A simple procedure was used to convert RADARSAT images, collected between 1997 and 2011, into binary maps of saturated versus non-saturated areas. Several pattern-based and graph-theoretic metrics were then computed for a dynamic assessment of connectivity. The metrics' performance was compared with regard to their sensitivity to antecedent precipitation, their correlation with watershed discharge, and their ability to portray aggregation effects. Results show that no single connectivity metric could satisfy all our performance criteria. Graph-theoretic metrics, however, seemed to perform better in pothole-dominated watersheds, thus highlighting the need for region-specific connectivity assessment frameworks.
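    A hedged sketch of one simple pattern-based connectivity metric of the kind surveyed above: the fraction of saturated pixels that belong to the largest connected cluster of a binary saturation map. The map is synthetic, and this is only one of many candidate watershed-scale metrics.

      # Hedged sketch: largest-cluster fraction of a binary saturation map as a
      # simple pattern-based connectivity metric.  The map is synthetic.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(2)
      saturated = rng.random((200, 200)) < 0.35   # toy binary map: True = saturated

      labels, n_clusters = ndimage.label(saturated)
      if n_clusters == 0:
          largest_fraction = 0.0
      else:
          cluster_sizes = np.bincount(labels.ravel())[1:]   # drop background label 0
          largest_fraction = cluster_sizes.max() / saturated.sum()

      print(f"{n_clusters} clusters; largest holds {largest_fraction:.1%} of saturated area")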

  20. A general theory of multimetric indices and their properties

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William

    2012-01-01

    1. Stewardship of biological and ecological resources requires the ability to make integrative assessments of ecological integrity. One of the emerging methods for making such integrative assessments is multimetric indices (MMIs). These indices synthesize data, often from multiple levels of biological organization, with the goal of deriving a single index that reflects the overall effects of human disturbance. Despite the widespread use of MMIs, there is uncertainty about why this approach can be effective. An understanding of MMIs requires a quantitative theory that illustrates how the properties of candidate metrics relate to MMIs generated from those metrics. 2. We present the initial basis for such a theory by deriving the general mathematical characteristics of MMIs assembled from metrics. We then use the theory to derive quantitative answers to the following questions: Is there an optimal number of metrics to comprise an index? How does covariance among metrics affect the performance of the index derived from those metrics? And what are the criteria to decide whether a given metric will improve the performance of an index? 3. We find that the optimal number of metrics to be included in an index depends on the theoretical distribution of signal of the disturbance gradient contained in each metric. For example, if the rank-ordered parameters of a metric-disturbance regression can be described by a monotonically decreasing function, then an optimum number of metrics exists and can often be derived analytically. We derive the conditions by which adding a given metric can be expected to improve an index. 4. We find that the criterion defining such conditions depends nonlinearly on the signal of the disturbance gradient, the noise (error) of the metric, and the correlation of the metric errors. Importantly, we find that correlation among metric errors increases the signal required for the metric to improve the index. 5. The theoretical framework presented in this study provides the basis for understanding the properties of MMIs. It can also be useful throughout the index construction process. Specifically, it can be used to aid understanding of the benefits and limitations of combining metrics into indices; it can inform selection/collection of candidate metrics; and it can be used directly as a decision aid in effective index construction.
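    A small simulation of one result stated above: averaging metrics whose errors are correlated improves an index less than averaging metrics with independent errors. Signal strength, the number of metrics, and the error structure are illustrative.

      # Hedged simulation: correlated metric errors weaken the benefit of
      # averaging metrics into an index.  All parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      n_sites, n_metrics, signal = 500, 8, 1.0
      disturbance = rng.normal(size=n_sites)

      def index_correlation(error_corr):
          # Correlated metric errors built from a shared + independent component.
          shared = rng.normal(size=n_sites)[:, None]
          indep = rng.normal(size=(n_sites, n_metrics))
          errors = np.sqrt(error_corr) * shared + np.sqrt(1 - error_corr) * indep
          metrics = signal * disturbance[:, None] + errors
          index = metrics.mean(axis=1)            # simple MMI: average of metrics
          return np.corrcoef(index, disturbance)[0, 1]

      for rho in (0.0, 0.5, 0.9):
          print(f"error correlation {rho:.1f}: index-disturbance r = {index_correlation(rho):.3f}")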

  1. A Combination of Traditional and Novel Methods Used to Evaluate the Impact of an EVA Glove on Hand Performance

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar; Benson, Elizabeth; England, Scott; Mesloh, Miranda; Thompson, Shelby

    2009-01-01

    The gloved hand is an astronaut's primary means of interacting with the environment, so performance on an EVA is strongly impacted by any restrictions imposed by the glove. As a result, these restrictions have been the subject of study for decades. However, previous studies have generally been unsuccessful in quantifying glove mobility and tactility. Instead, studies have tended to focus on the dexterity, strength and functional performance of the gloved hand. Therefore, it has been difficult to judge the impact of each type of restriction on the glove's overall capability. The lack of basic information on glove mobility, in particular, is related to the difficulty in instrumenting a gloved hand to allow an accurate evaluation. However, the current study aims to develop novel technological capabilities to provide metrics for mobility and tactility that can be used to assess the performance of a glove in a way that could enable designers and engineers to improve upon their current designs. A series of evaluations were performed in ungloved, unpressurized and pressurized (4.3 psi) conditions, to allow a comparison across pressures and to the baseline barehanded condition. In addition, a subset of the testing was also performed with the Thermal Micrometeoroid Garment (TMG) removed. This test case in particular provided some interesting insight into how much of an impact the TMG has on gloved mobility -- in some cases, as much as pressurization of the glove. Previous rule-of-thumb estimates had assumed that the TMG would have a much lower impact on mobility, while these results suggest that an improvement in the TMG could actually have a significant impact on glove performance. Similarly, tactility testing illustrated the impact of glove pressurization on tactility and provided insight on the design of interfaces to the glove. The metrics described in this paper have been used to benchmark the Phase VI EVA glove and to develop requirements for the next generation glove for the Constellation program.

  2. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    PubMed

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single-versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  3. Changing Metrics of Organ Procurement Organization Performance in Order to Increase Organ Donation Rates in the United States.

    PubMed

    Goldberg, D; Kallan, M J; Fu, L; Ciccarone, M; Ramirez, J; Rosenberg, P; Arnold, J; Segal, G; Moritsugu, K P; Nathan, H; Hasz, R; Abt, P L

    2017-12-01

    The shortage of deceased-donor organs is compounded by donation metrics that fail to account for the total pool of possible donors, leading to ambiguous donor statistics. We sought to assess potential metrics of organ procurement organizations (OPOs) utilizing data from the Nationwide Inpatient Sample (NIS) from 2009-2012 and State Inpatient Databases (SIDs) from 2008-2014. A possible donor was defined as a ventilated inpatient death ≤75 years of age, without multi-organ system failure, sepsis, or cancer, whose cause of death was consistent with organ donation. These estimates were compared to patient-level data from chart review from two large OPOs. Among 2,907,658 inpatient deaths from 2009-2012, 96,028 (3.3%) were a "possible deceased-organ donor." The two proposed metrics of OPO performance were: (1) donation percentage (percentage of possible deceased-donors who become actual donors; range: 20.0-57.0%); and (2) organs transplanted per possible donor (range: 0.52-1.74). These metrics allow for comparisons of OPO performance and geographic-level donation rates, and identify areas in greatest need of interventions to improve donation rates. We demonstrate that administrative data can be used to identify possible deceased donors in the US and could be a data source for CMS to implement new OPO performance metrics in a standardized fashion. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
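    A short sketch of the two proposed OPO metrics, donation percentage and organs transplanted per possible donor; the counts are hypothetical.

      # Hedged sketch of the two proposed OPO metrics; counts are hypothetical.
      possible_donors = 1200        # ventilated inpatient deaths meeting criteria
      actual_donors = 420
      organs_transplanted = 1310

      donation_percentage = 100.0 * actual_donors / possible_donors
      organs_per_possible_donor = organs_transplanted / possible_donors
      print(f"donation percentage: {donation_percentage:.1f}%")
      print(f"organs transplanted per possible donor: {organs_per_possible_donor:.2f}")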

  4. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slide and standard deviation are considered. The metrics provide a means of evaluating accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frame). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
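    A hedged sketch of per-frame evaluation against manual ground truth in the spirit of the framework: a true-positive rate for bone pixels and a true-negative rate for boneless regions, summarized by mean and standard deviation across frames. This illustrates the metrics only; it is not the 3D Slicer module's API.

      # Hedged sketch: per-frame TP/TN rates against manual ground truth masks,
      # summarized across a volume.  Masks are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      n_frames, h, w = 30, 64, 64
      ground_truth = rng.random((n_frames, h, w)) < 0.1            # manual bone masks
      auto = ground_truth ^ (rng.random((n_frames, h, w)) < 0.02)  # noisy auto result

      tp_rates, tn_rates = [], []
      for gt, seg in zip(ground_truth, auto):
          tp_rates.append((gt & seg).sum() / max(gt.sum(), 1))
          tn_rates.append((~gt & ~seg).sum() / max((~gt).sum(), 1))

      print(f"bone TP rate: {np.mean(tp_rates):.3f} +/- {np.std(tp_rates):.3f}")
      print(f"boneless TN rate: {np.mean(tn_rates):.3f} +/- {np.std(tn_rates):.3f}")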

  5. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    NASA Technical Reports Server (NTRS)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause for fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time and recovery time and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.

  6. An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN

    PubMed Central

    Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of network and the quality of transmission. A good routing algorithm will use the whole bandwidth of network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX. PMID:25250379

  7. An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.

    PubMed

    Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of network and the quality of transmission. A good routing algorithm will use the whole bandwidth of network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.
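    A minimal sketch of the ETX metric referenced in the two records above, as commonly defined: a link's expected transmission count is 1 / (df * dr), where df and dr are the forward and reverse probe delivery ratios, and a route's metric is the sum of its link ETX values. The delivery ratios below are hypothetical.

      # ETX as commonly defined: link ETX = 1 / (df * dr); path metric = sum of
      # link ETX values.  Delivery ratios are hypothetical.
      def link_etx(delivery_forward, delivery_reverse):
          return 1.0 / (delivery_forward * delivery_reverse)

      # Two candidate routes described by (df, dr) per hop.
      route_a = [(0.95, 0.90), (0.80, 0.85)]
      route_b = [(0.99, 0.97), (0.70, 0.60), (0.98, 0.95)]

      for name, route in (("A", route_a), ("B", route_b)):
          total = sum(link_etx(df, dr) for df, dr in route)
          print(f"route {name}: path ETX = {total:.2f}")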

  8. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
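    A minimal sketch contrasting the cosine and Euclidean distance metrics on synthetic stand-ins for GMM mean supervectors; it illustrates why a direction-based metric can be preferable when vector magnitude varies, and is not the paper's experimental setup.

      # Cosine vs. Euclidean distance on synthetic "supervectors": direction,
      # not magnitude, carries the identity information in this toy example.
      import numpy as np

      rng = np.random.default_rng(5)
      base = rng.normal(size=512)                             # "speaker" direction
      same_speaker = 1.8 * base + 0.1 * rng.normal(size=512)  # rescaled + small noise
      other_speaker = rng.normal(size=512)

      def cosine_distance(a, b):
          return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

      def euclidean_distance(a, b):
          return np.linalg.norm(a - b)

      for label, v in (("same speaker (rescaled)", same_speaker),
                       ("different speaker", other_speaker)):
          print(f"{label}: cosine={cosine_distance(base, v):.3f}  "
                f"euclidean={euclidean_distance(base, v):.1f}")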

  9. Assessment of Performance Measures for Security of the Maritime Transportation Network, Port Security Metrics : Proposed Measurement of Deterrence Capability

    DOT National Transportation Integrated Search

    2007-01-03

    This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...

  10. 75 FR 14588 - Proposed Agency Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-26

    DOE is developing an information collection to gather data on the status of activities, project progress, jobs created and retained, spend rates, and performance metrics under the American Recovery and Reinvestment Act of...

  11. Stronger by Degrees: 2012-13 Accountability Report

    ERIC Educational Resources Information Center

    Kentucky Council on Postsecondary Education, 2014

    2014-01-01

    The annual "Accountability Report" produced by the Council on Postsecondary Education highlights the system's performance on the state-level metrics included in "Stronger by Degrees: A Strategic Agenda for Kentucky Postsecondary and Adult Education." For each metric, we outline steps taken to improve performance, as well as…

  12. Motorcycle Mechanic. Teacher Edition.

    ERIC Educational Resources Information Center

    Baugus, Mickey; Fulkerson, Dan, Ed.

    These teacher's materials are for a 19-unit competency-based course on entry-level motorcycle mechanics at the secondary and postsecondary levels. The 19 units are: (1) introduction to motorcycle repair; (2) general safety; (3) tools and equipment; (4) metric measurements; (5) fasteners; (6) service department operations; (7) motorcycle engines;…

  13. Atomic Cluster Ionization and Attosecond Generation at Long Wavelengths

    DTIC Science & Technology

    2015-10-31

    ... In order to investigate this, we use a modified Lewenstein quantum model in which the cluster is represented by a 1D Coulomb potential for the parent ion.

  14. The balanced scorecard: sustainable performance assessment for forensic laboratories.

    PubMed

    Houck, Max; Speaker, Paul J; Fleming, Arron Scott; Riley, Richard A

    2012-12-01

    The purpose of this article is to introduce the concept of the balanced scorecard into the laboratory management environment. The balanced scorecard is a performance measurement matrix designed to capture financial and non-financial metrics that provide insight into the critical success factors for an organization, effectively aligning organization strategy to key performance objectives. The scorecard helps organizational leaders by providing balance from two perspectives. First, it ensures an appropriate mix of performance metrics from across the organization to achieve operational excellence; thereby the balanced scorecard ensures that no single or limited group of metrics dominates the assessment process, possibly leading to long-term inferior performance. Second, the balanced scorecard helps leaders offset short term performance pressures by giving recognition and weight to long-term laboratory needs that, if not properly addressed, might jeopardize future laboratory performance. Copyright © 2012 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Development of an Expanded, High Reliability Cost and Performance Database for In Situ Remediation Technologies

    DTIC Science & Technology

    2016-03-01

    ... permanganate vs. peroxide/Fenton's for chemical oxidation. Poorer performance was generally observed when the total CVOC was the contaminant metric ... using a soluble carbon substrate (lactate), chemical oxidation using Fenton's reagent, and chemical oxidation using potassium permanganate. Tabulated remediation performance data include the permanganate treatment of an emplaced DNAPL source (Thomson et al., 2007).

  16. A novel ECG detector performance metric and its relationship with missing and false heart rate limit alarms.

    PubMed

    Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G

    Performance of ECG beat detectors is traditionally assessed on long intervals (e.g.: 30min), but only incorrect detections within a short interval (e.g.: 10s) may cause incorrect (i.e., missed+false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on distribution of incorrect beat detection over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over long interval (sensitivity and positive predictive value over 30min) and short interval (Area Under empirical cumulative distribution function (AUecdf) for short interval (i.e., 10s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on long interval (i.e., 30min) (ρ=-0.8 and p<0.05) and AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30min grouped the two detectors with lowest false alarm rates while AUecdf for sensitivity provided further information to identify the two beat detectors with highest false alarm rates as well, which was inseparable with sensitivity over 30min. Short interval performance metrics can provide insights on the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
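    One hedged reading of a short-interval metric in this spirit: compute beat-detection sensitivity per 10-second window and take the area under the empirical CDF of those window values. The exact construction of AUecdf in the paper may differ; the window sensitivities below are synthetic.

      # Hedged sketch: area under the empirical CDF of per-window (10 s)
      # sensitivities over a 30-minute record.  Window values are synthetic.
      import numpy as np

      rng = np.random.default_rng(6)
      window_sensitivity = np.clip(rng.normal(0.97, 0.05, size=180), 0.0, 1.0)

      def area_under_ecdf(values, grid=np.linspace(0.0, 1.0, 1001)):
          values = np.sort(values)
          ecdf = np.searchsorted(values, grid, side="right") / values.size
          return np.trapz(ecdf, grid)

      print(f"mean sensitivity over 30 min: {window_sensitivity.mean():.3f}")
      print(f"area under ecdf of 10 s sensitivities: {area_under_ecdf(window_sensitivity):.3f}")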

  17. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    PubMed

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)

  18. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    PubMed

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For the level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating imprecision as the area requiring improvement, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
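    A short sketch of the standard laboratory sigma metric and quality goal index (QGI) calculations used in studies like the two above, with sigma = (TEa - |bias|) / CV and QGI = |bias| / (1.5 * CV), all in percent. The analyte names, allowable errors, and values below are hypothetical, not the studies' data.

      # Standard sigma metric and QGI formulas; analyte values are hypothetical.
      analytes = {
          # name: (TEa %, bias %, CV %)
          "ALP":         (30.0, 4.0, 3.5),
          "Cholesterol": (9.0,  4.5, 2.0),
          "Potassium":   (5.8,  2.0, 1.8),
      }

      for name, (tea, bias, cv) in analytes.items():
          sigma = (tea - abs(bias)) / cv
          qgi = abs(bias) / (1.5 * cv)
          problem = "imprecision" if qgi < 0.8 else ("inaccuracy" if qgi > 1.2 else "both")
          print(f"{name}: sigma = {sigma:.1f}, QGI = {qgi:.2f} -> improve {problem}")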

  19. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) Jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  20. Evaluation of powertrain solutions for future tactical truck vehicle systems

    NASA Astrophysics Data System (ADS)

    Pisu, Pierluigi; Cantemir, Codrin-Gruie; Dembski, Nicholas; Rizzoni, Giorgio; Serrao, Lorenzo; Josephson, John R.; Russell, James

    2006-05-01

    The article presents the results of a large-scale design space exploration for the hybridization of two off-road vehicles, part of the Future Tactical Truck System (FTTS) family: Maneuver Sustainment Vehicle (MSV) and Utility Vehicle (UV). Series hybrid architectures are examined. The objective of the paper is to illustrate a novel design methodology that allows for the choice of the optimal values of several vehicle parameters. The methodology consists of an extensive design space exploration, which involves running a large number of computer simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles, and multiple attributes of performance are measured. The resulting designs are filtered to choose the design tradeoffs that best satisfy the performance and fuel economy requirements. In the end, a few promising vehicle configurations are selected that will need additional detailed investigation, including metrics neglected here, such as ride and drivability. Several powertrain architectures have been simulated. The design parameters include the number of axles in the vehicle (2 or 3), the number of electric motors per axle (1 or 2), the type of internal combustion engine, and the type and quantity of energy storage system devices (batteries, electrochemical capacitors or both together). An energy management control strategy has also been developed to provide efficiency and performance. The control parameters are tunable and have been included in the design space exploration. The results show that the internal combustion engine and the energy storage system devices are extremely important for vehicle performance.

  1. Dose-volume metrics and their relation to memory performance in pediatric brain tumor patients: A preliminary study.

    PubMed

    Raghubar, Kimberly P; Lamba, Michael; Cecil, Kim M; Yeates, Keith Owen; Mahone, E Mark; Limke, Christina; Grosshans, David; Beckwith, Travis J; Ris, M Douglas

    2018-06-01

    Advances in radiation treatment (RT), specifically volumetric planning with detailed dose and volumetric data for specific brain structures, have provided new opportunities to study neurobehavioral outcomes of RT in children treated for brain tumor. The present study examined the relationship between biophysical and physical dose metrics and neurocognitive ability, namely learning and memory, 2 years post-RT in pediatric brain tumor patients. The sample consisted of 26 pediatric patients with brain tumor, 14 of whom completed neuropsychological evaluations on average 24 months post-RT. Prescribed dose and dose-volume metrics for specific brain regions were calculated including physical metrics (i.e., mean dose and maximum dose) and biophysical metrics (i.e., integral biological effective dose and generalized equivalent uniform dose). We examined the associations between dose-volume metrics (whole brain, right and left hippocampus), and performance on measures of learning and memory (Children's Memory Scale). Biophysical dose metrics were highly correlated with the physical metric of mean dose but not with prescribed dose. Biophysical metrics and mean dose, but not prescribed dose, correlated with measures of learning and memory. These preliminary findings call into question the value of prescribed dose for characterizing treatment intensity; they also suggest that biophysical dose has only a limited advantage compared to physical dose when calculated for specific regions of the brain. We discuss the implications of the findings for evaluating and understanding the relation between RT and neurocognitive functioning. © 2018 Wiley Periodicals, Inc.
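    A hedged sketch of one biophysical dose metric named above, the generalized equivalent uniform dose, computed from a dose-volume histogram as gEUD = (sum_i v_i * d_i^a)^(1/a), where v_i are fractional volumes receiving dose d_i and a is a tissue-specific parameter. The DVH bins and the value of a below are illustrative only.

      # Generalized equivalent uniform dose from an illustrative DVH.
      import numpy as np

      dose_gy = np.array([5.0, 15.0, 25.0, 35.0, 45.0])         # dose-bin centers (Gy)
      vol_fraction = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # fraction of structure

      def geud(dose, volume, a):
          volume = volume / volume.sum()
          return (np.sum(volume * dose ** a)) ** (1.0 / a)

      for a in (1.0, 4.0, 10.0):
          print(f"a = {a:>4.1f}: gEUD = {geud(dose_gy, vol_fraction, a):.1f} Gy "
                f"(mean dose = {np.sum(vol_fraction * dose_gy):.1f} Gy)")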

  2. Diagram of the Saturn V Launch Vehicle in Metric

    NASA Technical Reports Server (NTRS)

    1971-01-01

    This is a good cutaway diagram of the Saturn V launch vehicle showing the three stages, the instrument unit, and the Apollo spacecraft. The chart on the right presents the basic technical data in clear metric detail. The Saturn V is the largest and most powerful launch vehicle in the United States. The towering, 111 meter, Saturn V was a multistage, multiengine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams. Development of the Saturn V was the responsibility of the Marshall Space Flight Center at Huntsville, Alabama, directed by Dr. Wernher von Braun.

  3. Library Economic Metrics: Examples of the Comparison of Electronic and Print Journal Collections and Collection Services.

    ERIC Educational Resources Information Center

    King, Donald W.; Boyce, Peter B.; Montgomery, Carol Hansen; Tenopir, Carol

    2003-01-01

    Focuses on library economic metrics, and presents a conceptual framework for library economic metrics including service input and output, performance, usage, effectiveness, outcomes, impact, and cost and benefit comparisons. Gives examples of these measures for comparison of library electronic and print collections and collection services.…

  4. Synchronization of multi-agent systems with metric-topological interactions.

    PubMed

    Wang, Lin; Chen, Guanrong

    2016-09-01

    A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all the agents within a circle of a constant radius, and can furthermore interact with some distant agents to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions imposed only on system parameters and agent initial states are presented, which ensure achieving synchronization of the whole group of agents. It reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed are investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. Practically in all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving highest frequency of synchronization, fastest convergent rate, and smallest heading difference.
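    A minimal sketch of the hybrid neighbor rule described above: keep every agent within a fixed interaction radius and, if that yields fewer than a pre-assigned number of neighbors, add the nearest remaining agents. The positions, radius, and k below are illustrative assumptions.

      # Hybrid metric-topological neighbor selection (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(7)
      positions = rng.uniform(0, 10, size=(25, 2))   # planar agent positions

      def hybrid_neighbors(i, positions, radius=1.5, k=5):
          d = np.linalg.norm(positions - positions[i], axis=1)
          d[i] = np.inf                                   # exclude the agent itself
          neighbors = set(np.flatnonzero(d <= radius))    # metric rule
          if len(neighbors) < k:                          # topological top-up
              for j in np.argsort(d):                     # nearest first
                  if len(neighbors) >= k:
                      break
                  neighbors.add(int(j))
          return sorted(neighbors)

      print("agent 0 neighbors:", hybrid_neighbors(0, positions))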

  5. 77 FR 12832 - Non-RTO/ISO Performance Metrics; Commission Staff Request Comments on Performance Metrics for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... peak hours; and (4) additional information on equipment types affected and kV of lines affected. ... Regulatory Commission (Commission or FERC), among other actions, work with regional transmission ... [FERC-922 estimated reporting burden table omitted.]

  6. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  7. The Consequences of Using One Assessment System to Pursue Two Objectives

    ERIC Educational Resources Information Center

    Neal, Derek

    2013-01-01

    Education officials often use one assessment system both to create measures of student achievement and to create performance metrics for educators. However, modern standardized testing systems are not designed to produce performance metrics for teachers or principals. They are designed to produce reliable measures of individual student achievement…

  8. Algal bioassessment metrics for wadeable streams and rivers of Maine, USA

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth

    2011-01-01

    Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.

  9. The use of the general image quality equation in the design and evaluation of imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Florio, Christopher J.; Duvall, David J.; Leon, Michael A.

    2009-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. The National Imagery Interpretability Rating Scale (NIIRS) is a useful measure of image quality, because, by characterizing the overall interpretability of an image, it combines into one metric those contributors to image quality to which a human interpreter is most sensitive. The main drawback to using a NIIRS rating as a measure of image quality in engineering trade studies is the fact that it is tied to the human observer and cannot be predicted from physical principles and engineering parameters alone. The General Image Quality Equation (GIQE) of Leachtenauer et al. 1997 [Appl. Opt. 36, 8322-8328 (1997)] is a regression of actual image analyst NIIRS ratings vs. readily calculable engineering metrics, and provides a mechanism for using the expected NIIRS rating of an imaging system in the design and evaluation process. In this paper, we will discuss how we use the GIQE in conjunction with The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) to evaluate imager designs, taking a hypothetical high resolution commercial imaging system as an example.
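    As a concrete illustration of how a GIQE-style regression turns engineering parameters into a predicted NIIRS rating, the sketch below uses the commonly quoted GIQE 4 form; the coefficients are reproduced from memory and should be treated as assumptions to verify against Leachtenauer et al. (1997) before use.

    ```python
    # Sketch of a GIQE 4-style NIIRS prediction (coefficients assumed; verify before use).
    # Inputs: geometric-mean ground sample distance in inches, relative edge response,
    # edge overshoot H, and noise gain divided by SNR.
    import math

    def giqe4_niirs(gsd_inches, rer, h, g_over_snr):
        a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
        return (10.251 - a * math.log10(gsd_inches) + b * math.log10(rer)
                - 0.656 * h - 0.344 * g_over_snr)

    print(round(giqe4_niirs(gsd_inches=12.0, rer=0.9, h=1.0, g_over_snr=0.1), 2))
    ```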

  10. Early Warning Look Ahead Metrics: The Percent Milestone Backlog Metric

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Anderson, Timothy P.

    2017-01-01

    All complex development projects experience delays and corresponding backlogs of their project control milestones during their acquisition lifecycles. NASA Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) teamed with The Aerospace Corporation (Aerospace) to develop a collection of Early Warning Look Ahead metrics that would provide GSFC leadership with some independent indication of the programmatic health of GSFC flight projects. As part of the collection of Early Warning Look Ahead metrics, the Percent Milestone Backlog metric is particularly revealing, and has utility as a stand-alone execution performance monitoring tool. This paper describes the purpose, development methodology, and utility of the Percent Milestone Backlog metric. The other four Early Warning Look Ahead metrics are also briefly discussed. Finally, an example of the use of the Percent Milestone Backlog metric in providing actionable insight is described, along with examples of its potential use in other commodities.

  11. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models perform the discrimination tasks (detection, target orientation, recognition, and identification) for military targets based upon minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers, given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe the overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; the unresolved questions in performance assessment of TIS are indicated; and, last, the development directions of performance models for TIS are discussed.

  12. DHLAS: A web-based information system for statistical genetic analysis of HLA population data.

    PubMed

    Thriskos, P; Zintzaras, E; Germenis, A

    2007-03-01

    DHLAS (database HLA system) is a user-friendly, web-based information system for the analysis of human leukocyte antigen (HLA) data from population studies. DHLAS has been developed using JAVA and the R system; it runs on a Java Virtual Machine, and its web-based user interface is powered by the servlet engine TOMCAT. It utilizes STRUTS, a Model-View-Controller framework, and uses several GNU packages to perform several of its tasks. The database engine it relies upon for fast access is MySQL, but others can be used as well. The system estimates metrics, performs statistical testing, and produces graphs required for HLA population studies: (i) Hardy-Weinberg equilibrium (calculated using both asymptotic and exact tests), (ii) genetic distances (Euclidean or Nei), (iii) phylogenetic trees using the unweighted pair group method with averages and the neighbor-joining method, (iv) linkage disequilibrium (pairwise and overall, including variance estimations), (v) haplotype frequencies (estimated using the expectation-maximization algorithm), and (vi) discriminant analysis. The main merit of DHLAS is the incorporation of a database; thus, the data can be stored and manipulated along with integrated genetic data analysis procedures. In addition, it has an open architecture allowing the inclusion of other functions and procedures.

  13. Quantifying losses and thermodynamic limits in nanophotonic solar cells

    NASA Astrophysics Data System (ADS)

    Mann, Sander A.; Oener, Sebastian Z.; Cavalli, Alessandro; Haverkort, Jos E. M.; Bakkers, Erik P. A. M.; Garnett, Erik C.

    2016-12-01

    Nanophotonic engineering shows great potential for photovoltaics: the record conversion efficiencies of nanowire solar cells are increasing rapidly and the record open-circuit voltages are becoming comparable to the records for planar equivalents. Furthermore, it has been suggested that certain nanophotonic effects can reduce costs and increase efficiencies with respect to planar solar cells. These effects are particularly pronounced in single-nanowire devices, where two out of the three dimensions are subwavelength. Single-nanowire devices thus provide an ideal platform to study how nanophotonics affects photovoltaics. However, for these devices the standard definition of power conversion efficiency no longer applies, because the nanowire can absorb light from an area much larger than its own size. Additionally, the thermodynamic limit on the photovoltage is unknown a priori and may be very different from that of a planar solar cell. This complicates the characterization and optimization of these devices. Here, we analyse an InP single-nanowire solar cell using intrinsic metrics to place its performance on an absolute thermodynamic scale and pinpoint performance loss mechanisms. To determine these metrics we have developed an integrating sphere microscopy set-up that enables simultaneous and spatially resolved quantitative absorption, internal quantum efficiency (IQE) and photoluminescence quantum yield (PLQY) measurements. For our record single-nanowire solar cell, we measure a photocurrent collection efficiency of >90% and an open-circuit voltage of 850 mV, which is 73% of the thermodynamic limit (1.16 V).

  14. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Dynamic allocation of attention to metrical and grouping accents in rhythmic sequences.

    PubMed

    Kung, Shu-Jen; Tzeng, Ovid J L; Hung, Daisy L; Wu, Denise H

    2011-04-01

    Most people find it easy to perform rhythmic movements in synchrony with music, which reflects their ability to perceive the temporal periodicity and to allocate attention in time accordingly. Musicians and non-musicians were tested in a click localization paradigm in order to investigate how grouping and metrical accents in metrical rhythms influence attention allocation, and to reveal the effect of musical expertise on such processing. We performed two experiments in which the participants were required to listen to isochronous metrical rhythms containing superimposed clicks and then to localize the click on graphical and ruler-like representations with and without grouping structure information, respectively. Both experiments revealed metrical and grouping influences on click localization. Musical expertise improved the precision of click localization, especially when the click coincided with a metrically strong beat. Critically, although all participants located the click accurately at the beginning of an intensity group, only musicians located it precisely when it coincided with a strong beat at the end of the group. Removal of the visual cue of grouping structures enhanced these effects in musicians and reduced them in non-musicians. These results indicate that musical expertise not only enhances attention to metrical accents but also heightens sensitivity to perceptual grouping.

  16. Resilience-based performance metrics for water resources management under uncertainty

    NASA Astrophysics Data System (ADS)

    Roach, Tom; Kapelan, Zoran; Ledbetter, Ralph

    2018-06-01

    This paper aims to develop new, resilience type metrics for long-term water resources management under uncertain climate change and population growth. Resilience is defined here as the ability of a water resources management system to 'bounce back', i.e. absorb and then recover from a water deficit event, restoring the normal system operation. Ten alternative metrics are proposed and analysed addressing a range of different resilience aspects including duration, magnitude, frequency and volume of related water deficit events. The metrics were analysed on a real-world case study of the Bristol Water supply system in the UK and compared with current practice. The analyses included an examination of metrics' sensitivity and correlation, as well as a detailed examination into the behaviour of metrics during water deficit periods. The results obtained suggest that multiple metrics which cover different aspects of resilience should be used simultaneously when assessing the resilience of a water resources management system, leading to a more complete understanding of resilience compared with current practice approaches. It was also observed that calculating the total duration of a water deficit period provided a clearer and more consistent indication of system performance compared to splitting the deficit periods into the time to reach and time to recover from the worst deficit events.
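    To make the deficit-event bookkeeping behind such metrics concrete, the sketch below computes a few resilience-type quantities (frequency, longest duration, worst magnitude, and total volume of deficit events) from daily supply and demand series; the function and metric names are illustrative, not the paper's.

    ```python
    # Hypothetical sketch: resilience-type metrics from daily supply/demand series.
    import numpy as np

    def deficit_events(supply, demand):
        """Split a daily series into contiguous deficit events (supply < demand)."""
        deficit = np.maximum(demand - supply, 0.0)
        events, current = [], []
        for d in deficit:
            if d > 0:
                current.append(d)
            elif current:
                events.append(np.array(current))
                current = []
        if current:
            events.append(np.array(current))
        return events

    def resilience_metrics(supply, demand):
        ev = deficit_events(supply, demand)
        if not ev:
            return {"frequency": 0, "max_duration": 0, "max_magnitude": 0.0, "total_volume": 0.0}
        return {
            "frequency": len(ev),                       # number of deficit events
            "max_duration": max(len(e) for e in ev),    # longest event, in days
            "max_magnitude": max(e.max() for e in ev),  # worst single-day deficit
            "total_volume": sum(e.sum() for e in ev),   # cumulative unmet demand
        }
    ```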

  17. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rating of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological model was usually inadequate to well capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, which could represent the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than those of the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rating. However, the model performance of middle flow magnitude was not significantly improved because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to the uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably and the simulated hydrological processes by the multi-metric calibration became more reliable, because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for the integrated water resources management, and to improve the simulation performances of overall flow regimes.

  18. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed over this symbol alphabet: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters, and streamflow time series were less random and more complex than those of precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
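    The symbolization and one of the information-theory metrics can be sketched as follows, under the assumption of quantile binning and block entropies; the alphabet size and block length are illustrative choices, not those of the study.

    ```python
    # Illustrative sketch (assumptions, not the authors' code): symbolize a streamflow
    # series by quantile, then estimate mean information gain as the conditional
    # block entropy H(L+1) - H(L).
    import numpy as np
    from collections import Counter

    def symbolize(series, n_symbols=4):
        """Map each value to the index of its quantile bin (alphabet of size n_symbols)."""
        edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(series, edges)

    def block_entropy(symbols, L):
        blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
        counts = np.array(list(Counter(blocks).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def mean_information_gain(symbols, L=3):
        return block_entropy(symbols, L + 1) - block_entropy(symbols, L)
    ```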

  19. White Matter Microstructural Abnormalities in Type 2 Diabetes Mellitus: A Diffusional Kurtosis Imaging Analysis.

    PubMed

    Xie, Y; Zhang, Y; Qin, W; Lu, S; Ni, C; Zhang, Q

    2017-03-01

    A growing number of DTI studies have demonstrated that white matter microstructural abnormalities play an important role in type 2 diabetes mellitus-related cognitive impairment. In this study, the diffusional kurtosis imaging method was used to investigate WM microstructural alterations in patients with type 2 diabetes mellitus and to detect associations between diffusional kurtosis imaging metrics and clinical/cognitive measurements. Diffusional kurtosis imaging and cognitive assessments were performed on 58 patients with type 2 diabetes mellitus and 58 controls. Voxel-based intergroup comparisons of diffusional kurtosis imaging metrics were conducted, and ROI-based intergroup comparisons were further performed. Correlations between the diffusional kurtosis imaging metrics and cognitive/clinical measurements were assessed after controlling for age, sex, and education in both patients and controls. Altered diffusion metrics were observed in the corpus callosum, the bilateral frontal WM, the right superior temporal WM, the left external capsule, and the pons in patients with type 2 diabetes mellitus compared with controls. The splenium of the corpus callosum and the pons had abnormal kurtosis metrics in patients with type 2 diabetes mellitus. Additionally, altered diffusion metrics in the right prefrontal WM were significantly correlated with disease duration and attention task performance in patients with type 2 diabetes mellitus. With both conventional diffusion and additional kurtosis metrics, diffusional kurtosis imaging can provide additional information on WM microstructural abnormalities in patients with type 2 diabetes mellitus. Our results indicate that WM microstructural abnormalities occur before cognitive decline and may be used as neuroimaging markers for predicting early cognitive impairment in patients with type 2 diabetes mellitus. © 2017 by American Journal of Neuroradiology.

  20. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  1. Interactive Mapping of Inundation Metrics Using Cloud Computing for Improved Floodplain Conservation and Management

    NASA Astrophysics Data System (ADS)

    Bulliner, E. A., IV; Lindner, G. A.; Bouska, K.; Paukert, C.; Jacobson, R. B.

    2017-12-01

    Within large-river ecosystems, floodplains serve a variety of important ecological functions. A recent survey of 80 managers of floodplain conservation lands along the Upper and Middle Mississippi and Lower Missouri Rivers in the central United States found that the most critical information needed to improve floodplain management centered on metrics for characterizing depth, extent, frequency, duration, and timing of inundation. These metrics can be delivered to managers efficiently through cloud-based interactive maps. To calculate these metrics, we interpolated an existing one-dimensional hydraulic model for the Lower Missouri River, which simulated water surface elevations at cross sections spaced (<1 km) to sufficiently characterize water surface profiles along an approximately 800 km stretch upstream from the confluence with the Mississippi River over an 80-year record at a daily time step. To translate these water surface elevations to inundation depths, we subtracted a merged terrain model consisting of floodplain LIDAR and bathymetric surveys of the river channel. This approach resulted in a 29000+ day time series of inundation depths across the floodplain using grid cells with 30 m spatial resolution. Initially, we used these data on a local workstation to calculate a suite of nine spatially distributed inundation metrics for the entire model domain. These metrics are calculated on a per pixel basis and encompass a variety of temporal criteria generally relevant to flora and fauna of interest to floodplain managers, including, for example, the average number of days inundated per year within a growing season. Using a local workstation, calculating these metrics for the entire model domain requires several hours. However, for the needs of individual floodplain managers working at site scales, these metrics may be too general and inflexible. Instead of creating a priori a suite of inundation metrics able to satisfy all user needs, we present the usage of Google's cloud-based Earth Engine API to allow users to define and query their own inundation metrics from our dataset and produce maps nearly instantaneously. This approach allows users to select the time periods and inundation depths germane to managing local species, potentially facilitating conservation of floodplain ecosystems.
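    A simplified, NumPy-only sketch of one such per-pixel metric (the average number of days per year a pixel is inundated within a growing season) is shown below; the array layout, growing-season months, and depth threshold are assumptions rather than details from the study, which used Google Earth Engine.

    ```python
    # Illustrative per-pixel inundation metric (not the study's Earth Engine code).
    import numpy as np

    def mean_growing_season_inundation(depths, dates, season=(4, 9), threshold=0.0):
        """depths: (n_days, rows, cols) daily inundation-depth grid; dates: datetimes."""
        years = np.array([d.year for d in dates])
        in_season = np.array([season[0] <= d.month <= season[1] for d in dates])
        per_year = []
        for y in np.unique(years):
            sel = (years == y) & in_season
            per_year.append((depths[sel] > threshold).sum(axis=0))  # days inundated that year
        return np.mean(per_year, axis=0)                            # average over years, per pixel
    ```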

  2. Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM

    NASA Technical Reports Server (NTRS)

    Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip

    2017-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.

  3. Simulation of gaseous pollutant dispersion around an isolated building using the k-ω SST (shear stress transport) turbulence model.

    PubMed

    Yu, Hesheng; Thé, Jesse

    2017-05-01

    The dispersion of gaseous pollutants around buildings is complex due to turbulence features such as flow detachment and zones of high shear. Computational fluid dynamics (CFD) models are one of the most promising tools to describe the pollutant distribution in the near field of buildings. Reynolds-averaged Navier-Stokes (RANS) models are the most commonly used CFD techniques to address turbulent transport of the pollutant. This research work studies the use of the k-ω SST closure model for gas dispersion around a building by fully resolving the viscous sublayer for the first time. The performance of the standard k-ε model is also included for comparison, along with results of an extensively validated Gaussian dispersion model, the U.S. Environmental Protection Agency (EPA) AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model). This study's CFD models apply the standard k-ε and the k-ω SST turbulence models to obtain the wind flow field. A passive concentration transport equation is then solved on the resolved flow field to simulate the distribution of pollutant concentrations. The resulting simulations of both the wind flow and concentration fields are validated rigorously against extensive data using multiple validation metrics. The wind flow field can be acceptably modeled by the standard k-ε model; however, that model fails to simulate the gas dispersion. The k-ω SST model outperforms the standard k-ε model in both flow and dispersion simulations, with higher hit rates for dimensionless velocity components and a higher "factor of 2" of observations (FAC2) for normalized concentration. All validation metrics of the k-ω SST model pass the quality assurance criteria recommended by the Association of German Engineers (Verein Deutscher Ingenieure, VDI) guideline; furthermore, these metrics are better than or equal to those reported in the literature. Comparison between the performance of the k-ω SST model and AERMOD shows that the CFD simulation is superior to the Gaussian-type model for pollutant dispersion in the near wake of obstacles. AERMOD can serve as a screening tool for near-field gas dispersion owing to its expeditious calculation and its ability to handle complicated cases. The use of the k-ω SST model to simulate gaseous pollutant dispersion around an isolated building is appropriate and is expected to be suitable for complex urban environments. Multiple validation metrics of the k-ω SST turbulence model quantitatively indicated that this turbulence model is appropriate for the simulation of gas dispersion around buildings. CFD is, therefore, an attractive alternative to wind tunnel testing for modeling gas dispersion in urban environments due to its excellent performance and lower cost.
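    For reference, the two validation metrics named above can be computed as in the sketch below, using their commonly cited definitions; the relative and absolute tolerances in the hit rate are assumptions here, since the VDI guideline specifies the exact values.

    ```python
    # Minimal sketch of two dispersion-validation metrics: FAC2 is the fraction of
    # predictions within a factor of two of the observations; the hit rate counts
    # predictions within a relative error D or an absolute error W of the observation.
    import numpy as np

    def fac2(predicted, observed):
        ratio = predicted / observed
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

    def hit_rate(predicted, observed, D=0.25, W=0.06):
        rel_ok = np.abs(predicted - observed) <= D * np.abs(observed)
        abs_ok = np.abs(predicted - observed) <= W
        return np.mean(rel_ok | abs_ok)
    ```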

  4. Detecting population recovery using gametic disequilibrium-based effective population size estimates

    Treesearch

    David A. Tallmon; Robin S. Waples; Dave Gregovich; Michael K. Schwartz

    2012-01-01

    Recovering populations often must meet specific growth rate or abundance targets before their legal status can be changed from endangered or threatened. While the efficacy, power, and performance of population metrics to infer trends in declining populations has received considerable attention, how these same metrics perform when populations are increasing is less...

  5. Language Games: University Responses to Ranking Metrics

    ERIC Educational Resources Information Center

    Heffernan, Troy A.; Heffernan, Amanda

    2018-01-01

    League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…

  6. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    ERIC Educational Resources Information Center

    Ahmed, Irfan; Bhatti, Arif

    2016-01-01

    Evocative evaluation of assessment data is essential to quantify the achievements at course and program levels. The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at the course levels for program accreditation. Even though…

  7. 75 FR 26839 - Metrics and Standards for Intercity Passenger Rail Service under Section 207 of the Passenger...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... performance and service quality of intercity passenger train operations. In compliance with the statute, the FRA and Amtrak jointly drafted performance metrics and standards for intercity passenger rail service... and Standards for Intercity Passenger Rail Service under Section 207 of the Passenger Rail Investment...

  8. Performance evaluation of no-reference image quality metrics for face biometric images

    NASA Astrophysics Data System (ADS)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.

  9. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution; hence, a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  10. Evaluation of solid particle number and black carbon for very low particulate matter emissions standards in light-duty vehicles.

    PubMed

    Chang, M-C Oliver; Shields, J Erin

    2017-06-01

    To reliably measure at the low particulate matter (PM) levels needed to meet California's Low Emission Vehicle (LEV III) 3- and 1-mg/mile PM standards, various approaches other than gravimetric measurement have been suggested for testing purposes. In this work, a feasibility study of solid particle number (SPN, d50 = 23 nm) and black carbon (BC) as alternatives to gravimetric PM mass was conducted, based on the relationship of these two metrics to gravimetric PM mass, as well as the variability of each of these metrics. More than 150 Federal Test Procedure (FTP-75) or Supplemental Federal Test Procedure (US06) tests were conducted on 46 light-duty vehicles, including port-fuel-injected and direct-injected gasoline vehicles, as well as several light-duty diesel vehicles equipped with diesel particle filters (LDD/DPF). For FTP tests, emission variability of gravimetric PM mass was found to be slightly less than that of either SPN or BC, whereas the opposite was observed for US06 tests. Emission variability of PM mass for LDD/DPF was higher than that of both SPN and BC, primarily because of higher PM mass measurement uncertainties (background and precision) near or below 0.1 mg/mile. While strong correlations were observed from both SPN and BC to PM mass, the slopes are dependent on engine technologies and driving cycles, and the proportionality between the metrics can vary over the course of the test. Replacement of the LEV III PM mass emission standard with another measurement metric may imperil the effectiveness of emission reduction, as a correlation-based relationship may evolve over future technologies for meeting stringent greenhouse standards. Solid particle number and black carbon were suggested in place of PM mass for the California LEV III 1-mg/mile FTP standard. Their equivalence, proportionality, and emission variability in comparison to PM mass, based on the large light-duty vehicle fleet examined, are dependent on engine technologies and driving cycles. Such empirically derived correlations exhibit the limitation of using these metrics for enforcement and certification standards as vehicle combustion and after-treatment technologies advance.

  11. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE PAGES

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James; ...

    2017-09-21

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.

  12. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.

  13. Health and Well-Being Metrics in Business: The Value of Integrated Reporting.

    PubMed

    Pronk, Nicolaas P; Malan, Daniel; Christie, Gillian; Hajat, Cother; Yach, Derek

    2018-01-01

    Health and well-being (HWB) are material to sustainable business performance. Yet, corporate reporting largely lacks the intentional inclusion of HWB metrics. This brief report presents an argument for inclusion of HWB metrics into existing standards for corporate reporting. A Core Scorecard and a Comprehensive Scorecard, designed by a team of subject matter experts, based on available evidence of effectiveness, and organized around the categories of Governance, Management, and Evidence of Success, may be integrated into corporate reporting efforts. Pursuit of corporate integrated reporting requires corporate governance and ethical leadership and values that ultimately align with environmental, social, and economic performance. Agreement on metrics that intentionally include HWB may allow for integrated reporting that has the potential to yield significant value for business and society alike.

  14. Using Publication Metrics to Highlight Academic Productivity and Research Impact

    PubMed Central

    Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.

    2016-01-01

    This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes such as departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141

  15. Robustness Metrics: How Are They Calculated, When Should They Be Used and Why Do They Give Different Results?

    NASA Astrophysics Data System (ADS)

    McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.

    2018-02-01

    Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
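    As a small illustration of how different robustness metrics can rank the same alternatives differently, the sketch below computes a maximin score (absolute performance, risk-averse) and a minimax-regret score from a performance matrix; the matrix layout and values are hypothetical.

    ```python
    # Hedged sketch of two common robustness metrics: maximin on absolute performance
    # and minimax regret. Rows = decision alternatives, columns = scenarios; higher is better.
    import numpy as np

    def maximin(performance):
        """Rank alternatives by their worst-case performance (risk-averse, absolute)."""
        return performance.min(axis=1)

    def minimax_regret(performance):
        """Rank alternatives by their largest regret relative to the best choice per scenario."""
        regret = performance.max(axis=0) - performance
        return -regret.max(axis=1)   # negated so that larger is still better

    perf = np.array([[0.9, 0.4, 0.7],
                     [0.6, 0.6, 0.6],
                     [0.8, 0.5, 0.5]])
    print(maximin(perf), minimax_regret(perf))  # the two metrics need not agree on the ranking
    ```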

  16. Multiple symbol partially coherent detection of MPSK

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1992-01-01

    It is shown that by using the known (or estimated) value of carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
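    The structure of the decision rule described above, a weighted sum of the coherent correlation metric and the noncoherent envelope metric, can be sketched as below; the fixed weight standing in for the loop-SNR-dependent weighting is only a placeholder, since the paper derives the exact maximum-likelihood form.

    ```python
    # Hedged sketch of a partially coherent MPSK decision metric: a weighted sum of
    # the ideal coherent metric (real part of the correlation) and the noncoherent
    # metric (envelope of the correlation). The weight is a placeholder.
    import numpy as np

    def decision_metric(received, candidate, loop_snr_weight=0.8):
        corr = np.sum(received * np.conj(candidate))   # complex correlation over the block
        coherent = corr.real                           # ideal coherent metric
        noncoherent = np.abs(corr)                     # optimum for differential detection
        return loop_snr_weight * coherent + (1.0 - loop_snr_weight) * noncoherent

    def detect(received, candidates):
        """Pick the candidate MPSK symbol sequence with the largest metric."""
        scores = [decision_metric(received, c) for c in candidates]
        return int(np.argmax(scores))
    ```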

  17. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
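    A generic sketch of the threshold-calibration idea (not the CNV-ROC implementation itself): per-probe calls from the higher-resolution array act as truth, an ROC sweep over |log2 ratio| thresholds is performed, and the threshold maximizing Youden's J is returned.

    ```python
    # Hypothetical per-probe ROC calibration of a log2-ratio threshold.
    import numpy as np

    def calibrate_log2_threshold(log2_ratios, truth, thresholds=np.linspace(0.1, 1.0, 19)):
        """truth: boolean per-probe CNV status taken from the higher-resolution platform."""
        best = None
        for t in thresholds:
            call = np.abs(log2_ratios) >= t
            tpr = (call & truth).sum() / max(truth.sum(), 1)
            fpr = (call & ~truth).sum() / max((~truth).sum(), 1)
            j = tpr - fpr                  # Youden's J statistic
            if best is None or j > best[0]:
                best = (j, t, tpr, fpr)
        return best[1]
    ```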

  18. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

  19. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
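    The region-weighted pooling idea can be sketched as follows, assuming pre-computed face and hand masks per frame and a simple MSE distortion measure; the weights and the distortion measure are placeholders rather than the authors' calibrated metric.

    ```python
    # Hedged sketch: pool distortion in face and hand regions, weighted more heavily
    # than the background, into one score per sequence (higher means more intelligible).
    import numpy as np

    def region_mse(ref, dist, mask):
        diff = (ref - dist) ** 2
        return diff[mask].mean() if mask.any() else 0.0

    def asl_intelligibility_score(ref_frames, dist_frames, face_masks, hand_masks,
                                  w_face=0.6, w_hand=0.4):
        scores = []
        for ref, dist, fm, hm in zip(ref_frames, dist_frames, face_masks, hand_masks):
            scores.append(w_face * region_mse(ref, dist, fm) + w_hand * region_mse(ref, dist, hm))
        return -np.mean(scores)   # negated distortion: less distortion, higher score
    ```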

  20. Quantifying the Metrics That Characterize Safety Culture of Three Engineered Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Julie; Ernesti, Mary; Tokuhiro, Akira

    2002-07-01

    With potential energy shortages and increasing electricity demand, the nuclear energy option is being reconsidered in the United States. Public opinion will have a considerable voice in policy decisions that will 'road-map' the future of nuclear energy in this country. This report is an extension of the last author's work on the 'safety culture' associated with three engineered systems (automobiles, commercial airplanes, and nuclear power plants) in Japan and the United States. Safety culture, in brief, is defined as a specifically developed culture based on societal and individual interpretations of the balance of real, perceived, and imagined risks versus the benefits drawn from utilizing a given engineered system. The method of analysis is a modified scale analysis, with two fundamental Eigen-metrics, time- (t) and number-scales (N), that describe both engineered systems and human factors. The scale analysis approach is appropriate because human perception of risk, perception of benefit, and level of (technological) acceptance are inherently subjective, therefore 'fuzzy', and rarely quantifiable in exact magnitude. Perception of risk, expressed in terms of the psychometric factors 'dread risk' and 'unknown risk', contains both time- and number-scale elements. Various engineering system accidents with fatalities, as reported by the mass media, are characterized by t and N and are presented in this work using the scale analysis method. We contend that level of acceptance implies a perception of benefit at least two orders of magnitude larger than the perception of risk. The 'amplification' influence of mass media is also deduced to be 100- to 1000-fold the actual number of fatalities/serious injuries in a nuclear-related accident. (authors)

  1. Efficiently Selecting the Best Web Services

    NASA Astrophysics Data System (ADS)

    Goncalves, Marlene; Vidal, Maria-Esther; Regalado, Alfredo; Yacoubi Ayadi, Nadia

    Emerging technologies and linking data initiatives have motivated the publication of a large number of datasets and provide the basis for publishing Web services and tools to manage the available data. This wealth of resources opens a world of possibilities to satisfy user requests. However, Web services may have similar functionality but different performance; therefore, it is necessary to identify, among the Web services that satisfy a user request, the ones with the best quality. In this paper we propose a hybrid approach that combines reasoning tasks with ranking techniques to select the Web services that best implement a user request. Web service functionality is described in terms of input and output attributes annotated with existing ontologies, non-functional properties are represented as Quality of Service (QoS) parameters, and user requests correspond to conjunctive queries whose sub-goals impose restrictions on the functionality and quality of the services to be selected. The ontology annotations are used in different reasoning tasks to infer implicit service properties and to augment the size of the service search space. Furthermore, QoS parameters are considered by a ranking metric to classify the services according to how well they meet a user's non-functional condition. We assume that all the QoS parameters of the non-functional condition are equally important and apply the Top-k Skyline approach to select the k services that best meet this condition. Our proposal relies on a two-fold solution: a deductive engine that performs different reasoning tasks to discover the services that satisfy the requested functionality, and an efficient implementation of the Top-k Skyline approach to compute the top-k services that meet the majority of the QoS constraints. Our Top-k Skyline solution exploits the properties of the Skyline Frequency metric and identifies the top-k services by analyzing only a subset of the services that meet the non-functional condition. We report on the effects of the proposed reasoning tasks, the quality of the top-k services selected by the ranking metric, and the performance of the proposed ranking techniques. Our results suggest that the number of candidate services can be augmented by up to two orders of magnitude. In addition, our ranking techniques are able to identify services that have the best values in at least half of the QoS parameters, while performance is improved.
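    A minimal sketch of the selection step, assuming every QoS parameter is oriented so that larger is better: compute the skyline of non-dominated services, then keep the k skyline services that are best in the most QoS dimensions. This is a simplification of the Skyline Frequency ranking used in the paper, not its implementation.

    ```python
    # Hypothetical Top-k selection over a QoS matrix (rows = services, columns = QoS parameters).
    import numpy as np

    def dominates(a, b):
        """a dominates b if a is at least as good everywhere and strictly better somewhere."""
        return np.all(a >= b) and np.any(a > b)

    def top_k_skyline(qos, k):
        n = qos.shape[0]
        skyline = [i for i in range(n)
                   if not any(dominates(qos[j], qos[i]) for j in range(n) if j != i)]
        best_per_dim = qos.max(axis=0)
        score = [(np.sum(qos[i] == best_per_dim), i) for i in skyline]
        return [i for _, i in sorted(score, reverse=True)[:k]]
    ```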

  2. Comparison of normalized gain and Cohen's d for analyzing gains on concept inventories

    NASA Astrophysics Data System (ADS)

    Nissen, Jayson M.; Talbot, Robert M.; Nasim Thompson, Amreen; Van Dusen, Ben

    2018-06-01

    Measuring student learning is a complicated but necessary task for understanding the effectiveness of instruction and issues of equity in college science, technology, engineering, and mathematics (STEM) courses. Our investigation focused on the implications on claims about student learning that result from choosing between one of two commonly used metrics for analyzing shifts in concept inventories. The metrics are normalized gain (g ), which is the most common method used in physics education research and other discipline based education research fields, and Cohen's d , which is broadly used in education research and many other fields. Data for the analyses came from the Learning About STEM Student Outcomes (LASSO) database and included test scores from 4551 students on physics, chemistry, biology, and math concept inventories from 89 courses at 17 institutions from across the United States. We compared the two metrics across all the concept inventories. The results showed that the two metrics lead to different inferences about student learning and equity due to the finding that g is biased in favor of high pretest populations. We discuss recommendations for the analysis and reporting of findings on student learning data.
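    For readers comparing the two metrics, the sketch below computes both from pretest and posttest score arrays using their standard definitions; the exact variants used in the study (e.g., course-level versus student-level gains) are an assumption here.

    ```python
    # Standard-definition sketch of normalized gain g and Cohen's d.
    import numpy as np

    def normalized_gain(pre, post, max_score=100.0):
        """g = (mean post - mean pre) / (max score - mean pre)."""
        pre_m, post_m = np.mean(pre), np.mean(post)
        return (post_m - pre_m) / (max_score - pre_m)

    def cohens_d(pre, post):
        """d = (mean post - mean pre) / pooled standard deviation."""
        n1, n2 = len(pre), len(post)
        pooled_var = ((n1 - 1) * np.var(pre, ddof=1) +
                      (n2 - 1) * np.var(post, ddof=1)) / (n1 + n2 - 2)
        return (np.mean(post) - np.mean(pre)) / np.sqrt(pooled_var)
    ```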

  3. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    PubMed

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
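
    As a rough illustration of the "learned range" idea described above, the sketch below flags a new QC value against a band built from the median and median absolute deviation of past runs. The ±3 band, the 1.4826 scaling constant, and the example peak widths are assumptions for illustration; SIMPATIQCO's actual rules are not specified in the abstract and may differ.

        # Hedged sketch: flag a QC metric value against a robust range learned from history.
        from statistics import median

        def robust_range(history, k=3.0):
            """Return (low, high) as median +/- k * scaled MAD of past values."""
            med = median(history)
            mad = median(abs(x - med) for x in history)
            spread = 1.4826 * mad  # scales MAD to approximate a standard deviation
            return med - k * spread, med + k * spread

        def flag(value, history):
            low, high = robust_range(history)
            return "ok" if low <= value <= high else "check instrument"

        # Hypothetical chromatographic peak widths (seconds) from previous LC runs.
        past_peak_widths = [18.2, 17.9, 18.5, 18.1, 18.4, 18.0, 17.8]
        print(flag(24.3, past_peak_widths))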

  4. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC–MS Performance Metrics on Orbitrap Instruments

    PubMed Central

    2012-01-01

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC–MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge. PMID:23088386

  5. Quantification of three-dimensional cell-mediated collagen remodeling using graph theory.

    PubMed

    Bilgin, Cemal Cagatay; Lund, Amanda W; Can, Ali; Plopper, George E; Yener, Bülent

    2010-09-30

    Cell cooperation is a critical event during tissue development. We present the first precise metrics to quantify the interaction between mesenchymal stem cells (MSCs) and the extracellular matrix (ECM). In particular, we describe the cooperative collagen alignment process with respect to the spatio-temporal organization and function of mesenchymal stem cells in three dimensions. We defined two precise metrics, the Collagen Alignment Index and the Cell Dissatisfaction Level, for quantitatively tracking type I collagen fibrillogenesis and remodeling by mesenchymal stem cells over time. Computation of these metrics was based on graph theory and vector calculus. The cells and their three-dimensional type I collagen microenvironment were modeled by three-dimensional cell-graphs, and collagen fiber organization was calculated from gradient vectors. With the enhancement of mesenchymal stem cell differentiation, acceleration through different phases was quantitatively demonstrated. The phases were clustered in a statistically significant manner based on collagen organization, with late phases of remodeling by untreated cells clustering strongly with early phases of remodeling by differentiating cells. The experiments were repeated three times to conclude that the metrics could successfully identify critical phases of collagen remodeling that were dependent upon cooperativity within the cell population. Definition of early metrics that are able to predict long-term functionality by linking engineered tissue structure to function is an important step toward optimizing biomaterials for the purposes of regenerative medicine.
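
    The cell-graph idea underlying these metrics can be sketched very roughly: nodes are cell positions in 3D, edges link cells closer than a distance threshold, and simple graph features are then tracked over time. The threshold, the average-degree feature, and the coordinates below are assumptions; the paper's Collagen Alignment Index additionally uses gradient vectors of the collagen images, which this sketch does not attempt.

        # Rough cell-graph sketch: link cells within a distance threshold and compute a graph feature.
        import math

        def cell_graph(positions, link_distance):
            """Return an adjacency list linking cells within link_distance (3D)."""
            edges = {i: set() for i in range(len(positions))}
            for i, p in enumerate(positions):
                for j, q in enumerate(positions):
                    if i < j and math.dist(p, q) <= link_distance:
                        edges[i].add(j)
                        edges[j].add(i)
            return edges

        def average_degree(edges):
            return sum(len(nbrs) for nbrs in edges.values()) / len(edges)

        # Hypothetical 3D cell centroids (micrometers).
        cells = [(0, 0, 0), (10, 2, 1), (11, 3, 0), (40, 40, 40), (42, 41, 39)]
        print(average_degree(cell_graph(cells, link_distance=15.0)))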

  6. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

    IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
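
    For context, the Standard 1366 classification being evaluated is commonly described as a "2.5 beta" test: a day is a major event day when its SAIDI exceeds a threshold derived from the logarithm of the historical daily SAIDI distribution. The sketch below follows that common description with hypothetical data; the alternative specifications examined in the paper vary pieces of exactly this calculation.

        # Sketch of the "2.5 beta" test: T_MED = exp(alpha + 2.5 * beta), where alpha and beta
        # are the mean and standard deviation of ln(daily SAIDI) over the assessment history.
        import math
        from statistics import mean, stdev

        def major_event_days(daily_saidi, history):
            """Return the days classified as major event days."""
            logs = [math.log(s) for s in history if s > 0]  # zero-SAIDI days are excluded
            alpha, beta = mean(logs), stdev(logs)
            t_med = math.exp(alpha + 2.5 * beta)
            return [day for day, s in daily_saidi.items() if s > t_med]

        # Hypothetical daily SAIDI values (minutes per customer).
        history = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 0.7, 1.0, 2.2] * 50
        today = {"2015-06-01": 1.3, "2015-06-02": 45.0}
        print(major_event_days(today, history))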

  7. Performance metrics for Inertial Confinement Fusion implosions: aspects of the technical framework for measuring progress in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, B K; Glenzer, S; Edwards, M J

    The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.

  8. Thermodynamic efficiency of nonimaging concentrators

    NASA Astrophysics Data System (ADS)

    Shatz, Narkis; Bortz, John; Winston, Roland

    2009-08-01

    The purpose of a nonimaging concentrator is to transfer maximal flux from the phase space of a source to that of a target. A concentrator's performance can be expressed relative to a thermodynamic reference. We discuss consequences of Fermat's principle of geometrical optics. We review étendue dilution and optical loss mechanisms associated with nonimaging concentrators, especially for the photovoltaic (PV) role. We introduce the concept of optical thermodynamic efficiency which is a performance metric combining the first and second laws of thermodynamics. The optical thermodynamic efficiency is a comprehensive metric that takes into account all loss mechanisms associated with transferring flux from the source to the target phase space, which may include losses due to inadequate design, non-ideal materials, fabrication errors, and less than maximal concentration. As such, this metric is a gold standard for evaluating the performance of nonimaging concentrators. Examples are provided to illustrate the use of this new metric. In particular we discuss concentrating PV systems for solar power applications.

  9. Role of quality of service metrics in visual target acquisition and tracking in resource constrained environments

    NASA Astrophysics Data System (ADS)

    Anderson, Monica; David, Phillip

    2007-04-01

    Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. This system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.

  10. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-07-08

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Picture and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. Displays tested were flat-panel liquid crystal displays that ranged from less than 1 year to 10 years of use and had been built by a wide variety of manufacturers. The mean values of Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors, including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray tube displays to flat-panel displays may have rendered some of the tests inappropriate for modern use. © 2016 The Authors.
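
    One of the quantitative checks reported above, luminance uniformity, reduces to a simple percent-deviation calculation over a handful of luminance readings. The sketch below follows the familiar TG-18-style formula; the five readings and the 30% action level are assumptions for illustration, not values from this study.

        # Hedged sketch of a luminance uniformity check from point measurements of a uniform pattern.
        def luminance_deviation(readings_cd_m2):
            """Maximum luminance deviation (%) = 200 * (Lmax - Lmin) / (Lmax + Lmin)."""
            l_max, l_min = max(readings_cd_m2), min(readings_cd_m2)
            return 200.0 * (l_max - l_min) / (l_max + l_min)

        # Hypothetical readings (cd/m^2) at the center and four corners of a display.
        readings = [134.0, 121.5, 125.2, 119.8, 128.7]
        dev = luminance_deviation(readings)
        print(f"{dev:.1f}% {'within' if dev <= 30.0 else 'exceeds'} the assumed action level")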

  11. Applying Technology Ranking and Systems Engineering in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Luna, Bernadette (Technical Monitor)

    2000-01-01

    According to the Advanced Life Support (ALS) Program Plan, the Systems Modeling and Analysis Project (SMAP) has two important tasks: 1) prioritizing investments in ALS Research and Technology Development (R&TD), and 2) guiding the evolution of ALS systems. Investments could be prioritized simply by independently ranking different technologies, but we should also consider a technology's impact on system design. Guiding future ALS systems will require SMAP to consider many aspects of systems engineering. R&TD investments can be prioritized using familiar methods for ranking technology. The first step is gathering data on technology performance, safety, readiness level, and cost. Then the technologies are ranked using metrics or by decision analysis using net present economic value. The R&TD portfolio can be optimized to provide the maximum expected payoff in the face of uncertain future events. But more is needed. The optimum ALS system cannot be designed simply by selecting the best technology for each predefined subsystem. Incorporating a new technology, such as food plants, can change the specifications of other subsystems, such as air regeneration. Systems must be designed top-down starting from system objectives, not bottom-up from selected technologies. The familiar top-down systems engineering process includes defining mission objectives, mission design, system specification, technology analysis, preliminary design, and detail design. Technology selection is only one part of systems analysis and engineering, and it is strongly related to the subsystem definitions. ALS systems should be designed using top-down systems engineering. R&TD technology selection should consider how the technology affects ALS system design. Technology ranking is useful but it is only a small part of systems engineering.
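
    The ranking step described above amounts to scoring each candidate technology against weighted criteria before any system-level analysis. The sketch below is a deliberately simplified stand-in for that decision analysis; the attribute names, weights, and scores are hypothetical and not taken from the ALS program.

        # Simplified technology ranking by weighted attribute scores (higher is better).
        def rank_technologies(candidates, weights):
            """Sort candidates by the weighted sum of their normalized attributes."""
            def score(attrs):
                return sum(weights[k] * attrs[k] for k in weights)
            return sorted(candidates, key=lambda c: score(c["attrs"]), reverse=True)

        weights = {"performance": 0.4, "readiness": 0.3, "safety": 0.2, "cost_benefit": 0.1}
        candidates = [
            {"name": "air regeneration option A",
             "attrs": {"performance": 0.8, "readiness": 0.9, "safety": 0.7, "cost_benefit": 0.5}},
            {"name": "food plant option B",
             "attrs": {"performance": 0.9, "readiness": 0.4, "safety": 0.8, "cost_benefit": 0.6}},
        ]
        print([c["name"] for c in rank_technologies(candidates, weights)])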

  12. NASA's Space Launch System: Development and Progress

    NASA Technical Reports Server (NTRS)

    Honeycutt, John; Lyles, Garry

    2016-01-01

    NASA is embarked on a new era of space exploration that will lead to new capabilities, new destinations, and new discoveries by both human and robotic explorers. Today, the International Space Station (ISS), supported by NASA's commercial partners, and robotic probes, are yielding knowledge that will help make this exploration possible. NASA is developing both the Orion crew vehicle and the Space Launch System (SLS) that will carry out a series of increasingly challenging missions that will eventually lead to human exploration of Mars. This paper will discuss the development and progress on the SLS. The SLS architecture was designed to be safe, affordable, and sustainable. The current configuration is the result of literally thousands of trade studies involving cost, performance, mission requirements, and other metrics. The initial configuration of SLS, designated Block 1, will launch a minimum of 70 metric tons (t) into low Earth orbit - significantly greater capability than any current launch vehicle. It is designed to evolve to a capability of 130 t through the use of upgraded main engines, advanced boosters, and a new upper stage. With more payload mass and volume capability than any rocket in history, SLS offers mission planners larger payloads, faster trip times, simpler design, shorter design cycles, and greater opportunity for mission success. Since the program was officially created in fall 2011, it has made significant progress toward first launch readiness of the Block 1 vehicle in 2018. Every major element of SLS continued to make significant progress in 2015. The Boosters element fired Qualification Motor 1 (QM-1) in March 2015, to test the 5-segment motor, including new insulation, joint, and propellant grain designs. The Stages element marked the completion of more than 70 major components of test article and flight core stage tanks. The Liquid Engines element conducted seven test firings of an RS-25 engine under SLS conditions. The Spacecraft/Payload Integration and Evolution element marked completion of the upper stage test article. Major work continues in 2016 as the program continues both flight and development RS-25 engine testing, begins welding test article and flight core stage tanks, completes stage adapter manufacturing, and test fires the second booster qualification motor. This paper will discuss the program's key accomplishments to date and the challenging work ahead for what will be the world's most capable launch vehicle.

  13. NASA's SPACE LAUNCH SYSTEM: Development and Progress

    NASA Technical Reports Server (NTRS)

    Honeycutt, John; Lyles, Garry

    2016-01-01

    NASA is embarked on a new era of space exploration that will lead to new capabilities, new destinations, and new discoveries by both human and robotic explorers. Today, the International Space Station (ISS) and robotic probes are yielding knowledge that will help make this exploration possible. NASA is developing both the Orion crew vehicle and the Space Launch System (SLS) (Figure 1), that will carry out a series of increasingly challenging missions leading to human exploration of Mars. This paper will discuss the development and progress on the SLS. The SLS architecture was designed to be safe, affordable, and sustainable. The current configuration is the result of literally thousands of trade studies involving cost, performance, mission requirements, and other metrics. The initial configuration of SLS, designated Block 1, will launch a minimum of 70 metric tons (mT) (154,324 pounds) into low Earth orbit - significantly greater capability than any current launch vehicle. It is designed to evolve to a capability of 130 mT (286,601 pounds) through the use of upgraded main engines, advanced boosters, and a new upper stage. With more payload mass and volume capability than any existing rocket, SLS offers mission planners larger payloads, faster trip times, simpler design, shorter design cycles, and greater opportunity for mission success. Since the program was officially created in fall 2011, it has made significant progress toward launch readiness in 2018. Every major element of SLS continued to make significant progress in 2015. Engineers fired Qualification Motor 1 (QM-1) in March 2015 to test the 5-segment motor, including new insulation, joint, and propellant grain designs. More than 70 major components of test article and flight hardware for the Core Stage have been manufactured. Seven test firings have been completed with an RS-25 engine under SLS operating conditions. The test article for the Interim Cryogenic Propulsion Stage (ICPS) has also been completed. Major work continues in 2016 as the program continues both flight and development RS-25 engine testing, begins welding test article and flight core stage tanks, completes stage adapter manufacturing, and test fires the second booster qualification motor. This paper will discuss the program's key accomplishments to date and the challenging work ahead for what will be the world's most capable launch vehicle.

  14. A novel patient-centered "intention-to-treat" metric of U.S. lung transplant center performance.

    PubMed

    Maldonado, Dawn A; RoyChoudhury, Arindam; Lederer, David J

    2018-01-01

    Despite the importance of pretransplantation outcomes, 1-year posttransplantation survival is typically considered the primary metric of lung transplant center performance in the United States. We designed a novel lung transplant center performance metric that incorporates both pre- and posttransplantation survival time. We performed an ecologic study of 12 187 lung transplant candidates listed at 56 U.S. lung transplant centers between 2006 and 2012. We calculated an "intention-to-treat" survival (ITTS) metric as the percentage of waiting list candidates surviving at least 1 year after transplantation. The median center-level 1-year posttransplantation survival rate was 84.1%, and the median center-level ITTS was 66.9% (mean absolute difference 19.6%, 95% limits of agreement 4.3 to 35.1%). All but 10 centers had ITTS values that were significantly lower than 1-year posttransplantation survival rates. Observed ITTS was significantly lower than expected ITTS for 7 centers. These data show that one third of lung transplant candidates do not survive 1 year after transplantation, and that 12% of centers have lower than expected ITTS. An "intention-to-treat" survival metric may provide a more realistic expectation of patient outcomes at transplant centers and may be of value to transplant centers and policymakers. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
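
    The contrast drawn in this record between the conventional 1-year posttransplantation survival rate and the proposed ITTS metric comes down to a change of denominator, as the minimal sketch below shows. The candidate records and field names are hypothetical.

        # Minimal sketch of the two center-level metrics contrasted in the abstract.
        def one_year_posttx_survival(candidates):
            """Conventional metric: share of transplanted patients alive 1 year after transplantation."""
            tx = [c for c in candidates if c["transplanted"]]
            return 100.0 * sum(c["alive_1yr_post_tx"] for c in tx) / len(tx)

        def itts(candidates):
            """Intention-to-treat survival: share of all listed candidates who were
            transplanted and alive 1 year after transplantation."""
            alive = sum(c["transplanted"] and c["alive_1yr_post_tx"] for c in candidates)
            return 100.0 * alive / len(candidates)

        waitlist = [
            {"transplanted": True,  "alive_1yr_post_tx": True},
            {"transplanted": True,  "alive_1yr_post_tx": False},
            {"transplanted": False, "alive_1yr_post_tx": False},  # died or delisted while waiting
        ]
        print(one_year_posttx_survival(waitlist), itts(waitlist))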

  15. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases these choices introduce by measuring the performance of an existing anomaly detection algorithm.
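
    The point about evaluation schemes can be made concrete with a small sketch: whether a proposed detection counts as true depends on a matching rule, here an intersection-over-union threshold, and precision and recall follow from that decision. The 0.5 threshold and the example boxes are assumptions, not values from the paper.

        # Hedged sketch: an IoU-based matching scheme deciding true vs. false detections.
        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            if inter == 0:
                return 0.0
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter)

        def precision_recall(detections, truths, thresh=0.5):
            """Count a detection as true only if it matches an unused ground-truth box above thresh."""
            matched, tp = set(), 0
            for d in detections:
                hit = next((i for i, t in enumerate(truths)
                            if i not in matched and iou(d, t) >= thresh), None)
                if hit is not None:
                    matched.add(hit)
                    tp += 1
            return tp / len(detections), tp / len(truths)

        # Hypothetical anomaly boxes.
        dets = [(10, 10, 50, 50), (200, 200, 240, 240)]
        gts = [(12, 8, 52, 48), (100, 100, 140, 140)]
        print(precision_recall(dets, gts))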

  16. Managing Reliability in the 21st Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellin, T.A.

    1998-11-23

    The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance, and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization's business objectives.

  17. NERC Policy 10: Measurement of two generation and load balancing IOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spicer, P.J.; Galow, G.G.

    1999-11-01

    Policy 10 will describe specific standards and metrics for most of the reliability functions described in the Interconnected Operations Services Working Group (IOS WG) report. The purpose of this paper is to discuss, in detail, the proposed metrics for two generation and load balancing IOSs: Regulation; Load Following. For purposes of this paper, metrics include both measurement and performance evaluation. The measurement methods discussed are included in the current draft of the proposed Policy 10. The performance evaluation method discussed is offered by the authors for consideration by the IOS ITF (Implementation Task Force) for inclusion into Policy 10.

  18. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. This work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
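
    As a small illustration of the dependence measure at the heart of this work, the sketch below estimates mutual information between two sampled variables from a joint histogram; a quadratic relationship gives near-zero covariance but clearly positive mutual information. The bin count and the samples are assumptions; the thesis uses such scores as link weights for force-based clustering.

        # Illustrative mutual information estimate from equal-width binning of paired samples.
        import math
        from collections import Counter

        def mutual_information(x, y, bins=10):
            """I(X;Y) in nats estimated from a binned joint histogram."""
            def bin_index(v, lo, hi):
                return min(int((v - lo) / (hi - lo) * bins), bins - 1)
            bx = [bin_index(v, min(x), max(x)) for v in x]
            by = [bin_index(v, min(y), max(y)) for v in y]
            n = len(x)
            pxy = Counter(zip(bx, by))
            px, py = Counter(bx), Counter(by)
            return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
                       for (i, j), c in pxy.items())

        # Hypothetical samples with a nonlinear (quadratic) dependence that covariance misses.
        xs = [i / 50.0 - 1.0 for i in range(100)]
        ys = [v * v for v in xs]
        print(mutual_information(xs, ys))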

  19. Airplane takeoff and landing performance monitoring system

    NASA Technical Reports Server (NTRS)

    Middleton, David B. (Inventor); Srivatsan, Raghavachari (Inventor); Person, Jr., Lee H. (Inventor)

    1991-01-01

    The invention is a real-time takeoff and landing performance monitoring system for an aircraft which provides a pilot with graphic and metric information to assist in decisions related to achieving rotation speed (V.sub.R) within the safe zone of a runway, or stopping the aircraft on the runway after landing or take-off abort. The system processes information in two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data. The real-time segment uses the scheduled performance data, runway length data and transducer measured parameters to monitor the performance of the airplane throughout the takeoff roll. Airplane and engine performance deficiencies are detected and annunciated. A novel and important feature of this segment is that it updates the estimated runway rolling friction coefficient. Airplane performance predictions also reflect changes in head wind occurring as the takeoff roll progresses. The system provides a head-down display and a head-up display. The head-up display is projected onto a partially reflective transparent surface through which the pilot views the runway. By comparing the present performance of the airplane with a predicted nominal performance based upon given conditions, performance deficiencies are detected by the system.

  20. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics, including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and to the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
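
    A minimal sketch of the structured removal strategies studied here, assuming the NetworkX library: repeatedly delete the most central remaining node and track a simple proxy for network performance (largest connected component size). The small-world test graph and the component-size proxy are assumptions; the paper's four performance metrics and the September 11 network data are richer than this.

        # Targeted node-removal loop driven by a chosen centrality measure (requires networkx).
        import networkx as nx

        def removal_curve(G, centrality=nx.betweenness_centrality, steps=5):
            """Return largest-component sizes after each targeted removal."""
            H = G.copy()
            sizes = []
            for _ in range(steps):
                scores = centrality(H)
                target = max(scores, key=scores.get)  # most central remaining node
                H.remove_node(target)
                largest = max((len(c) for c in nx.connected_components(H)), default=0)
                sizes.append(largest)
            return sizes

        # Hypothetical small-world network standing in for the covert-network data.
        G = nx.watts_strogatz_graph(n=60, k=4, p=0.1, seed=1)
        print(removal_curve(G, nx.degree_centrality))
        print(removal_curve(G, nx.betweenness_centrality))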
