DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrissey, Elmer; O'Donnell, James; Keane, Marcus
2004-03-29
Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.
LIFE CYCLE DESIGN OF AMORPHOUS SILICON PHOTOVOLTAIC MODULES
The life cycle design framework was applied to photovoltaic module design. The primary objective of this project was to develop and evaluate design metrics for assessing and guiding the improvement of PV product systems. Two metrics were used to assess life cycle energy perform...
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
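A minimal numerical sketch of the two-level approximation described above is given below; the gap cycle length, gap duration, constraint length, and the two error levels are illustrative assumptions, not values from the paper:

    # Sketch of the two-level approximation: the decoder error rate is modeled as a
    # square wave whose degraded segment is about (K - 1) bits shorter than the gap.
    # All numbers are illustrative.

    def two_level_error_profile(cycle_len, gap_start, gap_len, K, p_gap, p_nominal):
        """Return a per-bit error-probability profile for one gap cycle.

        cycle_len : bits in one full gap cycle
        gap_start : first bit index of the low-SNR (gapped) portion
        gap_len   : duration of the gap in bits
        K         : decoder constraint length
        p_gap     : predicted error rate during the degraded portion
        p_nominal : predicted error rate outside the gap
        """
        # The model predicts the degraded portion of the decoder error cycle is
        # roughly K - 1 bits shorter than the actual gap.
        effective_len = max(gap_len - (K - 1), 0)
        profile = []
        for bit in range(cycle_len):
            in_gap = gap_start <= bit < gap_start + effective_len
            profile.append(p_gap if in_gap else p_nominal)
        return profile

    if __name__ == "__main__":
        # Illustrative values only: a 1000-bit cycle with a 200-bit gap, K = 7.
        prof = two_level_error_profile(1000, 400, 200, K=7, p_gap=5e-2, p_nominal=1e-5)
        degraded = sum(1 for p in prof if p > 1e-4)
        print(f"degraded bits per cycle: {degraded}")  # expect 200 - (7 - 1) = 194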
Target detection cycle criteria when using the targeting task performance metric
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Jacobs, Eddie L.; Vollmerhausen, Richard H.
2004-12-01
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed a new target acquisition metric to better predict the performance of modern electro-optical imagers. The TTP metric replaces the Johnson criteria. One problem with transitioning to the new model is that the difficulty of searching in a terrain has traditionally been quantified by an "N50." The N50 is the number of Johnson criteria cycles needed for the observer to detect the target half the time, assuming that the observer is not time limited. In order to make use of this empirical database, a conversion must be found relating Johnson cycles for detection to TTP cycles for detection. This paper describes how that relationship is established. We have found that the relationship between Johnson and TTP cycles is 1:2.7 for the recognition and identification tasks.
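A toy illustration of the conversion the paper establishes, treating the reported 1:2.7 ratio as a simple multiplicative factor applied to a hypothetical legacy N50 (the example N50 value is invented):

    # Sketch of converting an empirically established Johnson-criteria N50 into an
    # equivalent TTP-cycle criterion using a fixed scale factor. The 2.7 factor is
    # taken from the abstract; treating it as a plain multiplier is an assumption.

    JOHNSON_TO_TTP = 2.7  # TTP cycles per Johnson cycle (per the reported 1:2.7 ratio)

    def ttp_n50_from_johnson(johnson_n50: float, factor: float = JOHNSON_TO_TTP) -> float:
        """Convert a Johnson-criteria N50 to a TTP-cycle N50."""
        return johnson_n50 * factor

    if __name__ == "__main__":
        # Hypothetical example: a legacy search study reporting N50 = 2.0 Johnson cycles.
        print(ttp_n50_from_johnson(2.0))  # -> 5.4 TTP cycles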
Multidisciplinary life cycle metrics and tools for green buildings.
Helgeson, Jennifer F; Lippiatt, Barbara C
2009-07-01
Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense and 2) measures of cost effectiveness often are not based on standard methods for measuring economic worth. Yet, the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on the energy efficiency of different heating, ventilation, and air conditioning (HVAC) technologies. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.
Brown, Nicholas R.; Powers, Jeffrey J.; Feng, B.; ...
2015-05-21
This paper presents analyses of possible reactor representations of a nuclear fuel cycle with continuous recycling of thorium and produced uranium (mostly U-233) with thorium-only feed. The analysis was performed in the context of a U.S. Department of Energy effort to develop a compendium of informative nuclear fuel cycle performance data. The objective of this paper is to determine whether intermediate spectrum systems, having a majority of fission events occurring with incident neutron energies between 1 eV and 10^5 eV, perform as well as fast spectrum systems in this fuel cycle. The intermediate spectrum options analyzed include tight lattice heavy or light water-cooled reactors, continuously refueled molten salt reactors, and a sodium-cooled reactor with hydride fuel. All options were modeled in reactor physics codes to calculate their lattice physics, spectrum characteristics, and fuel compositions over time. Based on these results, detailed metrics were calculated to compare the fuel cycle performance. These metrics include waste management and resource utilization, and are binned to accommodate uncertainties. The performance of the intermediate systems for this self-sustaining thorium fuel cycle was similar to a representative fast spectrum system. However, the number of fission neutrons emitted per neutron absorbed limits performance in intermediate spectrum systems.
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.
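As a hedged sketch of this kind of validation, the snippet below fits a logistic regression of fault-proneness on hypothetical Chidamber-Kemerer metric values for a handful of classes; the data, the use of scikit-learn, and the single-model fit are illustrative assumptions rather than the study's actual analysis:

    # Hypothetical sketch: validate OO design metrics as fault-proneness predictors
    # by fitting a logistic regression per-class. Metric values and labels are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: WMC, DIT, NOC, CBO, RFC, LCOM (Chidamber-Kemerer metrics per class)
    X = np.array([
        [12, 2, 0,  4,  30, 10],
        [40, 5, 1, 14,  90, 55],
        [ 7, 1, 0,  2,  15,  3],
        [33, 4, 2, 11,  70, 40],
        [18, 3, 0,  6,  38, 12],
        [51, 6, 3, 20, 120, 80],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = at least one fault found in the class

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("coefficients per metric:", model.coef_)
    print("P(fault-prone) for a new class:",
          model.predict_proba([[25, 3, 1, 9, 55, 25]])[0, 1])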
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user-defined parameters. Finally, several possibilities for future work in this area of research are presented.
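A toy sketch of the redundancy-versus-reliability-improvement trade mentioned above for a single component; the reliability values and the assumption of equal cost for the two options are hypothetical:

    # Toy sketch contrasting the two improvement strategies for one component:
    # adding a redundant (parallel) unit versus improving the unit's own reliability.
    # All reliability and cost numbers are hypothetical.

    def parallel_reliability(r: float, n: int) -> float:
        """Reliability of n identical units in parallel (any one unit suffices)."""
        return 1.0 - (1.0 - r) ** n

    if __name__ == "__main__":
        r_base = 0.90
        # Option A: add one redundant unit for the available budget.
        r_redundant = parallel_reliability(r_base, 2)
        # Option B: spend the same budget improving the unit itself (assumed gain).
        r_improved = 0.97
        print(f"baseline        : {r_base:.3f}")
        print(f"add redundancy  : {r_redundant:.3f}")   # 0.990
        print(f"improve the unit: {r_improved:.3f}")
        # A real analysis would weigh these options by cost and by their effect on
        # system-level life cycle metrics, not just component reliability.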
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The Advanced Life Support (ALS) project has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system. ESM indicates launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates, Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass, the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and easiest to explain.
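A hedged sketch of how three of these mass-based metrics (SM, EM, ESM) could be computed as simple linear combinations; the equivalency factors and example system values are placeholders rather than published ALS figures, and LCM is omitted:

    # Sketch of mass-based metrics written as linear combinations. The equivalency
    # factors (kg per m^3, kg per kW, kg per crew-hour) and the example system
    # values are placeholders, not ALS-published values.

    def system_mass(hardware_kg, logistics_kg):
        """SM: only the factors originally measured in mass."""
        return hardware_kg + logistics_kg

    def equivalent_mass(hardware_kg, logistics_kg, volume_m3, power_kw, cooling_kw,
                        kg_per_m3=50.0, kg_per_kw_power=100.0, kg_per_kw_cooling=60.0):
        """EM: hardware and logistics mass plus mass-equivalents of volume, power, cooling."""
        return (system_mass(hardware_kg, logistics_kg)
                + volume_m3 * kg_per_m3
                + power_kw * kg_per_kw_power
                + cooling_kw * kg_per_kw_cooling)

    def equivalent_system_mass(em_kg, crew_hours, kg_per_crew_hour=1.0):
        """ESM: EM plus a mass-equivalent charge for crew time (and possibly other costs)."""
        return em_kg + crew_hours * kg_per_crew_hour

    if __name__ == "__main__":
        em = equivalent_mass(hardware_kg=500, logistics_kg=200,
                             volume_m3=4.0, power_kw=2.5, cooling_kw=2.5)
        print("SM :", system_mass(500, 200))
        print("EM :", em)
        print("ESM:", equivalent_system_mass(em, crew_hours=120))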
Key metrics for HFIR HEU and LEU models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Betzler, Benjamin R.; Chandler, David
This report compares key metrics for two fuel design models of the High Flux Isotope Reactor (HFIR). The first model represents the highly enriched uranium (HEU) fuel currently in use at HFIR, and the second model considers a low-enriched uranium (LEU) interim design fuel. Except for the fuel region, the two models are consistent, and both include an experiment loading that is representative of HFIR's current operation. The considered key metrics are the neutron flux at the cold source moderator vessel, the mass of 252Cf produced in the flux trap target region as a function of cycle time, the fast neutron flux at locations of interest for material irradiation experiments, and the reactor cycle length. These key metrics are a small subset of the overall HFIR performance and safety metrics. They were defined as a means of capturing data essential for HFIR's primary missions, for use in optimization studies assessing the impact of HFIR's conversion from HEU fuel to different types of LEU fuel designs.
Approaches to Cycle Analysis and Performance Metrics
NASA Technical Reports Server (NTRS)
Parson, Daniel E.
2003-01-01
The following notes were prepared as part of an American Institute of Aeronautics and Astronautics (AIAA) sponsored short course entitled Air Breathing Pulse Detonation Engine (PDE) Technology. The course was presented in January of 2003, and again in July of 2004 at two different AIAA meetings. It was taught by seven instructors, each of whom provided information on particular areas of PDE research. These notes cover two areas. The first is titled Approaches to Cycle Analysis and Performance Metrics. Here, the various methods of cycle analysis are introduced. These range from algebraic, thermodynamic equations, to single and multi-dimensional Computational Fluid Dynamic (CFD) solutions. Also discussed are the various means by which performance is measured, and how these are applied in a device which is fundamentally unsteady. The second topic covered is titled PDE Hybrid Applications. Here the concept of coupling a PDE to a conventional turbomachinery based engine is explored. Motivation for such a configuration is provided in the form of potential thermodynamic benefits. This is accompanied by a discussion of challenges to the technology.
Life cycle design metrics for energy generation technologies: Method, data, and case study
NASA Astrophysics Data System (ADS)
Cooper, Joyce; Lee, Seung-Jin; Elter, John; Boussu, Jeff; Boman, Sarah
A method to assist in the rapid preparation of Life Cycle Assessments of emerging energy generation technologies is presented and applied to distributed proton exchange membrane fuel cell systems. The method develops life cycle environmental design metrics and allows variations in hardware materials, transportation scenarios, assembly energy use, operating performance and consumables, and fuels and fuel production scenarios to be modeled and comparisons to competing systems to be made. Data and results are based on publicly available U.S. Life Cycle Assessment data sources and are formulated to allow the environmental impact weighting scheme to be specified. A case study evaluates improvements in efficiency and in materials recycling and compares distributed proton exchange membrane fuel cell systems to other distributed generation options. The results reveal the importance of sensitivity analysis and system efficiency in interpreting case studies.
ENVIRONMENTAL COMPARISON METRICS FOR LIFE CYCLE IMPACT ASSESSMENT AND PROCESS DESIGN
Metrics (potentials, potency factors, equivalency factors or characterization factors) are available to support the environmental comparison of alternatives in application domains like process design and product life-cycle assessment (LCA). These metrics typically provide relative...
Integrating automated support for a software management cycle into the TAME system
NASA Technical Reports Server (NTRS)
Sunazuka, Toshihiko; Basili, Victor R.
1989-01-01
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2014-09-30
Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs) ... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 ... DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs we see (Figure PROF) that for most of the execution time the
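A small sketch of the two ratios quoted above, computed from hardware-counter-style totals; the raw counts are invented for illustration:

    # Compute DP FLOPs per cycle and the vectorized fraction from counter totals.
    # All counts below are invented.

    def flops_per_cycle(total_dp_flops: float, total_cycles: float) -> float:
        return total_dp_flops / total_cycles

    def vectorization_fraction(vectorized_dp_flops: float, total_dp_flops: float) -> float:
        return vectorized_dp_flops / total_dp_flops

    if __name__ == "__main__":
        cycles = 5.0e9
        dp_flops = 1.0e9          # -> 0.2 DP FLOPs per cycle, as quoted above
        vec_dp_flops = 2.5e8
        print(f"DP FLOPs/cycle     : {flops_per_cycle(dp_flops, cycles):.2f}")
        print(f"vectorized fraction: {vectorization_fraction(vec_dp_flops, dp_flops):.2f}")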
Fuel Cycle Performance of Thermal Spectrum Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worrall, Andrew; Todosow, Michael
2016-01-01
Small modular reactors may offer potential benefits, such as enhanced operational flexibility. However, it is vital to understand the holistic impact of small modular reactors on the nuclear fuel cycle and fuel cycle performance. The focus of this paper is on the fuel cycle impacts of light water small modular reactors in a once-through fuel cycle with low-enriched uranium fuel. A key objective of this paper is to describe preliminary reactor core physics and fuel cycle analyses conducted in support of the U.S. Department of Energy Office of Nuclear Energy Fuel Cycle Options Campaign. Challenges with small modular reactors include: increased neutron leakage, fewer assemblies in the core (and therefore fewer degrees of freedom in the core design), complex enrichment and burnable absorber loadings, full power operation with inserted control rods, the potential for frequent load-following operation, and shortened core height. Each of these will impact the achievable discharge burn-up in the reactor and the fuel cycle performance. This paper summarizes the results of an expert elicitation focused on developing a list of the factors relevant to small modular reactor fuel, core, and operation that will impact fuel cycle performance. Preliminary scoping analyses were performed using a regulatory-grade reactor core simulator. The hypothetical light water small modular reactor considered in these preliminary scoping studies is a cartridge type one-batch core with 4.9% enrichment. Some core parameters, such as the size of the reactor and general assembly layout, are similar to an example small modular reactor concept from industry. The high-level issues identified and preliminary scoping calculations in this paper are intended to inform on potential fuel cycle impacts of one-batch thermal spectrum SMRs. In particular, this paper highlights the impact of increased neutron leakage and reduced number of batches on the achievable burn-up of the reactor. Fuel cycle performance metrics for a small modular reactor are compared to a conventional three-batch light water reactor in the following areas: nuclear waste management, environmental impact, and resource utilization. Metrics performance for a small modular reactor is degraded for mass of spent nuclear fuel and high-level waste disposed, mass of depleted uranium disposed, land use per energy generated, and carbon emissions per energy generated.
Park, Sung Wook; Brenneman, Michael; Cooke, William H; Cordova, Alberto; Fogt, Donovan
The purpose was to determine if heart rate (HR) and heart rate variability (HRV) responses would reflect anaerobic threshold (AT) using a discontinuous, incremental, cycle test. AT was determined by ventilatory threshold (VT). Cyclists (30.6±5.9y; 7 males, 8 females) completed a discontinuous cycle test consisting of 7 stages (6 min each with 3 min of rest between). Three stages were performed at power outputs (W) below those corresponding to a previously established AT, one at W corresponding to AT, and 3 at W above those corresponding to AT. The W at the intersection of the trend lines was considered each metric's "threshold". The averaged stage data for Ve, HR, and time- and frequency-domain HRV metrics were plotted versus W. The W at the "threshold" for the metrics of interest were compared using correlation analysis and paired-sample t-test. Overall, several heart rate-related parameters accurately reflected AT: significant correlations (p≤0.05) were observed between the AT W and the threshold W of the HR, mean RR interval (MRR), low- and high-frequency spectral energy (LF and HF, respectively), high-frequency peak (fHF), and HFxfHF metrics (i.e., MRRTW, etc.). Differences in HR or HRV metric threshold W and AT for all subjects were less than 14 W. The steady state data from discontinuous protocols may allow for a true indication of steady-state physiologic stress responses and corresponding W at AT, compared to continuous protocols using 1-2 min exercise stages.
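A sketch of the "intersection of trend lines" threshold estimate described above, assuming least-squares lines are fitted to the stages below and above a presumed breakpoint; the stage powers and heart rates are invented:

    # Fit one line to the sub-breakpoint stages and one to the supra-breakpoint
    # stages, then solve for the power output where the lines cross. Data are made up.
    import numpy as np

    def trendline_threshold(watts, metric, split_index):
        """Return the W where least-squares lines fit below/above split_index intersect."""
        w_lo, m_lo = watts[:split_index], metric[:split_index]
        w_hi, m_hi = watts[split_index:], metric[split_index:]
        a1, b1 = np.polyfit(w_lo, m_lo, 1)   # metric = a*W + b
        a2, b2 = np.polyfit(w_hi, m_hi, 1)
        return (b2 - b1) / (a1 - a2)

    if __name__ == "__main__":
        watts = np.array([100, 130, 160, 190, 220, 250, 280], dtype=float)
        hr    = np.array([110, 118, 126, 135, 150, 166, 182], dtype=float)  # bpm, invented
        # Assume the first four stages are at or below the breakpoint.
        print(f"HR threshold ~ {trendline_threshold(watts, hr, 4):.0f} W")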
Rechargeable Zinc Alkaline Anodes for Long-Cycle Energy Storage
Turney, Damon E.; Gallaway, Joshua W.; Yadav, Gautam G.; ...
2017-05-03
Zinc alkaline anodes command significant share of consumer battery markets and are a key technology for the emerging grid-scale battery market. Improved understanding of this electrode is required for long-cycle deployments at kWh and MWh scale due to strict requirements on performance, cost, and safety. For this article, we give a modern literature survey of zinc alkaline anodes with levelized performance metrics and also present an experimental assessment of leading formulations. Long-cycle materials characterization, performance metrics, and failure analysis are reported for over 25 unique anode formulations with up to 1500 cycles and ~1.5 years of shelf life per test. Statistical repeatability of these measurements is made for a baseline design (fewest additives) via 15 duplicates. Baseline design capacity density is 38 mAh per mL of anode volume, and lifetime throughput is 72 Ah per mL of anode volume. We then report identical measurements for anodes with improved material properties via additives and other perturbations, some of which achieve capacity density over 192 mAh per mL of anode volume and lifetime throughput of 190 Ah per mL of anode volume. Novel in operando X-ray microscopy of a cycling zinc paste anode reveals the formation of a nanoscale zinc material that cycles electrochemically and replaces the original anode structure over long-cycle life. Ex situ elemental mapping and other materials characterization suggest that the key physical processes are hydrogen evolution reaction (HER), growth of zinc oxide nanoscale material, concentration deficits of OH⁻ and Zn(OH)₄²⁻, and electrodeposition of Zn growths outside and through separator membranes.
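An arithmetic sketch of the two levelized anode metrics quoted above (capacity density and lifetime throughput), using an invented anode volume, capacity, and cycle count, and ignoring capacity fade:

    # Capacity density (mAh per mL of anode volume) and lifetime throughput
    # (Ah per mL of anode volume). Inputs are hypothetical.

    def capacity_density_mah_per_ml(capacity_mah, anode_volume_ml):
        """Capacity delivered per cycle per unit anode volume (mAh/mL)."""
        return capacity_mah / anode_volume_ml

    def lifetime_throughput_ah_per_ml(capacity_mah, anode_volume_ml, cycles):
        """Total charge passed over the anode's life per unit volume (Ah/mL)."""
        return capacity_mah * cycles / 1000.0 / anode_volume_ml

    if __name__ == "__main__":
        vol_ml, cap_mah, n_cycles = 2.0, 76.0, 1500     # hypothetical anode and test length
        print(f"capacity density   : {capacity_density_mah_per_ml(cap_mah, vol_ml):.0f} mAh/mL")
        print(f"lifetime throughput: {lifetime_throughput_ah_per_ml(cap_mah, vol_ml, n_cycles):.0f} Ah/mL")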
Koeppen Bioclimatic Metrics for Evaluating CMIP5 Simulations of Historical Climate
NASA Astrophysics Data System (ADS)
Phillips, T. J.; Bonfils, C.
2012-12-01
The classic Koeppen bioclimatic classification scheme associates generic vegetation types (e.g. grassland, tundra, broadleaf or evergreen forests, etc.) with regional climate zones defined by the observed amplitude and phase of the annual cycles of continental temperature (T) and precipitation (P). Koeppen classification thus can provide concise, multivariate metrics for evaluating climate model performance in simulating the regional magnitudes and seasonalities of climate variables that are of critical importance for living organisms. In this study, 14 Koeppen vegetation types are derived from annual-cycle climatologies of T and P in some 3 dozen CMIP5 simulations of 1980-1999 climate, a period when observational data provides a reliable global validation standard. Metrics for evaluating the ability of the CMIP5 models to simulate the correct locations and areas of the vegetation types, as well as measures of overall model performance, also are developed. It is found that the CMIP5 models are most deficient in simulating 1) the climates of the drier zones (e.g. desert, savanna, grassland, steppe vegetation types) that are located in the Southwestern U.S. and Mexico, Eastern Europe, Southern Africa, and Central Australia, as well as 2) the climate of regions such as Central Asia and Western South America where topography plays a central role. (Detailed analyses of regional biases in the annual cycles of T and P for selected simulations exemplifying general model performance problems will also be presented.) The more encouraging results include evidence for a general improvement in CMIP5 performance relative to that of older CMIP3 models. Within CMIP5 also, the more complex Earth System Models (ESMs) with prognostic biogeochemistry perform comparably to the corresponding global models that simulate only the "physical" climate. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
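A highly simplified Koeppen-style classifier is sketched below, included only to illustrate the idea of mapping annual-cycle T and P into vegetation-relevant climate types; the thresholds are rough approximations and omit most of the real scheme's rules and sub-classes:

    # Crude Koeppen-like classification from monthly temperature (deg C) and
    # precipitation (mm) climatologies. Real Koeppen rules have many more
    # thresholds and sub-classes; this only illustrates the idea.

    def simple_koeppen(monthly_t, monthly_p):
        t_max, t_min = max(monthly_t), min(monthly_t)
        p_ann = sum(monthly_p)
        t_ann = sum(monthly_t) / 12.0
        if t_max < 10.0:
            return "polar (tundra/ice)"
        if p_ann < 10.0 * (2.0 * t_ann + 14.0):      # crude aridity threshold
            return "arid (desert/steppe)"
        if t_min >= 18.0:
            return "tropical (rainforest/savanna)"
        if t_min > -3.0:
            return "temperate (broadleaf/mixed forest)"
        return "continental/boreal (conifer forest)"

    if __name__ == "__main__":
        # Invented annual cycles for a single land grid cell.
        t = [22, 23, 24, 25, 26, 27, 27, 27, 26, 25, 24, 23]
        p = [250, 230, 260, 240, 180, 120, 90, 80, 110, 160, 210, 240]
        print(simple_koeppen(t, p))   # -> tropical (rainforest/savanna)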
Evaluation Metrics for Simulations of Tropical South America
NASA Astrophysics Data System (ADS)
Gallup, S.; Baker, I. T.; Denning, A. S.; Cheeseman, M.; Haynes, K. D.; Phillips, M.
2017-12-01
The evergreen broadleaf forest of the Amazon Basin is the largest rainforest on earth, and has teleconnections to global climate and carbon cycle characteristics. This region defies simple characterization, spanning large gradients in total rainfall and seasonal variability. Broadly, the region can be thought of as trending from light-limited in its wettest areas to water-limited near the ecotone, with individual landscapes possibly exhibiting the characteristics of either (or both) limitations during an annual cycle. A basin-scale classification of mean behavior has been elusive, and ecosystem response to seasonal cycles and anomalous drought events has resulted in some disagreement in the literature, to say the least. However, new observational platforms and instruments make characterization of the heterogeneity and variability more feasible. To evaluate simulations of ecophysiological function, we develop metrics that correlate various observational products with meteorological variables such as precipitation and radiation. Observations include eddy covariance fluxes, Solar Induced Fluorescence (SIF, from GOME2 and OCO2), biomass and vegetation indices. We find that the modest correlation between SIF and precipitation decreases with increasing annual precipitation, although the relationship is not consistent between products. Biomass increases with increasing precipitation. Although vegetation indices are generally correlated with biomass and precipitation, they can saturate or experience retrieval issues during cloudy periods. Using these observational products and relationships, we develop a set of model evaluation metrics. These metrics are designed to call attention to models that get "the right answer only if it's for the right reason," and provide an opportunity for more critical evaluation of model physics. These metrics represent a testbed that can be applied to multiple models as a means to evaluate their performance in tropical South America.
Brown, Nicholas R.; Worrall, Andrew; Todosow, Michael
2016-11-18
Small modular reactors (SMRs) offer potential benefits, such as enhanced operational flexibility. However, it is vital to understand the holistic impact of SMRs on nuclear fuel cycle performance. The focus of this paper is the fuel cycle impacts of light water SMRs in a once-through fuel cycle with low-enriched uranium fuel. A key objective of this paper is to describe preliminary example reactor core physics and fuel cycle analyses conducted in support of the U.S. Department of Energy, Office of Nuclear Energy, Fuel Cycle Options Campaign. The hypothetical light water SMR example case considered in these preliminary scoping studies is a cartridge type one-batch core with slightly less than 5.0% enrichment. Challenges associated with SMRs include increased neutron leakage, fewer assemblies in the core (and therefore fewer degrees of freedom in the core design), complex enrichment and burnable absorber loadings, full power operation with inserted control rods, the potential for frequent load-following operation, and shortened core height. Each of these will impact the achievable discharge burnup in the reactor and the fuel cycle performance. This paper summarizes a list of the factors relevant to SMR fuel, core, and operation that will impact fuel cycle performance. The high-level issues identified and preliminary scoping calculations in this paper are intended to inform on potential fuel cycle impacts of one-batch thermal spectrum SMRs. In particular, this paper highlights the impact of increased neutron leakage and reduced number of batches on the achievable burnup of the reactor. Fuel cycle performance metrics for a hypothetical example SMR are compared with those for a conventional three-batch light water reactor in the following areas: nuclear waste management, environmental impact, and resource utilization. The metrics performance for such an SMR is degraded for the mass of spent nuclear fuel and high-level waste disposed of, mass of depleted uranium disposed of, land use per energy generated, and carbon emissions per energy generated. Finally, it is noted that the features of some SMR designs impact three main aspects of fuel cycle performance: (1) small cores which means high leakage (there is a radial and axial component), (2) no boron which means heterogeneous core and extensive use of control rods and BPs, and (3) single batch cores. But not all of the SMR designs have all of these traits. As a result, the approach used in this study is therefore a bounding case and not all SMRs may be affected to the same extent.
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
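A generic curve-fit sketch of cycle-life prognostics in the same spirit is given below (fit a capacity-fade model, then extrapolate to an end-of-life threshold); it uses a simple exponential fade and least squares rather than the invention's battery model and Particle Filtering framework, and all data are synthetic:

    # Fit C(k) = C0 * exp(-lam * k) to per-cycle capacities and estimate remaining
    # cycles until capacity falls below a fraction of C0. Data are synthetic.
    import numpy as np

    def predict_rul(cycles, capacities, eol_fraction=0.8):
        """Estimate remaining cycles until capacity drops to eol_fraction of initial."""
        slope, intercept = np.polyfit(cycles, np.log(capacities), 1)
        lam, c0 = -slope, np.exp(intercept)
        if lam <= 0:
            return float("inf")  # no measurable fade yet
        k_eol = np.log(1.0 / eol_fraction) / lam   # cycle at which C = eol_fraction * C0
        return max(k_eol - cycles[-1], 0.0)

    if __name__ == "__main__":
        k = np.arange(0, 90, 10)
        true_fade = 2.0 * np.exp(-0.0015 * k)                 # Ah, synthetic "truth"
        noisy = true_fade + np.random.default_rng(0).normal(0, 0.01, k.size)
        print(f"estimated RUL: {predict_rul(k, noisy):.0f} cycles")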
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The ALS project plan goals are reducing cost, improving performance, and achieving flight readiness. ALS selects projects to advance the mission readiness of low cost, high performance technologies. The role of metrics is to help select good projects and report progress. The Equivalent Mass (EM) of a system is the sum of the estimated mass of the hardware, of its required materials and spares, and of the pressurized volume, power supply, and cooling system needed to support the hardware in space. EM is the total payload launch mass needed to provide and support a system. EM is directly proportional to the launch cost.
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
Life Cycle analysis data and results for geothermal and other electricity generation technologies
Sullivan, John
2013-06-04
Life cycle analysis (LCA) is an environmental assessment method that quantifies the environmental performance of a product system over its entire lifetime, from cradle to grave. Based on a set of relevant metrics, the method is aptly suited for comparing the environmental performance of competing product systems. This file contains LCA data and results for electric power production including geothermal power. The LCA for electric power has been broken down into two life cycle stages, namely plant and fuel cycles. Relevant metrics include the energy ratio and greenhouse gas (GHG) ratios, where the former is the ratio of system input energy to total lifetime electrical energy out and the latter is the sum of all incurred greenhouse gases (in CO2 equivalents) divided by the same energy output. Specific information included herein are material to power ratios (MPRs) for a range of power technologies for conventional thermoelectric, renewables (including three geothermal power technologies), and coproduced natural gas/geothermal power. For the geothermal power scenarios, the MPRs include the casing, cement, diesel, and water requirements for drilling wells and topside piping. Also included herein are energy and GHG ratios for plant and fuel cycle stages for the range of considered electricity generating technologies. Some of this information consists of MPR data extracted directly from the literature or from models (e.g., ICARUS, a subset of ASPEN models); the rest (energy and GHG ratios) are results calculated using GREET models and MPR data. MPR data for wells included herein were based on the Argonne well materials model and GETEM well count results.
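A sketch of the two life cycle ratios defined above; the input energy, output energy, and emissions totals are placeholders rather than GREET or Argonne results:

    # Energy ratio and GHG ratio as defined above. Example values are hypothetical.

    def energy_ratio(lifecycle_input_energy_mj: float, lifetime_electricity_mj: float) -> float:
        """System input energy divided by total lifetime electrical energy out."""
        return lifecycle_input_energy_mj / lifetime_electricity_mj

    def ghg_ratio(lifecycle_ghg_g_co2eq: float, lifetime_electricity_mj: float) -> float:
        """Total life cycle GHG emissions (g CO2-eq) per MJ of electricity delivered."""
        return lifecycle_ghg_g_co2eq / lifetime_electricity_mj

    if __name__ == "__main__":
        # Hypothetical plant: plant-cycle plus fuel-cycle inputs over its lifetime.
        e_in, e_out, ghg = 3.0e8, 5.0e9, 6.0e10   # MJ, MJ, g CO2-eq
        print(f"energy ratio: {energy_ratio(e_in, e_out):.3f}")
        print(f"GHG ratio   : {ghg_ratio(ghg, e_out):.1f} g CO2-eq/MJ")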
Iordan, Cristina; Lausselet, Carine; Cherubini, Francesco
2016-12-15
This study assesses the environmental sustainability of electricity production through anaerobic co-digestion of sewage sludge and organic wastes. The analysis relies on primary data from a biogas plant, supplemented with data from the literature. The climate impact assessment includes emissions of near-term climate forcers (NTCFs) like ozone precursors and aerosols, which are frequently overlooked in Life Cycle Assessment (LCA), and the application of a suite of different emission metrics, based on either the Global Warming Potential (GWP) or the Global Temperature change Potential (GTP) with a time horizon (TH) of 20 or 100 years. The environmental performances of the biogas system are benchmarked against a conventional fossil fuel system. We also investigate the sensitivity of the system to critical parameters and provide five different scenarios in a sensitivity analysis. Hotspots are the management of the digestate (mainly due to the open storage) and methane (CH4) losses during the anaerobic co-digestion. Results are sensitive to the type of climate metric used. The impacts range from 52 up to 116 g CO2-eq./MJ electricity when using GTP100 and GWP20, respectively. This difference is mostly due to the varying contribution from CH4 emissions. The influence of NTCFs is about 6% for GWP100 (worst case), and grows up to 31% for GWP20 (best case). The biogas system has a lower performance than the fossil reference system for the acidification and particulate matter formation potentials. We argue for an active consideration of NTCFs in LCA and a critical reflection over the climate metrics to be used, as these aspects can significantly affect the final outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
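A sketch of how the choice of emission metric changes a CH4-heavy inventory's CO2-equivalent score, echoing the GWP20/GWP100/GTP sensitivity reported above; the characterization factors are approximate IPCC AR5-era values and the emission amounts are invented:

    # Compare CO2-equivalent totals under different emission metrics for CH4.
    # Factors are approximate AR5-era values; emission amounts are hypothetical.

    CH4_FACTORS = {"GWP20": 84.0, "GWP100": 28.0, "GTP100": 4.0}  # kg CO2-eq per kg CH4 (approx.)

    def co2_equivalent(co2_kg: float, ch4_kg: float, metric: str) -> float:
        return co2_kg + ch4_kg * CH4_FACTORS[metric]

    if __name__ == "__main__":
        co2, ch4 = 40.0, 0.5   # hypothetical emissions per functional unit
        for m in ("GWP20", "GWP100", "GTP100"):
            print(m, co2_equivalent(co2, ch4, m))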
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework, consisting of a set-theory based typology, a data library, a classification system, and a corresponding input-output model, is put forward to address this research gap as the basis for future work in this field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russ, M; Ionita, C; Bednarek, D
Purpose: In endovascular image-guided neuro-interventions, visualization of fine detail is paramount. For example, the ability of the interventionist to visualize the stent struts depends heavily on the x-ray imaging detector performance. Methods: A study to examine the relative performance of the high resolution MAF-CMOS (pixel size 75µm, Nyquist frequency 6.6 cycles/mm) and a standard Flat Panel Detector (pixel size 194µm, Nyquist frequency 2.5 cycles/mm) detectors in imaging a neuro stent was done using the Generalized Measured Relative Object Detectability (GM-ROD) metric. Low quantum noise images of a deployed stent were obtained by averaging 95 frames obtained by both detectors without changing other exposure or geometric parameters. The square of the Fourier transform of each image is taken and divided by the generalized normalized noise power spectrum to give an effective measured task-specific signal-to-noise ratio. This expression is then integrated from 0 to each detector's Nyquist frequency, and the GM-ROD value is determined by taking a ratio of the integrals for the MAF-CMOS to that of the FPD. The lower bound of integration can be varied to emphasize high frequencies in the detector comparisons. Results: The MAF-CMOS detector exhibits vastly superior performance over the FPD when integrating over all frequencies, yielding a GM-ROD value of 63.1. The lower bound of integration was stepped up in increments of 0.5 cycles/mm for higher frequency comparisons. As the lower bound increased, the GM-ROD value was augmented, reflecting the superior performance of the MAF-CMOS in the high frequency regime. Conclusion: GM-ROD is a versatile metric that can provide quantitative detector and task dependent comparisons that can be used as a basis for detector selection. Supported by NIH Grant: 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
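A toy numerical sketch of the GM-ROD comparison follows: for each detector, integrate |FT(image)|^2 divided by the noise power spectrum from a lower frequency bound up to that detector's Nyquist frequency, then take the ratio of the integrals. The line profiles, pixel sizes, and flat unit NNPS below are synthetic stand-ins for real detector data (the actual metric uses 2D spectra):

    # GM-ROD-style comparison on synthetic 1D detector profiles of a fine pattern.
    import numpy as np

    def rod_integral(profile, pixel_mm, nnps=1.0, f_low=0.0):
        """Integrate |FT|^2 / NNPS from f_low up to the detector Nyquist frequency."""
        n = profile.size
        freqs = np.fft.rfftfreq(n, d=pixel_mm)              # cycles/mm
        power = np.abs(np.fft.rfft(profile)) ** 2 / (n * nnps)
        nyquist = 0.5 / pixel_mm
        mask = (freqs >= f_low) & (freqs <= nyquist)
        df = freqs[1] - freqs[0]
        return float(np.sum(power[mask]) * df)

    if __name__ == "__main__":
        x = np.linspace(0, 30, 3000)                         # 30 mm line profile
        stent_like = (np.sin(2 * np.pi * 3.0 * x) > 0.95).astype(float)  # fine strut-like pattern
        maf = stent_like.reshape(-1, 8).mean(axis=1)         # ~0.08 mm pixels (MAF-CMOS-like)
        fpd = stent_like.reshape(-1, 20).mean(axis=1)        # ~0.20 mm pixels (FPD-like)
        gm_rod = rod_integral(maf, 0.08, f_low=0.5) / rod_integral(fpd, 0.20, f_low=0.5)
        print(f"GM-ROD (MAF/FPD): {gm_rod:.1f}")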
Area of Concern: a new paradigm in life cycle assessment for ...
Purpose: As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplace. In response, a task force operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related terminology as well as to discuss modelling implications. Methods: The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA impact category indicators. Footprints have a primary orientation toward society and nontechnical stakeholders. They are also typically of narrow scope, having the purpose of reporting only in relation to specific topics. In comparison, LCA has a primary orientation toward stakeholders interested in comprehensive evaluation of overall environmental performance and trade-offs among impact categories. These differences create tension between footprints, the existing LCIA framework based on the area of protection paradigm and the core LCA standards ISO 14040/44. Res
C3 generic workstation: Performance metrics and applications
NASA Technical Reports Server (NTRS)
Eddy, Douglas R.
1988-01-01
The large number of integrated dependent measures available on a command, control, and communications (C3) generic workstation under development are described. In this system, embedded communications tasks will manipulate workload to assess the effects of performance-enhancing drugs (sleep aids and decongestants), work/rest cycles, biocybernetics, and decision support systems on performance. Task performance accuracy and latency will be event coded for correlation with other measures of voice stress and physiological functioning. Sessions will be videotaped to score non-verbal communications. Physiological recordings include spectral analysis of EEG, ECG, vagal tone, and EOG. Subjective measurements include SWAT, fatigue, POMS and specialized self-report scales. The system will be used primarily to evaluate the effects on performance of drugs, work/rest cycles, and biocybernetic concepts. Performance assessment algorithms will also be developed, including those used with small teams. This system provides a tool for integrating and synchronizing behavioral and psychophysiological measures in a complex decision-making environment.
Evaluating CMIP5 Simulations of Historical Continental Climate with Koeppen Bioclimatic Metrics
NASA Astrophysics Data System (ADS)
Phillips, T. J.; Bonfils, C.
2013-12-01
The classic Koeppen bioclimatic classification scheme associates generic vegetation types (e.g. grassland, tundra, broadleaf or evergreen forests, etc.) with regional climate zones defined by their annual cycles of continental temperature (T) and precipitation (P), considered together. The locations or areas of Koeppen vegetation types derived from observational data thus can provide concise metrical standards for simultaneously evaluating climate simulations of T and P in naturally defined regions. The CMIP5 models' collective ability to correctly represent two variables that are critically important for living organisms at regional scales is therefore central to this evaluation. For this study, 14 Koeppen vegetation types are derived from annual-cycle climatologies of T and P in some 3 dozen CMIP5 simulations of the 1980-1999 period. Metrics for evaluating the ability of the CMIP5 models to simulate the correct locations and areas of each vegetation type, as well as measures of overall model performance, also are developed. It is found that the CMIP5 models are generally most deficient in simulating: 1) climates of drier Koeppen zones (e.g. desert, savanna, grassland, steppe vegetation types) located in the southwestern U.S. and Mexico, eastern Europe, southern Africa, and central Australia; 2) climates of regions such as central Asia and western South America where topography plays a key role. Details of regional T or P biases in selected simulations that exemplify general model performance problems also will be presented. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. [Figure: Map of Koeppen vegetation types derived from observed T and P.]
Kandemir, Utku; Herfat, Safa; Herzog, Mary; Viscogliosi, Paul; Pekmezci, Murat
2017-02-01
The goal of this study is to compare the fatigue strength of a locking intramedullary nail (LN) construct with a double locking plate (DLP) construct in comminuted proximal extra-articular tibia fractures. Eight pairs of fresh frozen cadaveric tibias with low bone mineral density [age: 80 ± 7 (SD) years, T-score: -2.3 ± 1.2] were used. One tibia from each pair was fixed with LN, whereas the contralateral side was fixed with DLP for complex extra-articular multifragmentary metaphyseal fractures (simulating OTA 41-A3.3). Specimens were cyclically loaded under compression simulating single-leg stance by the staircase method out to 260,000 cycles. Every 2500 cycles, localized gap displacements were measured with a 3D motion tracking system, and x-ray images of the proximal tibia were acquired. To allow for mechanical settling, initial metrics were calculated at 2500 cycles. The 2 groups were compared regarding initial construct stiffness, initial medial and lateral gap displacements, stiffness at 30,000 cycles, medial and lateral gap displacements at 30,000 cycles, failure load, number of cycles to failure, and failure mode. Failure metrics were reported for initial and catastrophic failures. DLP constructs exhibited higher initial stiffness and stiffness at 30,000 cycles compared with LN constructs (P < 0.03). There were no significant differences between groups for loads at failure or cycles to failure. For the fixation of extra-articular proximal tibia fractures, an LN provides fatigue performance similar to that of double locked plates. The locked nail could be safely used for fixation of proximal tibia fractures with the advantage of limited extramedullary soft tissue damage.
NASA Technical Reports Server (NTRS)
Martinez, Jacqueline; Cowings, Patricia S.; Toscano, William B.
2012-01-01
In space, astronauts may experience effects of cumulative sleep loss due to demanding work schedules that can result in cognitive performance impairments, mood state deteriorations, and sleep-wake cycle disruption. Individuals who experience sleep deprivation of six hours beyond normal sleep times experience detrimental changes in their mood and performance states. Hence, the potential for life threatening errors increases exponentially with sleep deprivation. We explored the effects of 36-hours of sleep deprivation on cognitive performance, mood states, and physiological responses to identify which metrics may best predict fatigue induced performance decrements of individuals.
Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models
Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles; ...
2016-06-08
In this paper, metrics are proposed—that is, a few summary statistics that condense large amounts of data from observations or model simulations—encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.
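A sketch of the harmonic-dial calculation described above: extract the amplitude and phase of the 24-hour harmonic at each point and average the complex harmonics (vector averaging) over an area; the hourly precipitation series and area weights are synthetic:

    # First-harmonic (24 h) amplitude/phase per point, then complex (vector) averaging.
    import numpy as np

    def diurnal_harmonic(hourly):
        """Return (amplitude, hour of maximum, complex harmonic) for a 24-value mean day."""
        c1 = np.fft.rfft(hourly)[1] / hourly.size * 2.0
        amp = np.abs(c1)
        phase_hr = (-np.angle(c1)) % (2 * np.pi) / (2 * np.pi) * 24.0
        return amp, phase_hr, c1

    if __name__ == "__main__":
        hours = np.arange(24)
        rng = np.random.default_rng(1)
        # Synthetic set of 100 land points, each peaking in late afternoon (~17 local time).
        complex_sum, weight_sum = 0.0, 0.0
        for _ in range(100):
            peak = 17 + rng.normal(0, 1.5)
            series = 3.0 + 2.0 * np.cos(2 * np.pi * (hours - peak) / 24.0) + rng.normal(0, 0.2, 24)
            _, _, c1 = diurnal_harmonic(series)
            w = 1.0                      # in practice: grid-cell area weight
            complex_sum += w * c1
            weight_sum += w
        mean_c1 = complex_sum / weight_sum
        amp = np.abs(mean_c1)
        phase_hr = (-np.angle(mean_c1)) % (2 * np.pi) / (2 * np.pi) * 24.0
        print(f"area-mean diurnal amplitude ~ {amp:.2f}, phase of max ~ {phase_hr:.1f} local hour")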
Using TRACI for Sustainability Metrics
TRACI, the Tool for the Reduction and Assessment of Chemical and other environmental Impacts, has been developed for sustainability metrics, life cycle impact assessment, and product and process design impact assessment for developing increasingly sustainable products, processes,...
Mian, Adnan Noor; Fatima, Mehwish; Khan, Raees; Prakash, Ravi
2014-01-01
Energy efficiency is an important design paradigm in Wireless Sensor Networks (WSNs), and energy consumption in dynamic environments is even more critical. Duty cycling of sensor nodes is used to address the energy consumption problem. However, along with advantages, duty cycle aware networks introduce some complexities, such as synchronization and latency. Due to their inherent characteristics, many traditional routing protocols show low performance in densely deployed WSNs with duty cycle awareness when sensor nodes have high mobility. In this paper we first present a three-message-exchange Lightweight Random Walk Routing (LRWR) protocol and then evaluate its performance in WSNs for routing low data rate packets. Through NS-2 based simulations, we examine the LRWR protocol by comparing it with DYMO, a widely used WSN protocol, in both static and dynamic environments with varying duty cycles, assuming the standard IEEE 802.15.4 in the lower layers. Results for the three metrics, that is, reliability, end-to-end delay, and energy consumption, show that the LRWR protocol outperforms DYMO in scalability, mobility, and robustness, making it a suitable choice for low duty cycle and dense WSNs.
Developing the User Experience for a Next Generation Nuclear Fuel Cycle Simulator (NGFCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Paul H.; Schneider, Erich; Pascucci, Valerio
This project made substantial progress on its original aim of providing a modern user experience for nuclear fuel cycle analysis while also creating a robust and functional next-generation fuel cycle simulator. The Cyclus kernel experienced a dramatic clarification of its interfaces and data model, becoming a full-fledged agent-based framework, with strong support for third-party developers of novel archetypes. The most important contribution of this project to the development of Cyclus was the introduction of tools to facilitate archetype development. These include automated code generation of routine archetype components, metadata annotations to provide reflection and rich description of each data member's purpose, and mechanisms for input validation and output of complex data. A comprehensive social science investigation of decision makers' interests in nuclear fuel cycles, and specifically their interests in nuclear fuel cycle simulators (NFCSs) as tools for understanding nuclear fuel cycle options, was conducted. This included document review and analysis, stakeholder interviews, and a survey of decision makers. This information was used to study the role of visualization formats and features in communicating information about nuclear fuel cycles. A flexible and user-friendly tool was developed for building Cyclus analysis models, featuring a drag-and-drop interface and automatic input form generation for novel archetypes. Cycic allows users to design fuel cycles from arbitrary collections of facilities for the first time, with mechanisms that contribute to consistency within that fuel cycle. Interacting with some of the metadata capabilities introduced in the above-mentioned tools to support archetype development, Cycic also automates the generation of user input forms for novel archetypes with little to no special knowledge required by the archetype developers. Translation of the fundamental metrics of Cyclus into more interesting quantities is accomplished in the Cymetric python package. This package is specifically designed to support the introduction of new metrics by building upon existing metrics. This concept allows for multiple dependencies and encourages building complex metrics out of incremental transformations to those prior metrics. New archetype developers can contribute their own archetype-specific metric using the same capability. A simple demonstration of this capability focused on generating time-dependent cash flows for reactor deployment that could then be analyzed in different ways. Cyclist, a dedicated application for exploration of Cyclus results, was developed. Its primary capabilities at this stage are best-suited to experienced fuel cycle analysts, but it provides a basic platform for simpler visualizations for other audiences. An important part of its interface is the ability to fluidly examine different slices of what is fundamentally a five-dimensional sparse data set. A drag-and-drop interface simplifies the process of selecting which data is displayed in the plot as well as which dimensions are being used for
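The abstract's description of Cymetric, building complex metrics out of incremental transformations of prior metrics, can be sketched generically. The snippet below is not the Cymetric API; it is a hypothetical pandas-based illustration of the dependency idea, with made-up table names ("Deployments"), metric names, and cost figures.

```python
import pandas as pd

# Hypothetical registry of metric functions; each metric declares the metrics it depends on.
METRICS = {}

def metric(name, depends=()):
    """Register a metric function so new metrics can be built from prior ones."""
    def wrap(fn):
        METRICS[name] = (fn, depends)
        return fn
    return wrap

def evaluate(name, base_tables):
    """Resolve a metric by recursively evaluating its dependencies."""
    fn, depends = METRICS[name]
    inputs = [base_tables[d] if d in base_tables else evaluate(d, base_tables) for d in depends]
    return fn(*inputs)

@metric("ReactorCost", depends=("Deployments",))
def reactor_cost(deployments):
    # Hypothetical fixed overnight cost per reactor deployed in each time step.
    out = deployments.copy()
    out["Cost"] = out["NewReactors"] * 4.0e9   # assumed $/reactor, illustrative only
    return out[["Time", "Cost"]]

@metric("CumulativeCashFlow", depends=("ReactorCost",))
def cumulative_cash_flow(cost):
    # An incremental transformation of a prior metric, as described in the abstract.
    out = cost.copy()
    out["Cumulative"] = out["Cost"].cumsum()
    return out

deployments = pd.DataFrame({"Time": [0, 1, 2, 3], "NewReactors": [1, 0, 2, 1]})
print(evaluate("CumulativeCashFlow", {"Deployments": deployments}))
```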
A consistent conceptual framework for applying climate metrics in technology life cycle assessment
NASA Astrophysics Data System (ADS)
Mallapragada, Dharik; Mignone, Bryan K.
2017-07-01
Comparing the potential climate impacts of different technologies is challenging for several reasons, including the fact that any given technology may be associated with emissions of multiple greenhouse gases when evaluated on a life cycle basis. In general, analysts must decide how to aggregate the climatic effects of different technologies, taking into account differences in the properties of the gases (differences in atmospheric lifetimes and instantaneous radiative efficiencies) as well as different technology characteristics (differences in emission factors and technology lifetimes). Available metrics proposed in the literature have incorporated these features in different ways and have arrived at different conclusions. In this paper, we develop a general framework for classifying metrics based on whether they measure: (a) cumulative or end point impacts, (b) impacts over a fixed time horizon or up to a fixed end year, and (c) impacts from a single emissions pulse or from a stream of pulses over multiple years. We then use the comparison between compressed natural gas and gasoline-fueled vehicles to illustrate how the choice of metric can affect conclusions about technologies. Finally, we consider tradeoffs involved in selecting a metric, show how the choice of metric depends on the framework that is assumed for climate change mitigation, and suggest which subset of metrics are likely to be most analytically self-consistent.
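The classification in (a)-(c) can be made concrete with a toy calculation. The sketch below, which uses a single-exponential decay rather than any published impulse-response function, contrasts a cumulative metric (integrated forcing over a fixed horizon) with an end-point metric (forcing remaining at the horizon), and a single emissions pulse with a stream of pulses; the radiative efficiencies and lifetimes are illustrative placeholders, not IPCC values.

```python
import numpy as np

def forcing_from_pulse(t, radiative_efficiency, lifetime):
    """Radiative forcing t years after a 1 kg pulse, single-exponential decay (illustrative)."""
    return radiative_efficiency * np.exp(-t / lifetime)

def cumulative_metric(horizon, radiative_efficiency, lifetime, dt=0.1):
    """Cumulative impact: forcing integrated over a fixed time horizon after one pulse."""
    t = np.arange(0.0, horizon, dt)
    return np.trapz(forcing_from_pulse(t, radiative_efficiency, lifetime), t)

def endpoint_metric(horizon, radiative_efficiency, lifetime):
    """End-point impact: forcing remaining at the horizon after one pulse."""
    return forcing_from_pulse(horizon, radiative_efficiency, lifetime)

def stream_cumulative(horizon, emissions_per_year, radiative_efficiency, lifetime, dt=0.1):
    """Cumulative impact of a stream of equal pulses emitted each year up to a fixed end year."""
    return sum(emissions_per_year * cumulative_metric(horizon - y, radiative_efficiency, lifetime, dt)
               for y in range(int(horizon)))

# Illustrative (not IPCC) parameters: a short-lived gas versus a long-lived gas
short_lived = dict(radiative_efficiency=1.0, lifetime=12.0)
long_lived = dict(radiative_efficiency=0.03, lifetime=120.0)
for name, gas in [("short-lived", short_lived), ("long-lived", long_lived)]:
    print(name,
          "cumulative(100y)=", round(cumulative_metric(100, **gas), 2),
          "endpoint(100y)=", round(endpoint_metric(100, **gas), 4))
print("stream of 1 kg/yr, short-lived, cumulative(100y)=",
      round(stream_cumulative(100, 1.0, **short_lived), 1))
```

Swapping which of these quantities is reported, and whether the horizon is fixed in length or fixed in calendar year, is exactly the kind of choice the paper's framework classifies.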
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russ, M; Nagesh, S Setlur; Ionita, C
2015-06-15
Purpose: To evaluate the task specific imaging performance of a new 25µm pixel pitch, 1000µm thick amorphous selenium direct detection system with CMOS readout for typical angiographic exposure parameters using the relative object detectability (ROD) metric. Methods: The ROD metric uses a simulated object function weighted at each spatial frequency by the detectors’ detective quantum efficiency (DQE), which is an intrinsic performance metric. For this study, the simulated objects were aluminum spheres of varying diameter (0.05–0.6mm). The weighted object function is then integrated over the full range of detectable frequencies inherent to each detector, and a ratio is taken of the resulting values for two detectors. The DQE for the 25µm detector was obtained from a simulation of the proposed a-Se detector using an exposure of 200µR for a 50keV x-ray beam. This a-Se detector was compared to two microangiographic fluoroscope (MAF) detectors [the MAF-CCD with pixel size of 35µm and Nyquist frequency of 14.2 cycles/mm and the MAF-CMOS with pixel size of 75µm and Nyquist frequency of 6.6 cycles/mm] and a standard flat-panel detector (FPD with pixel size of 194µm and Nyquist frequency of 2.5 cycles/mm). Results: ROD calculations indicated vastly superior performance by the a-Se detector in imaging small aluminum spheres. For the 50µm diameter sphere, the ROD values for the a-Se detector compared to the MAF-CCD, the MAF-CMOS, and the FPD were 7.3, 9.3 and 58, respectively. Detector performance in the low frequency regime was dictated by each detector’s DQE(0) value. Conclusion: The a-Se detector with CMOS readout is unique and appears to have the distinctive advantages of incomparably high resolution, low noise, no readout lag, and an expandable design. The a-Se direct detection system will be a powerful imaging tool in angiography, with potential break-through applications in diagnosis and treatment of neuro-vascular disease. Supported by NIH Grant: 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
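The ROD computation described in the Methods section reduces to a frequency-domain integral and a ratio. The sketch below illustrates that structure with synthetic DQE curves and a placeholder sphere spectrum; the functional forms, Nyquist limits, and numerical values are assumptions for illustration, not the simulated or measured data from this work.

```python
import numpy as np

def rod(object_spectrum, dqe_a, dqe_b, freqs_a, freqs_b):
    """Relative object detectability of detector A versus detector B."""
    integral_a = np.trapz(object_spectrum(freqs_a) * dqe_a(freqs_a), freqs_a)
    integral_b = np.trapz(object_spectrum(freqs_b) * dqe_b(freqs_b), freqs_b)
    return integral_a / integral_b

# Placeholder object function for a small sphere: spectrum falls off with spatial frequency.
sphere_diameter_mm = 0.05
object_spectrum = lambda f: np.sinc(f * sphere_diameter_mm) ** 2

# Placeholder DQE models; real DQE curves would come from measurement or detector simulation.
dqe_hi_res = lambda f: 0.6 * np.exp(-f / 30.0)    # e.g., a 25 um pitch detector, Nyquist 20 cycles/mm
dqe_fpd = lambda f: 0.6 * np.exp(-f / 3.0)        # e.g., a 194 um flat panel, Nyquist 2.5 cycles/mm

freqs_hi = np.linspace(0.01, 20.0, 2000)          # integrate each detector up to its own Nyquist
freqs_fpd = np.linspace(0.01, 2.5, 2000)
print("ROD (hi-res vs FPD):", round(rod(object_spectrum, dqe_hi_res, dqe_fpd, freqs_hi, freqs_fpd), 1))
```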
Modeling and Simulations for the High Flux Isotope Reactor Cycle 400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Chandler, David; Ade, Brian J
2015-03-01
A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model based on current state-of-the-art methods and nuclear data was needed to serve as a reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR that are discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary, based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.
Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; Liu, Jianhong; Mou, Minjie
2013-01-01
Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this study, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. This methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data. PMID:24386441
Estimating walking and bicycling at the state level.
DOT National Transportation Integrated Search
2017-03-01
Estimates of vehicle miles traveled (VMT) drive policy and planning decisions for surface transportation. No similar metric is computed for cycling and walking. What approaches could be used to compute such a metric on the state level? This repor...
ExaSAT: An exascale co-design tool for performance modeling
Unat, Didem; Chan, Cy; Zhang, Weiqun; ...
2015-02-09
One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework’s ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
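A minimal example of the kind of parameterized analytic model ExaSAT is described as producing, runtime bounded by either compute or data movement, is sketched below. The kernel parameters (flops and bytes per cell) and the two machine design points are hypothetical, and the model is deliberately simpler than what a compiler-based tool would extract.

```python
def predicted_time(cells, flops_per_cell, bytes_per_cell, peak_flops, mem_bandwidth):
    """Simple analytic performance model: runtime bounded by compute or by data movement."""
    compute_time = cells * flops_per_cell / peak_flops
    movement_time = cells * bytes_per_cell / mem_bandwidth
    return max(compute_time, movement_time), compute_time, movement_time

# Hypothetical kernel parameters, as a compiler-based tool might extract from a stencil loop
cells = 256 ** 3
flops_per_cell = 150.0
bytes_per_cell = 9 * 8.0          # e.g., nine 8-byte loads per cell under a no-cache assumption

# Two hypothetical machine design points for a co-design trade-off study
for name, peak, bw in [("design A", 1.0e13, 2.0e11), ("design B", 5.0e12, 8.0e11)]:
    total, tc, tm = predicted_time(cells, flops_per_cell, bytes_per_cell, peak, bw)
    bound = "memory-bound" if tm > tc else "compute-bound"
    print(f"{name}: {total * 1e3:.2f} ms ({bound})")
```

Evaluating such a closed-form model across many candidate design points is what makes analytic approaches fast enough for co-design feedback, with cycle-accurate simulation reserved for the short list the model identifies.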
An Integrated Approach to Life Cycle Analysis
NASA Technical Reports Server (NTRS)
Chytka, T. M.; Brown, R. W.; Shih, A. T.; Reeves, J. D.; Dempsey, J. A.
2006-01-01
Life Cycle Analysis (LCA) is the evaluation of the impacts that design decisions have on a system and provides a framework for identifying and evaluating design benefits and burdens associated with the life cycles of space transportation systems from a "cradle-to-grave" approach. Sometimes called life cycle assessment, life cycle approach, or "cradle to grave analysis", it represents a rapidly emerging family of tools and techniques designed to be a decision support methodology and aid in the development of sustainable systems. The implementation of a Life Cycle Analysis can vary and may take many forms, from global system-level uncertainty-centered analysis to the assessment of individualized discriminatory metrics. This paper will focus on a proven LCA methodology developed by the Systems Analysis and Concepts Directorate (SACD) at NASA Langley Research Center to quantify and assess key LCA discriminatory metrics, in particular affordability, reliability, maintainability, and operability. This paper will address issues inherent in Life Cycle Analysis including direct impacts, such as system development cost and crew safety, as well as indirect impacts, which often take the form of coupled metrics (i.e., the cost of system unreliability). Since LCA deals with the analysis of space vehicle system conceptual designs, it is imperative to stress that the goal of LCA is not to arrive at the answer but, rather, to provide important inputs to a broader strategic planning process, allowing the managers to make risk-informed decisions, and increase the likelihood of meeting mission success criteria.
Pielke, Roger A; Marland, Gregg; Betts, Richard A; Chase, Thomas N; Eastman, Joseph L; Niles, John O; Niyogi, Dev Dutta S; Running, Steven W
2002-08-15
Our paper documents that land-use change impacts regional and global climate through the surface-energy budget, as well as through the carbon cycle. The surface-energy budget effects may be more important than the carbon-cycle effects. However, land-use impacts on climate cannot be adequately quantified with the usual metric of 'global warming potential'. A new metric is needed to quantify the human disturbance of the Earth's surface-energy budget. This 'regional climate change potential' could offer a new metric for developing a more inclusive climate protocol. This concept would also implicitly provide a mechanism to monitor potential local-scale environmental changes that could influence biodiversity.
SUSTAINABILITY METRICS AND LCIA RESEARCH WITHIN ORD AND AROUND THE WORLD
Sustainability metrics have received much attention, but not much consensus in approach. The United Nations Environment Programme (UNEP)/Society of Environmental Toxicology and Chemistry (SETAC) Life Cycle Initiative is designed to provide recommendations about the direction of ...
Development of a frequency regulation duty-cycle for standardized energy storage performance testing
Rosewater, David; Ferreira, Summer
2016-05-25
The US DOE Protocol for uniformly measuring and expressing the performance of energy storage systems, first developed in 2012 through inclusive working group activities, provides standardized methodologies for evaluating an energy storage system’s ability to supply specific services to electrical grids. This article elaborates on the data and decisions behind the duty-cycle used for frequency regulation in this protocol. A year of publicly available frequency regulation control signal data from a utility was analyzed in developing the representative signal for this use case. This analysis also showed that signal standard deviation can be used as a metric for aggressiveness or rigor. From these data, we select representative 2 h long signals that exhibit nearly all of the dynamics of actual usage under two distinct regimens, one for average use and the other for highly aggressive use. Our results were combined into a 24-h duty-cycle comprised of average and aggressive segments. The benefits and drawbacks of the selected duty-cycle are discussed along with its potential implications for the energy storage industry.
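The abstract's use of signal standard deviation as a measure of aggressiveness, and the selection of representative 2 h segments, can be sketched as follows. The regulation signal here is synthetic (a clipped random walk sampled every 4 s), so the window indices and statistics are illustrative only and do not reproduce the protocol's actual duty-cycle.

```python
import numpy as np

def window_std(signal, samples_per_window):
    """Standard deviation of each non-overlapping window of a regulation signal."""
    n = len(signal) // samples_per_window
    windows = signal[:n * samples_per_window].reshape(n, samples_per_window)
    return windows.std(axis=1)

# Synthetic normalized regulation signal (-1..1), one sample every 4 s for one week
rng = np.random.default_rng(0)
dt_s = 4
signal = np.clip(np.cumsum(rng.normal(0, 0.05, size=7 * 24 * 3600 // dt_s)) * 0.02, -1, 1)

samples_per_2h = 2 * 3600 // dt_s
stds = window_std(signal, samples_per_2h)

# "Average use" segment: window whose std is closest to the median; "aggressive" segment: largest std
avg_idx = int(np.argmin(np.abs(stds - np.median(stds))))
agg_idx = int(np.argmax(stds))
print(f"average-use window #{avg_idx}: std={stds[avg_idx]:.3f}")
print(f"aggressive window   #{agg_idx}: std={stds[agg_idx]:.3f}")
```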
Environmental performance of green building code and certification systems.
Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua
2014-01-01
We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database of NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life-cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models measured in life-cycle impact reduction were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to the behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).
Evaluation of motion artifact metrics for coronary CT angiography.
Ma, Hongfeng; Gros, Eric; Szabo, Aniko; Baginski, Scott G; Laste, Zachary R; Kulkarni, Naveen M; Okerlund, Darin; Schmidt, Taly G
2018-02-01
This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. The Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifacts metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), 0.77(LIRS), where higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (Transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. The metrics of FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion in Coronary Computed Tomography Angiography (CCTA) images. © 2017 American Association of Physicists in Medicine.
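The agreement analysis described here, ranking vessels by an automated motion-artifact metric and by reader scores, is a standard Kendall's tau calculation, with linear regression used as a secondary check. A minimal sketch using scipy is shown below; the per-vessel metric values and reader scores are hypothetical illustration data with no relation to the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical per-vessel values: one automated motion-artifact metric and ground-truth reader scores
metric_values = np.array([0.12, 0.45, 0.33, 0.80, 0.21, 0.55, 0.70, 0.15])
reader_scores = np.array([1.0, 3.0, 2.5, 5.0, 1.5, 3.5, 4.5, 1.0])   # higher = more artifact

# Rank agreement between the metric and the readers
tau, p_value = stats.kendalltau(metric_values, reader_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")

# Linear regression between reader scores and the metric, as in the study design
slope, intercept, r, p, stderr = stats.linregress(metric_values, reader_scores)
print(f"linear fit: score = {slope:.2f} * metric + {intercept:.2f} (r = {r:.2f})")
```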
Translation from UML to Markov Model: A Performance Modeling Framework
NASA Astrophysics Data System (ADS)
Khan, Razib Hayat; Heegaard, Poul E.
Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phase of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication systems that proposes a translation process from high level UML notation to a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams, and deployment diagrams to generate a performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks, in the form of collaborations, compose the service components through input and output pins by highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated to associate performance-related quality of service (QoS) information, which is necessary for solving the performance model for relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is demonstrated in the context of modeling a communication system.
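Once the UML models have been translated, solving the CTMC for steady-state probabilities is a small linear-algebra step. The sketch below solves a toy three-state generator matrix and derives a throughput-style metric; the states, rates, and the throughput definition are assumptions for illustration, not the framework's actual output.

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a CTMC with generator Q (solves pi Q = 0, sum(pi) = 1)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # append the normalization constraint
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state model of a service component: Idle -> Busy -> Recover -> Idle (rates per second, illustrative)
Q = np.array([
    [-2.0,  2.0,   0.0],   # Idle:    requests arrive at rate 2
    [ 0.0, -5.0,   5.0],   # Busy:    service completes at rate 5
    [10.0,  0.0, -10.0],   # Recover: returns to Idle at rate 10
])

pi = steady_state(Q)
throughput = pi[1] * 5.0   # completions per second = P(Busy) * service rate
print("steady-state:", np.round(pi, 3), "throughput:", round(throughput, 2), "jobs/s")
```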
Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as, life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If Software Metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA Metrics that have been used in other projects but are not currently being used by the SA team, and report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.
Peters, Glen P; Aamaas, Borgar; T Lund, Marianne; Solli, Christian; Fuglestvedt, Jan S
2011-10-15
The Life Cycle Assessment (LCA) impact category "global warming" compares emissions of long-lived greenhouse gases (LLGHGs) using Global Warming Potential (GWP) with a 100-year time-horizon as specified in the Kyoto Protocol. Two weaknesses of this approach are (1) the exclusion of short-lived climate forcers (SLCFs) and biophysical factors despite their established importance, and (2) the use of a particular emission metric (GWP) with a choice of specific time-horizons (20, 100, and 500 years). The GWP and the three time-horizons were based on an illustrative example with value judgments and vague interpretations. Here we illustrate, using LCA data of the transportation sector, the importance of SLCFs relative to LLGHGs, different emission metrics, and different treatments of time. We find that both the inclusion of SLCFs and the choice of emission metric can alter results and thereby change mitigation priorities. The explicit inclusion of time, both for emissions and impacts, can remove value-laden assumptions and provide additional information for impact assessments. We believe that our results show that a debate is needed in the LCA community on the impact category "global warming" covering which emissions to include, the emission metric(s) to use, and the treatment of time.
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours of 10 am-3 pm and prevent blackout hours for some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during night time and switches to work as a hybrid solar combined cycle during day time. The first objective of the thesis is to develop a thermo-economic mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economic mathematical model using available software such as EES. The developed simulation code is used to analyze the thermo-economic performance of different configurations of integrating the CSP with the conventional fossil fuel combined cycle to achieve the optimal integration configuration. This optimal integration configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economic performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which needs 29 hectares of solar field and gives an LEC for the HYCS of 63.17 $/MWh under Dhahran weather conditions.
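The LEC (levelized electricity cost) figure quoted at the end is the kind of thermo-economic metric such a model produces. A minimal sketch of an LEC calculation is shown below; the capital, O&M, fuel, generation, and discount-rate inputs are placeholders chosen only to give a number of plausible magnitude, not data from the thesis.

```python
def levelized_electricity_cost(capital_cost, annual_om, annual_fuel, annual_mwh,
                               discount_rate, lifetime_years):
    """LEC in $/MWh: discounted lifetime costs divided by discounted lifetime generation."""
    discount_factors = [(1 + discount_rate) ** -y for y in range(1, lifetime_years + 1)]
    discounted_costs = capital_cost + sum((annual_om + annual_fuel) * d for d in discount_factors)
    discounted_energy = sum(annual_mwh * d for d in discount_factors)
    return discounted_costs / discounted_energy

# Placeholder inputs for a hybrid solar combined cycle plant (illustrative only)
lec = levelized_electricity_cost(
    capital_cost=450e6,      # $ overnight cost, including the solar field
    annual_om=12e6,          # $/year operation and maintenance
    annual_fuel=55e6,        # $/year natural gas (reduced by the solar share)
    annual_mwh=1.8e6,        # MWh/year net generation
    discount_rate=0.08,
    lifetime_years=25,
)
print(f"LEC = {lec:.1f} $/MWh")
```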
Xue, Xiaobo; Schoen, Mary E; Ma, Xin Cissy; Hawkins, Troy R; Ashbolt, Nicholas J; Cashdollar, Jennifer; Garland, Jay
2015-06-15
Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been applied to elements of these water services (i.e. water resources, drinking water, stormwater or wastewater treatment alone), we argue for the importance of developing and combining system-based tools and metrics in order to holistically evaluate the complete water service system based on the concept of integrated resource management. We analyzed the strengths and weaknesses of key system-based tools and metrics, and discuss future directions to identify more sustainable municipal water services. Such efforts may include the need for novel metrics that address system adaptability to future changes and infrastructure robustness. Caution is also necessary when coupling fundamentally different tools so as to avoid misunderstanding and, consequently, misleading decision-making. Published by Elsevier Ltd.
A review of training research and virtual reality simulators for the da Vinci surgical system.
Liu, May; Curet, Myriam
2015-01-01
PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none have comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.
Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability
Life cycle approaches are critical for identifying and managing to reduce burdens in the sustainability of product systems. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple im...
Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu
2013-12-17
Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for the evaluation of resource utilization and environmental impacts of the process industries from an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach.
Integrated sustainability metrics provide an enriched set of information to inform decision-making. However, such approaches are rarely used to assess product supply chains. In this work, four integrated metrics—presented in terms of land, resources, value added, and stability—ar...
Development of an agility assessment module for preliminary fighter design
NASA Technical Reports Server (NTRS)
Ngan, Angelen; Bauer, Brent; Biezad, Daniel; Hahn, Andrew
1996-01-01
A FORTRAN computer program is presented to perform agility analysis on fighter aircraft configurations. This code is one of the modules of the NASA Ames ACSYNT (AirCraft SYNThesis) design code. The background of agility research in the aircraft industry and a survey of a few agility metrics are discussed. The methodology, techniques, and models developed for the code are presented. FORTRAN programs were developed for two specific metrics, CCT (Combat Cycle Time) and PM (Pointing Margin), as part of the agility module. The validity of the code was evaluated by comparison with existing flight test data. Example trade studies using the agility module along with ACSYNT were conducted using Northrop F-20 Tigershark and McDonnell Douglas F/A-18 Hornet aircraft models. The sensitivities of the agility criteria to thrust loading and wing loading were investigated. The module can compare the agility potential between different configurations and has the capability to optimize agility performance in the preliminary design process. This research provides a new and useful design tool for analyzing fighter performance during air combat engagements.
2006-07-01
parameters such as motion (e.g., Meitzler, Kistner et al., 1998), multiple observers (Rotman, 1989), scene obscurants (Rotman, Gordan, & Kowalczyk...1989), clutter (Tidhar et al., 1994), and multiple targets (Rotman, Gordan, & Kowalczyk, 1989) and selective visual attention. As such, it is...resolvable cycles, N, of a bar pattern (i.e., a square wave) on a target (Johnson, 1958), or complexity (e.g., Tidhar et al., 1994). Such metrics
Beshr, Mohamed; Aute, Vikrant; Abdelaziz, Omar; ...
2016-08-24
Refrigeration and air conditioning systems have high, negative environmental impacts due to refrigerant charge leaks from the system and their corresponding high global warming potential. Thus, many efforts are in progress to obtain suitable low GWP alternative refrigerants and more environmentally friendly systems for the future. In addition, the system’s life cycle climate performance (LCCP) is a widespread metric proposed for the evaluation of the system’s environmental impact.
Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A
2015-11-01
Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus for operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the needed research to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts, and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.
Online kinematic regulation by visual feedback for grasp versus transport during reach-to-pinch
Nataraj, Raviraj; Pasluosta, Cristian; Li, Zong-Ming
2014-01-01
Purpose This study investigated novel kinematic performance parameters to understand regulation by visual feedback (VF) of the reaching hand on the grasp and transport components during the reach-to-pinch maneuver. Conventional metrics often signify discrete movement features to postulate sensory-based control effects (e.g., time for maximum velocity to signify feedback delay). The presented metrics of this study were devised to characterize relative vision-based control of the sub-movements across the entire maneuver. Methods Movement performance was assessed according to reduced variability and increased efficiency of kinematic trajectories. Variability was calculated as the standard deviation about the observed mean trajectory for a given subject and VF condition across kinematic derivatives for sub-movements of inter-pad grasp (distance between thumb and index finger-pads; relative orientation of finger-pads) and transport (distance traversed by wrist). A Markov analysis then examined the probabilistic effect of VF on which movement component exhibited higher variability over phases of the complete maneuver. Jerk-based metrics of smoothness (minimal jerk) and energy (integrated jerk-squared) were applied to indicate total movement efficiency with VF. Results/Discussion The reductions in grasp variability metrics with VF were significantly greater (p<0.05) compared to transport for velocity, acceleration, and jerk, suggesting separate control pathways for each component. The Markov analysis indicated that VF preferentially regulates grasp over transport when continuous control is modeled probabilistically during the movement. Efficiency measures demonstrated VF to be more integral for early motor planning of grasp than transport in producing greater increases in smoothness and trajectory adjustments (i.e., jerk-energy) early compared to late in the movement cycle. Conclusions These findings demonstrate the greater regulation by VF on kinematic performance of grasp compared to transport and how particular features of this relativistic control occur continually over the maneuver. Utilizing the advanced performance metrics presented in this study facilitated characterization of VF effects continuously across the entire movement in corroborating the notion of separate control pathways for each component. PMID:24968371
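The jerk-based efficiency measures mentioned here (minimum jerk as a smoothness reference, integrated squared jerk as energy) can be computed directly from a sampled trajectory. The sketch below does this for a synthetic 1 s minimum-jerk reach with and without added noise; the sampling rate, amplitude, and normalization are assumptions for illustration, not the study's processing pipeline.

```python
import numpy as np

def jerk_metrics(position, dt):
    """Integrated squared jerk (jerk energy) and a dimensionless smoothness cost.

    position : array of shape (T,), sampled 1-D trajectory (e.g., wrist displacement in m)
    dt       : sampling interval in seconds
    """
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    jerk_energy = np.trapz(jerk ** 2, dx=dt)
    duration = dt * (len(position) - 1)
    amplitude = position[-1] - position[0]
    # Normalized jerk cost; larger values indicate a less smooth movement.
    smoothness_cost = jerk_energy * duration ** 5 / max(amplitude ** 2, 1e-12)
    return jerk_energy, smoothness_cost

# Synthetic 1 s reach sampled at 200 Hz: a minimum-jerk profile plus small sensor noise
dt = 0.005
t = np.arange(0, 1 + dt, dt)
tau = t / t[-1]
minimum_jerk = 0.2 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)   # 0.2 m reach
noisy = minimum_jerk + np.random.default_rng(1).normal(0, 1e-4, size=t.size)

for label, traj in [("minimum-jerk", minimum_jerk), ("noisy", noisy)]:
    energy, cost = jerk_metrics(traj, dt)
    print(f"{label}: jerk energy={energy:.3f}, smoothness cost={cost:.1f}")
```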
Holistic energy system modeling combining multi-objective optimization and life cycle assessment
NASA Astrophysics Data System (ADS)
Rauner, Sebastian; Budzinski, Maik
2017-12-01
Making the global energy system more sustainable has emerged as a major societal concern and policy objective. This transition comes with various challenges and opportunities for a sustainable evolution affecting most of the UN’s Sustainable Development Goals. We therefore propose broadening the current metrics for sustainability in the energy system modeling field by using industrial ecology techniques to account for a conclusive set of indicators. This is pursued by including a life cycle based sustainability assessment into an energy system model considering all relevant products and processes of the global supply chain. We identify three pronounced features: (i) the low-hanging fruit of impact mitigation requiring manageable economic effort; (ii) embodied emissions of renewables cause increasing spatial redistribution of impact from direct emissions, the place of burning fuel, to indirect emissions, the location of the energy infrastructure production; (iii) certain impact categories, in which more overall sustainable systems perform worse than the cost minimal system, require a closer look. In essence, this study makes the case for future energy system modeling to include the increasingly important global supply chain and broaden the metrics of sustainability further than cost and climate change relevant emissions.
Relevance of emissions timing in biofuel greenhouse gases and climate impacts.
Schwietzke, Stefan; Griffin, W Michael; Matthews, H Scott
2011-10-01
Employing life cycle greenhouse gas (GHG) emissions as a key performance metric in energy and environmental policy may underestimate actual climate change impacts. Emissions released early in the life cycle cause greater cumulative radiative forcing (CRF) over the next decades than later emissions. Some indicate that ignoring emissions timing in traditional biofuel GHG accounting overestimates the effectiveness of policies supporting corn ethanol by 10-90% due to early land use change (LUC) induced GHGs. We use an IPCC climate model to (1) estimate absolute CRF from U.S. corn ethanol and (2) quantify an emissions timing factor (ETF), which is masked in the traditional GHG accounting. In contrast to earlier analyses, ETF is only 2% (5%) over 100 (50) years of impacts. Emissions uncertainty itself (LUC, fuel production period) is 1-2 orders of magnitude higher, which dwarfs the timing effect. From a GHG accounting perspective, emissions timing adds little to our understanding of the climate impacts of biofuels. However, policy makers should recognize that ETF could significantly decrease corn ethanol's probability of meeting the 20% GHG reduction target in the 2007 Energy Independence and Security Act. The added uncertainty of potentially employing more complex emissions metrics is yet to be quantified.
Geothermal Life Cycle Calculator
Sullivan, John
2014-03-11
This calculator is a handy tool for interested parties to estimate two key life cycle metrics, fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE’s Geothermal Technologies office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others on the two metrics just mentioned. Estimates of variations in the results are also available to the user.
Work Loop and Ashby Charts of Active Materials
2013-10-17
constructed to show performance metrics (e.g., actuation stress, actuation strain, self-healing) of iron-loaded compositions compared to other active...24,000 cycles at 80 Hz without change in strain characteristics. Self-healing of Magpol prepared using ferrite nanoparticles of different Curie...silicone) was selected as the polymer matrix due to its good flexibility and reasonable environmental stability. Self-healing Magpol was synthesized by
Models and metrics for software management and engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.
1988-01-01
This paper attempts to characterize and present a state of the art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity, and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mäkelä, P.; Akiyama, S.; Xie, H.
2015-06-10
We studied the coronal mass ejection (CME) height at the onset of 59 metric type II radio bursts associated with major solar energetic particle (SEP) events, excluding ground level enhancements (GLEs), during solar cycles 23 and 24. We calculated CME heights using a simple flare-onset method used by Gopalswamy et al. to estimate CME heights at the metric type II onset for cycle 23 GLEs. We found the mean CME height for non-GLE events (1.72 R⊙) to be ∼12% greater than that (1.53 R⊙) for cycle 23 GLEs. The difference could be caused by more impulsive acceleration of the GLE-associated CMEs. For cycle 24 non-GLE events, we compared the CME heights obtained using the flare-onset method and the three-dimensional spherical-shock fitting method and found the correlation to be good (CC = 0.68). We found the mean CME height for cycle 23 non-GLE events (1.79 R⊙) to be greater than that for cycle 24 non-GLE events (1.58 R⊙), but statistical tests do not definitely reject the possibility of coincidence. We suggest that the lower formation height of the shocks during cycle 24 indicates a change in the Alfvén speed profile because solar magnetic fields are weaker and plasma density levels are closer to the surface than usual during cycle 24. We also found that complex type III bursts showing diminution of type III emission in the 7–14 MHz frequency range are more likely associated with events with a CME height at the type II onset above 2 R⊙, supporting suggestions that the CME/shock structure causes the feature.
Testing, Requirements, and Metrics
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William
1998-01-01
The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provides real-time insight into the testing of requirements and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
Radiology operations: what you don't know could be costing you millions.
Joffe, Sam; Drew, Donna; Bansal, Manju; Hase, Michael
2007-01-01
Rapid growth in advanced imaging procedures has left hospital radiology departments struggling to keep up with demand, resulting in loss of patients to facilities that can offer service more quickly. While the departments appear to be working at full capacity, an operational analysis of over 400 hospital radiology departments in the US by GE Healthcare has determined that, paradoxically, many departments are in fact underutilized and operating far below their potential capacity. While CT cycle time in hospitals that were studied averaged 35 minutes, top performing hospitals operated the same equipment at a cycle time of 15 minutes, yielding approximately double the throughput volume. Factors leading to suboptimal performance include accounting metrics that mask true performance, leadership focus on capital investment rather than operations, understaffing, underscheduling, poorly aligned incentives, a fragmented view of operations, lack of awareness of latent opportunities, and lack of sufficient skills and processes to implement improvements. The study showed how modest investments in radiology operations can dramatically improve access to services and profitability.
Sensitivity Analysis and Optimization of the Nuclear Fuel Cycle: A Systematic Approach
NASA Astrophysics Data System (ADS)
Passerini, Stefano
For decades, nuclear energy development was based on the expectation that recycling of the fissionable materials in the used fuel from today's light water reactors into advanced (fast) reactors would be implemented as soon as technically feasible in order to extend the nuclear fuel resources. More recently, arguments have been made for deployment of fast reactors in order to reduce the amount of higher actinides, hence the longevity of radioactivity, in the materials destined for a geologic repository. The cost of the fast reactors, together with concerns about the proliferation of the technology of extraction of plutonium from used LWR fuel as well as the large investments in construction of reprocessing facilities, have been the basis for arguments to defer the introduction of recycling technologies in many countries including the US. In this thesis, the impacts of alternative reactor technologies on the fuel cycle are assessed. Additionally, metrics to characterize the fuel cycles and systematic approaches to using them to optimize the fuel cycle are presented. The fuel cycle options of the 2010 MIT fuel cycle study are re-examined in light of the expected slower rate of growth in nuclear energy today, using the CAFCA (Code for Advanced Fuel Cycle Analysis). The Once Through Cycle (OTC) is considered as the baseline case, while advanced technologies with fuel recycling characterize the alternative fuel cycle options available in the future. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. Additional fuel cycle scenarios presented for the first time in this work assume the deployment of innovative recycling reactor technologies such as the Reduced Moderation Boiling Water Reactors and Uranium-235 initiated Fast Reactors. A sensitivity study focused on system and technology parameters of interest has been conducted to test the robustness of the conclusions presented in the MIT Fuel Cycle Study. These conclusions are found to still hold, even when considering alternative technologies and different sets of simulation assumptions. Additionally, a first-of-a-kind optimization scheme for nuclear fuel cycle analysis is proposed, and the applications of such an optimization are discussed. Optimization metrics of interest for different stakeholders in the fuel cycle (economics, fuel resource utilization, high level waste, transuranics/proliferation management, and environmental impact) are utilized for two different optimization techniques: a linear one and a stochastic one. Stakeholder elicitation provided sets of relative weights for the identified metrics appropriate to each stakeholder group, which were then successfully used to arrive at optimum fuel cycle configurations for recycling technologies. The stochastic optimization tool, based on a genetic algorithm, was used to identify non-inferior solutions according to Pareto's dominance approach to optimization. The main tradeoff for fuel cycle optimization was found to be between economics and most of the other identified metrics. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)
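Two ingredients named in this abstract, a weighted-sum score built from stakeholder-elicited weights and a Pareto (non-dominance) filter, can be stated precisely. The sketch below applies both to hypothetical normalized metrics for three fuel cycle options; the option values and the weighting are invented for illustration and are not results from the thesis.

```python
import numpy as np

def weighted_score(metrics, weights):
    """Linear (weighted-sum) score; metrics are normalized so that lower is better."""
    return float(np.dot(metrics, weights))

def pareto_front(options):
    """Return indices of non-dominated options (all metrics lower-is-better)."""
    front = []
    for i, mi in enumerate(options):
        dominated = any(np.all(mj <= mi) and np.any(mj < mi)
                        for j, mj in enumerate(options) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical normalized metrics per fuel cycle option:
# [cost, uranium use, HLW volume, TRU inventory, environmental impact]
options = np.array([
    [0.40, 1.00, 1.00, 1.00, 0.90],   # once-through
    [0.55, 0.85, 0.80, 0.70, 0.80],   # limited LWR recycling
    [0.75, 0.45, 0.40, 0.30, 0.60],   # full recycling in fast reactors
])
stakeholder_weights = np.array([0.4, 0.15, 0.15, 0.15, 0.15])   # one illustrative stakeholder weighting

scores = [weighted_score(o, stakeholder_weights) for o in options]
print("weighted scores:", np.round(scores, 3), "-> best option:", int(np.argmin(scores)))
print("Pareto-optimal options:", pareto_front(options))
```

With these invented numbers all three options are Pareto-optimal, which mirrors the stated tradeoff between economics and most of the other metrics: no single option wins on every axis, so the weighting supplied by each stakeholder group decides the outcome.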
How jet lag impairs Major League Baseball performance.
Song, Alex; Severini, Thomas; Allada, Ravi
2017-02-07
Laboratory studies have demonstrated that circadian clocks align physiology and behavior to 24-h environmental cycles. Examination of athletic performance has been used to discern the functions of these clocks in humans outside of controlled settings. Here, we examined the effects of jet lag, that is, travel that shifts the alignment of 24-h environmental cycles relative to the endogenous circadian clock, on specific performance metrics in Major League Baseball. Accounting for potential differences in home and away performance, travel direction, and team confounding variables, we observed that jet-lag effects were largely evident after eastward travel with very limited effects after westward travel, consistent with the >24-h period length of the human circadian clock. Surprisingly, we found that jet lag impaired major parameters of home-team offensive performance, for example, slugging percentage, but did not similarly affect away-team offensive performance. On the other hand, jet lag impacted both home and away defensive performance. Remarkably, the vast majority of these effects for both home and away teams could be explained by a single measure, home runs allowed. Rather than uniform effects, these results reveal surprisingly specific effects of circadian misalignment on athletic performance under natural conditions.
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geologic repository concepts. A proof-of-principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
NASA Technical Reports Server (NTRS)
Lee, P. J.
1985-01-01
For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.
Effects of subsampling of passive acoustic recordings on acoustic metrics.
Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse
2015-07-01
Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.
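A minimal simulation sketch of the trade-off described above, assuming an invented bout-structured vocalization model with per-minute call counts; the duty cycles compared (10% with long versus short listening periods, and 2%) are illustrative rather than the study's exact sampling strategies.

    # Hedged sketch of how a duty-cycled recording scheme can bias call-rate and
    # daily-presence estimates; the vocalization model and duty cycles are invented.
    import random

    random.seed(1)
    MIN_PER_DAY = 24 * 60

    def simulate_call_minutes(n_days, bout_prob=0.001, bout_len=120, calls_per_min=2):
        """Calls clustered in song bouts: bouts start with a small per-minute probability."""
        days = []
        for _ in range(n_days):
            calls = [0] * MIN_PER_DAY
            m = 0
            while m < MIN_PER_DAY:
                if random.random() < bout_prob:
                    for k in range(m, min(m + bout_len, MIN_PER_DAY)):
                        calls[k] = calls_per_min
                    m += bout_len
                else:
                    m += 1
            days.append(calls)
        return days

    def subsample(day, listen_min, cycle_min):
        """Keep only the first listen_min minutes of every cycle_min-minute cycle."""
        return [c for m, c in enumerate(day) if m % cycle_min < listen_min]

    days = simulate_call_minutes(n_days=30)
    true_presence = sum(1 for d in days if sum(d) > 0)
    true_rate = sum(sum(d) for d in days) / len(days)

    for listen, cycle in [(6, 60), (1, 10), (1, 50)]:   # 10%, 10% (short/frequent), 2% duty cycles
        sub = [subsample(d, listen, cycle) for d in days]
        presence = sum(1 for d in sub if sum(d) > 0)
        rate = sum(sum(d) for d in sub) / len(sub) * (cycle / listen)  # scale back to full day
        print(f"duty {listen}/{cycle} min: presence detected on {presence}/{true_presence} vocal days; "
              f"estimated rate {rate:.0f} vs true {true_rate:.0f} calls/day")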
The contemporary cement cycle of the United States
Kapur, A.; Van Oss, H. G.; Keoleian, G.; Kesler, S.E.; Kendall, A.
2009-01-01
A country-level stock and flow model for cement, an important construction material, was developed based on a material flow analysis framework. Using this model, the contemporary cement cycle of the United States was constructed by analyzing production, import, and export data for different stages of the cement cycle. The United States currently supplies approximately 80% of its cement consumption through domestic production and the rest is imported. The average annual net addition of in-use new cement stock over the period 2000-2004 was approximately 83 million metric tons and amounts to 2.3 tons per capita of concrete. Nonfuel carbon dioxide emissions (42 million metric tons per year) from the calcination phase of cement manufacture account for 62% of the total 68 million tons per year of cement production residues. The end-of-life cement discards are estimated to be 33 million metric tons per year, of which between 30% and 80% is recycled. A significant portion of the infrastructure in the United States is reaching the end of its useful life and will need to be replaced or rehabilitated; this could require far more cement than might be expected from economic forecasts of demand for cement. © 2009 Springer Japan.
Samani, Afshin; Srinivasan, Divya; Mathiassen, Svend Erik; Madeleine, Pascal
2017-02-01
The spatio-temporal distribution of muscle activity has been suggested to be a determinant of fatigue development. Pursuing this hypothesis, we investigated the pattern of muscular activity in the shoulder and arm during a repetitive dynamic task performed until participants' rating of perceived exertion reached 8 on Borg's CR-10 scale. We collected high-density surface electromyogram (HD-EMG) over the upper trapezius, as well as bipolar EMG from biceps brachii, triceps brachii, deltoideus anterior, serratus anterior, upper and lower trapezius from 21 healthy women. Root-mean-square (RMS) and mean power frequency (MNF) were calculated for all EMG signals. The barycenter of RMS values over the HD-EMG grid was also determined, as well as normalized mutual information (NMI) for each pair of muscles. Cycle-to-cycle variability of these metrics was also assessed. With time, EMG RMS increased for most of the muscles, and MNF decreased. Trapezius activity became higher on the lateral side than on the medial side of the HD-EMG grid and the barycenter moved in a lateral direction. NMI between muscle pairs increased with time while its variability decreased. The variability of the metrics during the initial 10 % of task performance was not associated with the time to task termination. Our results suggest that the considerable variability in force and posture contained in the dynamic task per se masks any possible effects of differences between subjects in initial motor variability on the rate of fatigue development.
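The amplitude, frequency, and spatial-distribution measures named above (RMS, mean power frequency, and the RMS barycenter over the HD-EMG grid) can be sketched as follows; the grid dimensions, sampling rate, and signals are synthetic stand-ins, not the study's data or processing pipeline.

    # Hedged sketch of the EMG summary measures named above (RMS amplitude, mean power
    # frequency, and the RMS barycenter of a high-density grid); all signals are synthetic.
    import numpy as np

    fs = 1024                          # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    def mean_power_frequency(x, fs):
        """MNF: power-weighted average frequency of the signal spectrum."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return np.sum(freqs * spectrum) / np.sum(spectrum)

    # Synthetic 5x13 HD-EMG grid: band-limited noise, amplitude increasing laterally.
    rows, cols = 5, 13
    rng = np.random.default_rng(0)
    grid = np.empty((rows, cols, len(t)))
    for r in range(rows):
        for c in range(cols):
            gain = 1.0 + 0.1 * c       # larger activity toward the lateral columns
            grid[r, c] = gain * rng.normal(size=len(t))

    rms_map = np.array([[rms(grid[r, c]) for c in range(cols)] for r in range(rows)])

    # Barycenter of the RMS map (weighted average of electrode coordinates).
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    bary_row = np.sum(rr * rms_map) / np.sum(rms_map)
    bary_col = np.sum(cc * rms_map) / np.sum(rms_map)

    print(f"grid RMS barycenter at (row, col) = ({bary_row:.2f}, {bary_col:.2f})")
    print(f"example channel MNF = {mean_power_frequency(grid[0, 0], fs):.1f} Hz")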
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human-performed therapy assessment, and it can increase the adherence to prescribed therapy plans and reduce healthcare costs.
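As an example of the model-less quantitative metrics listed above, the sketch below computes a root-mean-square distance between a patient-performed motion and a reference motion; the trajectories are synthetic stand-ins for captured joint sequences of equal length.

    # Hedged sketch of one model-less metric from the taxonomy above: the root-mean-square
    # distance between a patient-performed motion and a reference motion.
    import numpy as np

    def rms_distance(patient, reference):
        """RMS deviation across all time samples and recorded dimensions (lower is better)."""
        patient, reference = np.asarray(patient), np.asarray(reference)
        return np.sqrt(np.mean((patient - reference) ** 2))

    t = np.linspace(0, 1, 100)
    reference = np.column_stack([np.sin(np.pi * t), np.cos(np.pi * t)])   # idealized exercise
    good_rep  = reference + np.random.default_rng(0).normal(0, 0.02, reference.shape)
    poor_rep  = 0.7 * reference + np.random.default_rng(1).normal(0, 0.10, reference.shape)

    print(f"good repetition: RMS distance = {rms_distance(good_rep, reference):.3f}")
    print(f"poor repetition: RMS distance = {rms_distance(poor_rep, reference):.3f}")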
Determination of Duty Cycle for Energy Storage Systems in a PV Smoothing Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoenwald, David A.; Ellison, James
This report supplements the document, "Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems," issued in a revised version in April 2016 (see [4]), which will include the photovoltaic (PV) smoothing application for an energy storage system (ESS). This report provides the background and documentation associated with the determination of a duty cycle for an ESS operated in a PV smoothing application for the purpose of measuring and expressing ESS performance in accordance with the ESS performance protocol. ACKNOWLEDGEMENTS The authors gratefully acknowledge the support of Dr. Imre Gyuk, program manager for the DOE Energy Storage Systems Program. The authors would also like to express their appreciation to all the stakeholders who participated as members of the PV Smoothing Subgroup. Without their thoughtful input and recommendations, the definitions, metrics, and duty cycle provided in this report would not have been possible. A complete listing of members of the PV Smoothing Subgroup appears in the first chapter of this report. Special recognition should go to the staffs at Pacific Northwest National Laboratory (PNNL) and Sandia National Laboratories (SNL) in collaborating on this effort. In particular, Mr. David Conover and Dr. Vish Viswanathan of PNNL and Dr. Summer Ferreira of SNL were especially helpful in their suggestions for the determination of a duty cycle for the PV Smoothing application.
Improving early cycle economic evaluation of diagnostic technologies.
Steuten, Lotte M G; Ramsey, Scott D
2014-08-01
The rapidly increasing range and expense of new diagnostics compels consideration of a different, more proactive approach to health economic evaluation of diagnostic technologies. Early cycle economic evaluation is a decision analytic approach to evaluate technologies in development so as to increase the return on investment as well as patient and societal impact. This paper describes examples of 'early cycle economic evaluations' as applied to diagnostic technologies and highlights challenges in their real-time application. It shows that especially in the field of diagnostics, with rapid technological developments and a changing regulatory climate, early cycle economic evaluation can have a guiding role in improving the efficiency of the diagnostics innovation process. In the next five years, attention will move beyond the methodological and analytic challenges of early cycle economic evaluation towards the challenge of effectively applying it to improve diagnostic research and development and patient value. Future work in this area should therefore be 'strong on principles and soft on metrics', that is, relying on the metrics that resonate most clearly with the various decision makers in this field.
Telescoping Solar Array Concept for Achieving High Packaging Efficiency
NASA Technical Reports Server (NTRS)
Mikulas, Martin; Pappa, Richard; Warren, Jay; Rose, Geoff
2015-01-01
Lightweight, high-efficiency solar arrays are required for future deep space missions using high-power Solar Electric Propulsion (SEP). Structural performance metrics for state-of-the art 30-50 kW flexible blanket arrays recently demonstrated in ground tests are approximately 40 kW/cu m packaging efficiency, 150 W/kg specific power, 0.1 Hz deployed stiffness, and 0.2 g deployed strength. Much larger arrays with up to a megawatt or more of power and improved packaging and specific power are of interest to mission planners for minimizing launch and life cycle costs of Mars exploration. A new concept referred to as the Compact Telescoping Array (CTA) with 60 kW/cu m packaging efficiency at 1 MW of power is described herein. Performance metrics as a function of array size and corresponding power level are derived analytically and validated by finite element analysis. Feasible CTA packaging and deployment approaches are also described. The CTA was developed, in part, to serve as a NASA reference solar array concept against which other proposed designs of 50-1000 kW arrays for future high-power SEP missions could be compared.
Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.
Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P
2018-05-15
The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantifications are still lacking, especially in heterogeneous ecosystems such as Savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. To account for these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions in the central Brazil region, which is a prominent example of a Savanna with significant carbon stocks that has been undergoing extensive land cover conversions. Here, phenological metrics from the 2014/2015 season were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach made it possible to accurately approximate the original Landsat ETM+ and OLI EVI values and to subsequently derive annual phenological metrics. Random Forest model performances varied between the three study areas, with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote-sensing-based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous Savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
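The regression setup described above can be sketched as follows, assuming scikit-learn is available; the phenological predictors and carbon values are synthetic, so only the workflow (fit, predict, report RMSE and relative RMSE, inspect variable influence) mirrors the abstract.

    # Hedged sketch: phenological metrics as predictors of aboveground carbon with a
    # Random Forest, reporting RMSE and relative RMSE. All data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 300
    # Invented phenological metrics per pixel: seasonal minimum, maximum, amplitude, season length.
    X = rng.uniform([0.1, 0.4, 0.1, 100], [0.4, 0.9, 0.6, 250], size=(n, 4))
    # Synthetic aboveground carbon (t/ha), loosely driven by the seasonal min/max plus noise.
    y = 2.0 + 8.0 * X[:, 0] + 6.0 * X[:, 1] + rng.normal(0, 1.5, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    pred = model.predict(X_test)
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"RMSE = {rmse:.2f} t/ha, relative RMSE = {100 * rmse / y_test.mean():.0f}%")
    print("feature importances (min, max, amplitude, season length):",
          np.round(model.feature_importances_, 2))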
Modelling cephalopod-inspired pulsed-jet locomotion for underwater soft robots.
Renda, F; Giorgio-Serchi, F; Boyer, F; Laschi, C
2015-09-28
Cephalopods (i.e., octopuses and squids) are being looked upon as a source of inspiration for the development of unmanned underwater vehicles. One kind of cephalopod-inspired soft-bodied vehicle developed by the authors entails a hollow, elastic shell capable of performing a routine of recursive ingestion and expulsion of discrete slugs of fluid, which enables the vehicle to propel itself in water. The vehicle's performance was found to depend largely on the elastic response of the shell to the actuation cycle, thus motivating the development of a coupled propulsion-elastodynamics model of such vehicles. The model is developed and validated against a set of experimental results obtained with the existing cephalopod-inspired prototypes. A metric of the efficiency of the propulsion routine which accounts for the elastic energy contribution during the ingestion/expulsion phases of the actuation is formulated. Demonstrations of the use of this model to estimate the efficiency of the propulsion routine for various pulsation frequencies and different vehicle morphologies are provided. This metric of efficiency, employed in association with the present elastodynamics model, provides a useful tool for performing a priori energetic analyses that encompass both the design specifications and the actuation pattern of this new kind of underwater vehicle.
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Kaemming, Thomas A.
2012-01-01
A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been ...
NASA Astrophysics Data System (ADS)
Chivukula, V. Keshav; McGah, Patrick; Prisco, Anthony; Beckman, Jennifer; Mokadam, Nanush; Mahr, Claudius; Aliseda, Alberto
2016-11-01
Flow in the aortic vasculature may impact stroke risk in patients with left ventricular assist devices (LVAD) due to severely altered hemodynamics. Patient-specific 3D models of the aortic arch and great vessels were created with an LVAD outflow graft at 45, 60 and 90° from the centerline of the ascending aorta, in order to understand the effect of surgical placement on hemodynamics and thrombotic risk. Intermittent aortic valve opening (once every five cardiac cycles) was simulated, and the impact of this residual native output was investigated for its potential to wash out stagnant flow in the aortic root region. Unsteady CFD simulations with patient-specific boundary conditions were performed. Particle tracking for 10 cardiac cycles was used to determine platelet residence times and shear stress histories. Thrombosis risk was assessed by a combination of Eulerian and Lagrangian metrics and a newly developed thrombogenic potential metric. Results show a strong influence of LVAD outflow graft angle on hemodynamics in the ascending aorta and consequently on stroke risk, with a highly positive impact of aortic valve opening, even at low frequencies. Optimization of LVAD implantation and management strategies based on patient-specific simulations to minimize stroke risk will be presented.
Texture metric that predicts target detection performance
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.
2015-12-01
Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset is presented. A comparison is also made with four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
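A generic gray-level co-occurrence "energy" statistic, of the family the abstract refers to, can be computed as below; this is an illustrative GLCM-based measure, not necessarily the exact GLCE formulation used in the paper, and the image patches are synthetic.

    # Hedged sketch of a gray-level co-occurrence "energy" statistic (sum of squared
    # co-occurrence probabilities) for a small image patch.
    import numpy as np

    def glcm(image, levels=8, offset=(0, 1)):
        """Normalized gray-level co-occurrence matrix for one pixel offset."""
        quant = (image * levels / (image.max() + 1e-9)).astype(int).clip(0, levels - 1)
        dr, dc = offset
        P = np.zeros((levels, levels))
        rows, cols = quant.shape
        for r in range(max(0, -dr), rows - max(0, dr)):
            for c in range(max(0, -dc), cols - max(0, dc)):
                P[quant[r, c], quant[r + dr, c + dc]] += 1
        return P / P.sum()

    def glcm_energy(image):
        P = glcm(image)
        return np.sum(P ** 2)          # high for uniform texture, low for cluttered texture

    rng = np.random.default_rng(0)
    smooth = np.tile(np.linspace(0, 1, 32), (32, 1))      # low-clutter patch
    clutter = rng.random((32, 32))                        # high-clutter patch
    print(f"energy(smooth)  = {glcm_energy(smooth):.3f}")
    print(f"energy(clutter) = {glcm_energy(clutter):.3f}")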
LIFE CYCLE IMPACT ASSESSMENT FOR THE BUILDING DESIGN AND CONSTRUCTION INDUSTRY
The most effective way to achieve long-term environmental results is through the use of a consistent set of metrics within a decision-making framework. This paper describes the role of Life Cycle Impact Assessment (LCIA) and details its use within two tools available to this indu...
Menaspà, Paolo; Abbiss, Chris R
2017-01-01
Over the past few decades the ability to capture real-time data from road cyclists has drastically improved. Given the increasing pressure for improved transparency and openness, there has been an increase in publication of cyclists' physiological and performance data. Recently, it has been suggested that such performance biometrics may be used to strengthen the sensitivity and applicability of the Athlete Biological Passport (ABP) and aid in the fight against doping. This is an interesting concept which has merit, although there are several important factors that need to be considered. These factors include accuracy of the data collected and validity (and reliability) of the subsequent performance modeling. In order to guarantee high quality standards, the implementation of well-structured Quality-Systems within sporting organizations should be considered, and external certifications may be required. Various modeling techniques have been developed, many of which are based on fundamental intensity/time relationships. These models have increased our understanding of performance but are currently limited in their application, for example due to the largely unaccounted-for effects of environmental factors such as heat and altitude. In conclusion, in order to use power data as a performance biometric to be integrated in the biological passport, a number of actions must be taken to ensure accuracy of the data and better understand road cycling performance in the field. This article aims to outline considerations in the quantification of cycling performance, also presenting an alternative method (i.e., monitoring race results) to allow for determination of unusual performance improvements.
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Logistics, Costs, and GHG Impacts of Utility-Scale Co-Firing with 20% Biomass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichol, Corrie Ian
This study analyzes the possibility that biopower in the U.S. is a cost-competitive option to significantly reduce greenhouse gas emissions. In 2009, net greenhouse gas (GHG) emitted in the United States was equivalent to 5,618 million metric tons CO2, up 5.6% from 1990 (EPA 2011). Coal-fired power generation accounted for 1,748 million metric tons of this total. Intuitively, life-cycle CO2 emissions in the power sector could be reduced by substituting renewable biomass for coal. If just 20% of the coal combusted in 2009 had been replaced with biomass, CO2 emissions would have been reduced by 350 million metric tons, or about 6% of net annual GHG emission. This would have required approximately 225 million tons of dry biomass. Such an ambitious fuel substitution would require development of a biomass feedstock production and supply system tantamount to coal. This material would need to meet stringent specifications to ensure reliable conveyance to boiler burners, efficient combustion, and no adverse impact on heat transfer surfaces and flue gas cleanup operations. Therefore, this report addresses the potential cost/benefit tradeoffs of co-firing 20% specification-qualified biomass (on an energy content basis) in large U.S. coal-fired power plants. The dependence and sensitivity of feedstock cost on source of material, location, supply distance, and demand pressure was established. Subsequently, the dependence of levelized cost of electricity (LCOE) on feedstock costs, power plant feed system retrofit, and impact on boiler performance was determined. Overall life-cycle assessment (LCA) of greenhouse gas emissions savings were next evaluated and compared to wind and solar energy to benchmark the leading alternatives for meeting renewable portfolio standards (or RPS).
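The headline substitution figures quoted above follow from simple arithmetic on the cited totals, as the sketch below shows; the numbers are those quoted in the abstract, not recomputed from EPA inventories.

    # Hedged arithmetic check of the 20% co-firing scenario, using the quoted 2009 figures.
    coal_co2_mt     = 1748          # million metric tons CO2 from coal-fired power, 2009
    net_us_ghg_mt   = 5618          # million metric tons CO2-equivalent, net US GHG, 2009
    cofire_fraction = 0.20          # share of coal energy replaced with biomass

    avoided_mt = cofire_fraction * coal_co2_mt
    print(f"avoided CO2 ~ {avoided_mt:.0f} million metric tons")             # ~350
    print(f"share of net US GHG ~ {100 * avoided_mt / net_us_ghg_mt:.1f}%")  # ~6%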
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, Fujun; Jeudy, Jean; D’Souza, Warren
Purpose: To investigate the incorporation of pretherapy regional ventilation function in predicting radiation fibrosis (RF) in stage III non-small cell lung cancer (NSCLC) patients treated with concurrent thoracic chemoradiotherapy. Methods: Thirty-seven patients with stage III NSCLC were retrospectively studied. Patients received one cycle of cisplatin–gemcitabine, followed by two to three cycles of cisplatin–etoposide concurrently with involved-field thoracic radiotherapy (46–66 Gy; 2 Gy/fraction). Pretherapy regional ventilation images of the lung were derived from 4D computed tomography via a density change–based algorithm with mass correction. In addition to the conventional dose–volume metrics (V20, V30, V40, and mean lung dose), dose–function metrics (fV20, fV30, fV40, and functional mean lung dose) were generated by combining regional ventilation and radiation dose. A new class of metrics was derived and referred to as dose–subvolume metrics (sV20, sV30, sV40, and subvolume mean lung dose); these were defined as the conventional dose–volume metrics computed on the functional lung. Area under the receiver operating characteristic curve (AUC) values and logistic regression analyses were used to evaluate these metrics in predicting hallmark characteristics of RF (lung consolidation, volume loss, and airway dilation). Results: AUC values for the dose–volume metrics in predicting lung consolidation, volume loss, and airway dilation were 0.65–0.69, 0.57–0.70, and 0.69–0.76, respectively. The respective ranges for dose–function metrics were 0.63–0.66, 0.61–0.71, and 0.72–0.80, and for dose–subvolume metrics were 0.50–0.65, 0.65–0.75, and 0.73–0.85. Using an AUC value of 0.70 as the cutoff suggested that at least one of each type of metric (dose–volume, dose–function, dose–subvolume) was predictive for volume loss and airway dilation, whereas lung consolidation could not be accurately predicted by any of the metrics. Logistic regression analyses showed that dose–function and dose–subvolume metrics were significant (P values ≤ 0.02) in predicting volume loss and airway dilation. Likelihood ratio tests showed that when combining dose–function and/or dose–subvolume metrics with dose–volume metrics, the achieved improvements of prediction accuracy on volume loss and airway dilation were significant (P values ≤ 0.04). Conclusions: The authors' results demonstrated that the inclusion of regional ventilation function improved accuracy in predicting RF. In particular, dose–subvolume metrics provided a promising method for preventing radiation-induced pulmonary complications.
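The distinction between dose–volume and dose–function metrics can be illustrated with a short sketch over voxel-wise dose and ventilation arrays; the arrays, the 20 Gy threshold, and the normalization choices below are illustrative assumptions, not the authors' implementation.

    # Hedged sketch of conventional dose-volume metrics versus ventilation-weighted
    # dose-function metrics computed on synthetic voxel data.
    import numpy as np

    rng = np.random.default_rng(0)
    dose = rng.uniform(0, 60, size=10000)          # dose per lung voxel (Gy), synthetic
    ventilation = rng.gamma(2.0, 0.5, size=10000)  # relative regional ventilation, synthetic
    ventilation /= ventilation.sum()               # normalize to fractional function per voxel

    # Conventional dose-volume metrics (all lung voxels weighted equally).
    V20 = np.mean(dose >= 20) * 100                # % of lung volume receiving >= 20 Gy
    MLD = dose.mean()                              # mean lung dose (Gy)

    # Dose-function metrics (voxels weighted by their ventilation contribution).
    fV20 = np.sum(ventilation[dose >= 20]) * 100   # % of lung *function* receiving >= 20 Gy
    fMLD = np.sum(ventilation * dose)              # ventilation-weighted mean dose (Gy)

    print(f"V20 = {V20:.1f}%,  MLD = {MLD:.1f} Gy")
    print(f"fV20 = {fV20:.1f}%, functional MLD = {fMLD:.1f} Gy")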
Metric-driven harm: an exploration of unintended consequences of performance measurement.
Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck
2013-11-01
Performance measurement is an increasingly common element of the US health care system. Although performance metrics are typically a proxy for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.
Performance assessment in brain-computer interface-based augmentative and alternative communication
2013-01-01
A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
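For the Level 1 throughput metrics discussed above, the widely used Wolpaw formulation of information transfer rate can be written as a short function; the target count, accuracy, and selection rate in the example are illustrative values, not results from the review.

    # Hedged sketch of the Wolpaw information transfer rate (ITR) for N equiprobable targets.
    import math

    def wolpaw_bits_per_selection(n_targets, accuracy):
        """Bits conveyed per selection given classification accuracy P over N targets."""
        N, P = n_targets, accuracy
        if P >= 1:
            return math.log2(N)
        if P <= 0:
            return 0.0   # simplification for the degenerate case
        return math.log2(N) + P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))

    def itr_bits_per_minute(n_targets, accuracy, selections_per_minute):
        return wolpaw_bits_per_selection(n_targets, accuracy) * selections_per_minute

    # Example: a 6-target speller at 85% accuracy making 4 selections per minute.
    print(f"{itr_bits_per_minute(6, 0.85, 4):.2f} bits/min")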
NASA Astrophysics Data System (ADS)
Feng, Dawei; Lei, Ting; Lukatskaya, Maria R.; Park, Jihye; Huang, Zhehao; Lee, Minah; Shaw, Leo; Chen, Shucheng; Yakovenko, Andrey A.; Kulkarni, Ambarish; Xiao, Jianping; Fredrickson, Kurt; Tok, Jeffrey B.; Zou, Xiaodong; Cui, Yi; Bao, Zhenan
2018-01-01
For miniaturized capacitive energy storage, volumetric and areal capacitances are more important metrics than gravimetric ones because of the constraints imposed by device volume and chip area. Typically used in commercial supercapacitors, porous carbons, although they provide a stable and reliable performance, lack volumetric performance because of their inherently low density and moderate capacitances. Here we report a high-performing electrode based on conductive hexaaminobenzene (HAB)-derived two-dimensional metal-organic frameworks (MOFs). In addition to possessing a high packing density and hierarchical porous structure, these MOFs also exhibit excellent chemical stability in both acidic and basic aqueous solutions, which is in sharp contrast to conventional MOFs. Submillimetre-thick pellets of HAB MOFs showed high volumetric capacitances up to 760 F cm-3 and high areal capacitances over 20 F cm-2. Furthermore, the HAB MOF electrodes exhibited highly reversible redox behaviours and good cycling stability with a capacitance retention of 90% after 12,000 cycles. These promising results demonstrate the potential of using redox-active conductive MOFs in energy-storage applications.
Kazzazi, Arefeh; Bresser, Dominic; Birrozzi, Agnese; von Zamory, Jan; Hekmatfar, Maral; Passerini, Stefano
2018-05-23
Even though electrochemically inactive, the binding agent in lithium-ion electrodes substantially contributes to the performance metrics such as the achievable capacity, rate capability, and cycling stability. Herein, we present an in-depth comparative analysis of three different aqueous binding agents, allowing for the replacement of the toxic N-methyl-2-pyrrolidone as the processing solvent, for high-energy Li1.2Ni0.16Mn0.56Co0.08O2 (Li-rich NMC or LR-NMC) as a potential next-generation cathode material. The impact of the binding agents, sodium carboxymethyl cellulose, sodium alginate, and commercial TRD202A (TRD), and the related chemical reactions occurring during the electrode coating process on the electrode morphology and cycling performance is investigated. In particular, the role of phosphoric acid in avoiding the aluminum current collector corrosion and stabilizing the LR-NMC/electrolyte interface as well as its chemical interaction with the binder is investigated, providing an explanation for the observed differences in the electrochemical performance.
NASA Astrophysics Data System (ADS)
Testi, D.; Schito, E.; Menchetti, E.; Grassi, W.
2014-11-01
Constructions built in Italy before 1945 (about 30% of the total built stock) feature low energy efficiency. Retrofit actions in this field can lead to valuable energetic and economic savings. In this work, we ran a dynamic simulation of a historical building of the University of Pisa during the heating season. We firstly evaluated the energy requirements of the building and the performance of the existing natural gas boiler, validated with past billings of natural gas. We also verified the energetic savings obtainable by the substitution of the boiler with an air-to-water electrically-driven modulating heat pump, simulated through a cycle-based model, evaluating the main economic metrics. The cycle-based model of the heat pump, validated with manufacturers' data available only at specified temperature and load conditions, can provide more accurate results than the simplified models adopted by current technical standards, thus increasing the effectiveness of energy audits.
NASA Astrophysics Data System (ADS)
Hardiman, B. S.; Atkins, J.; Dahlin, K.; Fahey, R. T.; Gough, C. M.
2016-12-01
Canopy physical structure - leaf quantity and arrangement - strongly affects light interception and distribution. As such, canopy physical structure is a key driver of forest carbon (C) dynamics. Terrestrial lidar systems (TLS) provide spatially explicit, quantitative characterizations of canopy physical structure at scales commensurate with plot-scale C cycling processes. As an example, previous TLS-based studies established that light use efficiency is positively correlated with canopy physical structure, influencing the trajectory of net primary production throughout forest development. Linking TLS measurements of canopy structure to multispectral satellite observations of forest canopies may enable scaling of ecosystem C cycling processes from leaves to continents. We will report on our study relating a suite of canopy structural metrics to well-established remotely sensed measurements (NDVI, EVI, albedo, tasseled cap indices, etc.) which are indicative of important forest characteristics (leaf area, canopy nitrogen, light interception, etc.). We used Landsat data, which provides observations at 30m resolution, a scale comparable to that of TLS. TLS data were acquired during 2009-2016 from forest sites throughout Eastern North America, comprised primarily of NEON and Ameriflux sites. Canopy physical structure data were compared with contemporaneous growing-season Landsat data. Metrics of canopy physical structure are expected to covary with forest composition and dominant PFT, likely influencing interaction strength between TLS and Landsat canopy metrics. More structurally complex canopies (those with more heterogeneous distributions of leaf area) are expected to have lower albedo, suggesting greater canopy light absorption (higher fAPAR) than simpler canopies. We expect that vegetation indices (NDVI, EVI) will increase with TLS metrics of spatial heterogeneity, and not simply quantity, of leaves, supporting our hypothesis that canopy light absorption is dependent on both leaf quantity and arrangement. Relating satellite observations of canopy properties to TLS metrics of canopy physical structure represents an important advance for modelling canopy energy balance and forest C cycling processes at large spatial scales.
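The satellite side of this comparison can be sketched by computing standard NDVI and EVI from surface-reflectance bands and correlating them with a TLS structure metric; the reflectance values, the rugosity-like structure metric, and their relationship below are synthetic assumptions rather than data from the sites named above.

    # Hedged sketch: NDVI and EVI from Landsat-like surface reflectance, correlated with a
    # TLS-derived canopy structure metric. All values are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_plots = 50
    blue = rng.uniform(0.02, 0.06, n_plots)
    red  = rng.uniform(0.03, 0.08, n_plots)
    nir  = rng.uniform(0.25, 0.45, n_plots)

    ndvi = (nir - red) / (nir + red)
    evi  = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

    # Invented TLS structural metric (e.g., canopy rugosity), loosely tied to NIR reflectance.
    rugosity = 5 + 20 * (nir - 0.25) + rng.normal(0, 0.5, n_plots)

    print(f"corr(NDVI, rugosity) = {np.corrcoef(ndvi, rugosity)[0, 1]:.2f}")
    print(f"corr(EVI,  rugosity) = {np.corrcoef(evi,  rugosity)[0, 1]:.2f}")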
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Performance metrics for the evaluation of hyperspectral chemical identification systems
NASA Astrophysics Data System (ADS)
Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay
2016-02-01
Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
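The Dice index underlying the proposed identification metric is straightforward to compute over the set of reported versus truly present chemicals, as sketched below; the chemical names are invented and the sketch omits the confusion-matrix partitioning and weighting described in the paper.

    # Hedged sketch of a Dice-style agreement score between identified and true plume constituents.
    def dice_index(identified, truth):
        identified, truth = set(identified), set(truth)
        if not identified and not truth:
            return 1.0
        return 2 * len(identified & truth) / (len(identified) + len(truth))

    truth      = {"SF6", "NH3"}
    identified = {"SF6", "NH3", "CH3OH"}   # one false alarm from the chemical library
    print(f"Dice index = {dice_index(identified, truth):.2f}")   # 2*2 / (3+2) = 0.80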
Best Practices Handbook: Traffic Engineering in Range Networks
2016-03-01
units of measurement. Measurement Methodology - A repeatable measurement technique used to derive one or more metrics of interest. Network ... Performance measures - Metrics that provide quantitative or qualitative measures of the performance of systems or subsystems of interest. Performance Metric
Naturalistic drive cycle synthesis for pickup trucks.
Liu, Zifan; Ivanco, Andrej; Filipi, Zoran
2015-09-01
Future pick-up trucks are meeting much stricter fuel economy and exhaust emission standards. Design tradeoffs will have to be carefully evaluated to satisfy consumer expectations within the regulatory and cost constraints. Boundary conditions will obviously be critical for decision making: thus, the understanding of how customers are driving in naturalistic settings is indispensable. Federal driving schedules, while critical for certification, do not capture the richness of naturalistic cycles, particularly the aggressive maneuvers that often shape consumer perception of performance. While there are databases with large number of drive cycles, applying all of them directly in the design process is impractical. Therefore, representative drive cycles that capture the essence of the naturalistic driving should be synthesized from naturalistic driving data. Naturalistic drive cycles are firstly categorized by investigating their micro-trip components, defined as driving activities between successive stops. Micro-trips are expected to characterize underlying local traffic conditions, and separate different driving patterns. Next, the transitions from one vehicle state to another vehicle state in each cycle category are captured with Transition Probability Matrix (TPM). Candidate drive cycles can subsequently be synthesized using Markov Chain based on TPMs for each category. Finally, representative synthetic drive cycles are selected through assessment of significant cycle metrics to identify the ones with smallest errors. This paper provides a framework for synthesis of representative drive cycles from naturalistic driving data, which can subsequently be used for efficient optimization of design or control of pick-up truck powertrains. Manufacturers will benefit from representative drive cycles in several aspects, including quick assessments of vehicle performance and energy consumption in simulations, component sizing and design, optimization of control strategies, and vehicle testing under real-world conditions. This is in contrast to using federal certification test cycles, which were never intended to capture pickup truck segment. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
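The TPM-and-Markov-chain synthesis step described above can be sketched as follows; the recorded speed trace, the 15-state quantization, and the single screening metric are illustrative assumptions rather than the paper's procedure, which works on micro-trips within each cycle category.

    # Hedged sketch: estimate a Transition Probability Matrix (TPM) from a quantized speed
    # trace and synthesize a candidate drive cycle with a Markov chain. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "naturalistic" speed trace (m/s), one sample per second.
    recorded = np.clip(np.cumsum(rng.normal(0, 0.5, 2000)) + 10, 0, 30)

    n_states = 15
    edges = np.linspace(0, 30, n_states + 1)
    states = np.clip(np.digitize(recorded, edges) - 1, 0, n_states - 1)

    # Estimate the transition probability matrix from successive state pairs.
    tpm = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        tpm[a, b] += 1
    row_sums = tpm.sum(axis=1, keepdims=True)
    tpm = np.divide(tpm, row_sums, out=np.full_like(tpm, 1.0 / n_states), where=row_sums > 0)

    def synthesize(tpm, start_state, length):
        """Markov-chain synthesis of a candidate drive cycle (state indices)."""
        seq = [start_state]
        for _ in range(length - 1):
            seq.append(rng.choice(n_states, p=tpm[seq[-1]]))
        return np.array(seq)

    candidate = synthesize(tpm, states[0], 600)
    candidate_speed = edges[candidate] + np.diff(edges)[0] / 2   # map states back to bin centers

    # A simple cycle metric for screening candidates against the recorded data.
    print(f"mean speed: recorded {recorded.mean():.1f} m/s, candidate {candidate_speed.mean():.1f} m/s")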
Conductive Polymer Binder-Enabled SiO–SnxCoyCz Anode for High-Energy Lithium-Ion Batteries
Zhao, Hui; Fu, Yanbao; Ling, Min; ...
2016-05-10
In this paper, a SiO–SnCoC composite anode is assembled using a conductive polymer binder for application in next-generation high-energy-density lithium-ion batteries. A specific capacity of 700 mAh/g is achieved at a 1C (900 mA/g) rate. A high active material loading anode with an areal capacity of 3.5 mAh/cm2 is demonstrated by mixing SiO–SnCoC with graphite. To compensate for the lithium loss in the first cycle, stabilized lithium metal powder (SLMP) is used for prelithiation; when paired with a commercial cathode, stable full cell cycling performance with an 86% first cycle efficiency is realized. Finally, by achieving these important metrics toward a practical application, this conductive polymer binder/SiO–SnCoC anode system presents great promise to enable the next generation of high-energy lithium-ion batteries.
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Burken, John J.; Larson, David; Johnson, Marcus
2014-01-01
Flight research has shown the effectiveness of adaptive flight controls for improving aircraft safety and performance in the presence of uncertainties. The National Aeronautics and Space Administration's (NASA)'s Integrated Resilient Aircraft Control (IRAC) project designed and conducted a series of flight experiments to study the impact of variations in adaptive controller design complexity on performance and handling qualities. A novel complexity metric was devised to compare the degrees of simplicity achieved in three variations of a model reference adaptive controller (MRAC) for NASA's F-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Full-Scale Advanced Systems Testbed (Gen-2A) aircraft. The complexity measures of these controllers are also compared to that of an earlier MRAC design for NASA's Intelligent Flight Control System (IFCS) project and flown on a highly modified F-15 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois). Pilot comments during the IRAC research flights pointed to the importance of workload on handling qualities ratings for failure and damage scenarios. Modifications to existing pilot aggressiveness and duty cycle metrics are presented and applied to the IRAC controllers. Finally, while adaptive controllers may alleviate the effects of failures or damage on an aircraft's handling qualities, they also have the potential to introduce annoying changes to the flight dynamics or to the operation of aircraft systems. A nuisance rating scale is presented for the categorization of nuisance side-effects of adaptive controllers.
Sustainability measurement in economics involves evaluation of environmental and economic impact in an integrated manner. In this study, system level economic data are combined with environmental impact from a life cycle assessment (LCA) of a common product. We are exploring a co...
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, L.; Leung, L. R.; Lin, G.; Lu, J.; Gao, Y.; Zhang, Y.
2017-12-01
Projecting precipitation changes is challenging because of incomplete understanding of the climate system and biases and uncertainty in climate models. In East Asia, summer precipitation is dominantly influenced by the monsoon circulation, and the global models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) give widely varying projections of precipitation change for the 21st century. It is therefore critical for the community to know which models' projections are more reliable in response to natural and anthropogenic forcings. In this study we defined multi-dimensional metrics measuring model performance in simulating the present-day large-scale circulation, regional precipitation, and the relationship between them. The large-scale circulation features examined in this study include the lower-tropospheric southwesterly winds, the western North Pacific subtropical high, the South China Sea subtropical high, and the East Asian westerly jet in the upper troposphere. Each of these circulation features transports moisture to East Asia, enhancing the moist static energy and strengthening the Meiyu moisture front that is the primary mechanism for precipitation generation in eastern China. Based on these metrics, the 30 models in the CMIP5 ensemble are classified into three groups. Models in the top-performing group projected regional precipitation patterns that are more similar to each other than those in the bottom or middle performing groups, and consistently projected statistically significant increasing trends in two of the large-scale circulation indices and in precipitation. In contrast, models in the bottom or middle performing groups projected small drying or no trends in precipitation. We also find that a model that merely reproduces the observed precipitation climatology reasonably well does not guarantee a more reliable projection of future precipitation, because good simulation skill can be achieved through compensating errors from multiple sources. Herein, the potential for more robust projections of precipitation changes at the regional scale is demonstrated through the use of discriminating metrics to subsample the multi-model ensemble. The results from this study provide insights into how to select models from a CMIP ensemble to project regional climate and hydrological cycle changes.
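One simple way to build such a discriminating metric and subsample an ensemble is sketched below; the synthetic fields and the correlation-minus-RMSE score are illustrative only and are not the multi-dimensional metrics defined in this study.

    # Hedged sketch: score each model's present-day field against observations with pattern
    # correlation and normalized RMSE, then keep the top-performing group. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.normal(size=(40, 60))                       # "observed" present-day field

    models = {f"model_{i:02d}": obs + rng.normal(0, 0.3 + 0.05 * i, obs.shape)
              for i in range(10)}                         # increasingly biased synthetic models

    def skill(sim, obs):
        corr = np.corrcoef(sim.ravel(), obs.ravel())[0, 1]
        rmse = np.sqrt(np.mean((sim - obs) ** 2)) / obs.std()
        return corr - rmse                                # higher is better (toy combination)

    scores = {name: skill(field, obs) for name, field in models.items()}
    top_group = sorted(scores, key=scores.get, reverse=True)[:3]
    print("top-performing group:", top_group)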
Method for Controlling Space Transportation System Life Cycle Costs
NASA Technical Reports Server (NTRS)
McCleskey, Carey M.; Bartine, David E.
2006-01-01
A structured, disciplined methodology is required to control major cost-influencing metrics of space transportation systems during design and continuing through the test and operations phases. This paper proposes controlling key space system design metrics that specifically influence life cycle costs. These are inclusive of flight and ground operations, test, and manufacturing and infrastructure. The proposed technique builds on today's configuration and mass properties control techniques and takes on all the characteristics of a classical control system. While the paper does not lay out a complete math model, key elements of the proposed methodology are explored and explained with both historical and contemporary examples. Finally, the paper encourages modular design approaches and technology investments compatible with the proposed method.
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics
Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar
2017-01-01
This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744
An Introduction to Goodness of Fit for PMU Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riepnieks, Artis; Kirkham, Harold
2017-10-01
New results of measurements of phasor-like signals are presented based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
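A minimal sketch of the residual-based idea described above, assuming a least-squares fit of a single-frequency cosine/sine model over a short window; the signal-to-residual ratio in dB used as the Goodness of Fit here, and the synthetic 60 Hz test signal, are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def fit_phasor(samples, times, freq_hz):
    """Least-squares fit of a*cos(wt) + b*sin(wt) to a short sample window."""
    w = 2 * np.pi * freq_hz
    design = np.column_stack([np.cos(w * times), np.sin(w * times)])
    coeffs, *_ = np.linalg.lstsq(design, samples, rcond=None)
    fitted = design @ coeffs
    magnitude = np.hypot(*coeffs)
    phase = np.arctan2(-coeffs[1], coeffs[0])
    return magnitude, phase, fitted

def goodness_of_fit_db(samples, fitted):
    """Illustrative GoF: ratio of signal power to residual power, in dB."""
    residual = samples - fitted
    return 10 * np.log10(np.sum(samples ** 2) / np.sum(residual ** 2))

# Half-cycle window of a 60 Hz signal sampled at 5 samples per cycle, with mild noise.
fs = 5 * 60.0
t = np.arange(0, 1 / 120, 1 / fs)
clean = 1.0 * np.cos(2 * np.pi * 60 * t + 0.3)
noisy = clean + 0.01 * np.random.default_rng(0).standard_normal(t.size)

mag, ph, fit = fit_phasor(noisy, t, 60.0)
print(mag, ph, goodness_of_fit_db(noisy, fit))
```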
Performance regression manager for large scale systems
Faraj, Daniel A.
2017-10-17
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
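The comparison the abstract describes can be illustrated with a short sketch that parses a named metric from two formatted output files and flags a regression beyond a tolerance; the "name = value" file format, the metric name, and the 5% tolerance are hypothetical, not the patented system's actual conventions.

```python
import re

METRIC_LINE = re.compile(r"^(?P<name>[\w.]+)\s*=\s*(?P<value>[-+0-9.eE]+)\s*$")

def read_metrics(path):
    """Parse 'name = value' lines from an output file into a dict (assumed format)."""
    metrics = {}
    with open(path) as handle:
        for line in handle:
            match = METRIC_LINE.match(line.strip())
            if match:
                metrics[match.group("name")] = float(match.group("value"))
    return metrics

def compare(baseline_path, candidate_path, metric="bandwidth_gbps", tolerance=0.05):
    """Return a human-readable verdict: regression if the candidate drops by more than the tolerance."""
    base = read_metrics(baseline_path)[metric]
    cand = read_metrics(candidate_path)[metric]
    change = (cand - base) / base
    verdict = "REGRESSION" if change < -tolerance else "OK"
    return f"{metric}: {base:.3f} -> {cand:.3f} ({change:+.1%}) {verdict}"
```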
Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.
Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C
2015-06-01
Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess whether our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric, which would calculate the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III(®) simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective blinded-observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.
Process-oriented Observational Metrics for CMIP6 Climate Model Assessments
NASA Astrophysics Data System (ADS)
Jiang, J. H.; Su, H.
2016-12-01
Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations and supporting CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and thus reduce uncertainties in climate models.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
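A compact sketch of the weighted-residual formulation: expand the trial solution in a sine series that satisfies the boundary conditions, evaluate the ODE residual at collocation points, and hand the squared-residual objective to a stochastic optimiser. For brevity this uses a simple non-singular test problem with exact solution y = sin(πx), and SciPy's differential evolution stands in for the particle swarm, water cycle, and harmony search algorithms used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Test BVP: y'' = -pi^2 sin(pi x) on [0, 1] with y(0) = y(1) = 0; exact solution y = sin(pi x).
x = np.linspace(0.01, 0.99, 50)   # collocation points
n_terms = 4                       # length of the truncated sine (Fourier) series

def residual_norm(coeffs):
    """Weighted residual function: sum of squared ODE residuals at the collocation points.

    The sine basis satisfies y(0) = y(1) = 0, so the boundary conditions act as built-in constraints.
    """
    k = np.arange(1, n_terms + 1)[:, None]
    y_dd = -(k * np.pi) ** 2 * np.sin(k * np.pi * x) * np.asarray(coeffs)[:, None]
    resid = y_dd.sum(axis=0) + np.pi ** 2 * np.sin(np.pi * x)
    return float(np.sum(resid ** 2))

result = differential_evolution(residual_norm, bounds=[(-2, 2)] * n_terms, seed=1, tol=1e-10)
print(result.x)   # should approach [1, 0, 0, 0], i.e. y(x) ~ sin(pi x)
```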
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
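One of the metrics named above, the alpha-lambda accuracy test, reduces to a simple check in its point-estimate form: at a fraction λ of the way from the first prediction to end of life, the predicted remaining useful life must fall within ±α of the true value. The sketch below uses illustrative numbers; the paper's enhanced versions operate on full prediction distributions rather than point estimates.

```python
def alpha_lambda_pass(t_first, t_eol, predictions, alpha=0.2, lam=0.5):
    """Point-estimate alpha-lambda test.

    predictions: dict mapping prediction time -> predicted remaining useful life (RUL).
    Returns True if the prediction made nearest t_lambda lies within
    +/- alpha * true RUL of the true RUL at that time.
    """
    t_lambda = t_first + lam * (t_eol - t_first)
    t_pred = min(predictions, key=lambda t: abs(t - t_lambda))
    true_rul = t_eol - t_pred
    lower, upper = (1 - alpha) * true_rul, (1 + alpha) * true_rul
    return lower <= predictions[t_pred] <= upper

# Example: first prediction at t=10, actual end of life at t=110.
preds = {10: 95.0, 35: 80.0, 60: 47.0, 85: 26.0}
print(alpha_lambda_pass(10, 110, preds))   # checks the prediction nearest t=60 against a 40-60 band
```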
NASA Astrophysics Data System (ADS)
Van Sundert, Kevin; Horemans, Joanna A.; Stendahl, Johan; Vicca, Sara
2018-06-01
The availability of nutrients is one of the factors that regulate terrestrial carbon cycling and modify ecosystem responses to environmental changes. Nonetheless, nutrient availability is often overlooked in climate-carbon cycle studies because it depends on the interplay of various soil factors that would ideally be comprised into metrics applicable at large spatial scales. Such metrics do not currently exist. Here, we use a Swedish forest inventory database that contains soil data and tree growth data for > 2500 forests across Sweden to (i) test which combination of soil factors best explains variation in tree growth, (ii) evaluate an existing metric of constraints on nutrient availability, and (iii) adjust this metric for boreal forest data. With (iii), we thus aimed to provide an adjustable nutrient metric, applicable for Sweden and with potential for elaboration to other regions. While taking into account confounding factors such as climate, N deposition, and soil oxygen availability, our analyses revealed that the soil organic carbon concentration (SOC) and the ratio of soil carbon to nitrogen (C : N) were the most important factors explaining variation in normalized
(climate-independent) productivity (mean annual volume increment - m3 ha-1 yr-1) across Sweden. Normalized forest productivity was significantly negatively related to the soil C : N ratio (R2 = 0.02-0.13), while SOC exhibited an empirical optimum (R2 = 0.05-0.15). For the metric, we started from a (yet unvalidated) metric for constraints on nutrient availability that was previously developed by the International Institute for Applied Systems Analysis (IIASA - Laxenburg, Austria) for evaluating potential productivity of arable land. This IIASA metric requires information on soil properties that are indicative of nutrient availability (SOC, soil texture, total exchangeable bases - TEB, and pH) and is based on theoretical considerations that are also generally valid for nonagricultural ecosystems. However, the IIASA metric was unrelated to normalized forest productivity across Sweden (R2 = 0.00-0.01) because the soil factors under consideration were not optimally implemented according to the Swedish data, and because the soil C : N ratio was not included. Using two methods (each one based on a different way of normalizing productivity for climate), we adjusted this metric by incorporating soil C : N and modifying the relationship between SOC and nutrient availability in view of the observed relationships across our database. In contrast to the IIASA metric, the adjusted metrics explained some variation in normalized productivity in the database (R2 = 0.03-0.21; depending on the applied method). A test for five manually selected local fertility gradients in our database revealed a significant and stronger relationship between the adjusted metrics and productivity for each of the gradients (R2 = 0.09-0.38). This study thus shows for the first time how nutrient availability metrics can be evaluated and adjusted for a particular ecosystem type, using a large-scale database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Oakley, Brian; Worrall, Andrew
2015-01-01
One of the key objectives of the U.S. Department of Energy (DOE) Nuclear Energy R&D Roadmap is the development of sustainable nuclear fuel cycles that can improve natural resource utilization and provide solutions to the management of nuclear wastes. Recently, an evaluation and screening (E&S) of fuel cycle systems has been conducted to identify those options that provide the best opportunities for obtaining such improvements and also to identify the required research and development activities that can support the development of advanced fuel cycle options. In order to evaluate and screen the options, the E&S study included nine criteria, including Development and Deployment Risk (D&DR). More specifically, this criterion was represented by the following metrics: development time, development cost, deployment cost from prototypic validation to first-of-a-kind commercial, compatibility with the existing infrastructure, existence of regulations for the fuel cycle and familiarity with licensing, and existence of market incentives and/or barriers to commercial implementation of fuel cycle processes. Given the comprehensive nature of the study, a systematic approach was needed to determine metric data for the D&DR criterion, and that approach is presented here. As would be expected, the Evaluation Group representing the once-through use of uranium in thermal reactors is always the highest ranked fuel cycle Evaluation Group for this D&DR criterion. Evaluation Groups that consist of once-through fuel cycles that use existing reactor types are consistently ranked very high. The highest ranked limited and continuous recycle fuel cycle Evaluation Groups are those that recycle Pu in thermal reactors. The lowest ranked fuel cycles are predominately continuous recycle single-stage and multi-stage fuel cycles that involve TRU and/or U-233 recycle.
75 FR 7581 - RTO/ISO Performance Metrics; Notice Requesting Comments on RTO/ISO Performance Metrics
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-22
... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... of staff from all the jurisdictional ISOs/RTOs to develop a set of performance metrics that the ISOs/RTOs will use to report annually to the Commission. Commission staff and representatives from the ISOs...
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
NASA Astrophysics Data System (ADS)
Ramaswami, Anu; Chavez, Abel
2013-09-01
Three broad approaches have emerged for energy and greenhouse gas (GHG) accounting for individual cities: (a) purely in-boundary source-based accounting (IB); (b) community-wide infrastructure GHG emissions footprinting (CIF) incorporating life cycle GHGs (in-boundary plus trans-boundary) of key infrastructures providing water, energy, food, shelter, mobility-connectivity, waste management/sanitation and public amenities to support community-wide activities in cities—all resident, visitor, commercial and industrial activities; and (c) consumption-based GHG emissions footprints (CBF) incorporating life cycle GHGs associated with activities of a sub-set of the community—its final consumption sector dominated by resident households. The latter two activity-based accounts are recommended in recent GHG reporting standards, to provide production-dominated and consumption perspectives of cities, respectively. Little is known, however, on how to normalize and report the different GHG numbers that arise for the same city. We propose that CIF and IB, since they incorporate production, are best reported per unit GDP, while CBF is best reported per capita. Analysis of input-output models of 20 US cities shows that GHGCIF/GDP is well suited to represent differences in urban energy intensity features across cities, while GHGCBF/capita best represents variation in expenditures across cities. These results advance our understanding of the methods and metrics used to represent the energy and GHG performance of cities.
Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Crawford, Aladsair J.; Fuller, Jason C.
The Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems (PNNL-22010) was first issued in November 2012 as a first step toward providing a foundational basis for developing an initial standard for the uniform measurement and expression of energy storage system (ESS) performance. Based on experiences with the application and use of that document, and to include additional ESS applications and associated duty cycles, test procedures and performance metrics, a first revision of the November 2012 Protocol was issued in June 2014 (PNNL 22010 Rev. 1). As an update of the 2014 revision 1 to the Protocol, this document (the March 2016 revision 2 to the Protocol) is intended to supersede the June 2014 revision 1 to the Protocol and provide a more user-friendly yet more robust and comprehensive basis for measuring and expressing ESS performance.
NASA Astrophysics Data System (ADS)
Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.
2015-01-01
Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
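The per-voxel statistics described are straightforward once the repeats are stored separately; a sketch with NumPy/SciPy, assuming the separately stored averages are stacked along the first axis of a 4-D array and using a 20% trim fraction purely for illustration.

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(0)
# Hypothetical stack of 8 separately stored averages of one diffusion-weighted volume.
repeats = rng.normal(loc=100.0, scale=5.0, size=(8, 64, 64, 24))

mean_image = repeats.mean(axis=0)                  # conventional scanner averaging
median_image = np.median(repeats, axis=0)          # robust to respiratory outliers
trimmed_image = trim_mean(repeats, 0.2, axis=0)    # drops the top and bottom 20% per voxel

print(mean_image.shape, median_image.shape, trimmed_image.shape)
```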
Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans
2017-01-01
A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, and at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
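The quoted definition reduces to PM = TPR / (TPR + FPR) at a chosen score cutoff; a small sketch with a hypothetical active/decoy ranking follows.

```python
import numpy as np

def power_metric(labels, scores, threshold):
    """PM = TPR / (TPR + FPR) at the given score cutoff (label 1 = active, 0 = decoy)."""
    labels = np.asarray(labels, dtype=bool)
    predicted = np.asarray(scores) >= threshold
    tpr = np.sum(predicted & labels) / np.sum(labels)
    fpr = np.sum(predicted & ~labels) / np.sum(~labels)
    return tpr / (tpr + fpr) if (tpr + fpr) > 0 else 0.0

# Hypothetical virtual-screening scores: 1 = known active, 0 = decoy.
labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.2, 0.1, 0.1, 0.0]
print(power_metric(labels, scores, threshold=0.5))   # ~0.82 for this ranking
```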
Uncooperative target-in-the-loop performance with backscattered speckle-field effects
NASA Astrophysics Data System (ADS)
Kansky, Jan E.; Murphy, Daniel V.
2007-09-01
Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range in which the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.
Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, E.; Denholm, P.; Margolis, R.
2011-10-01
Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
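Two of the named metrics are simple enough to compute directly; the sketch below, with hypothetical system cost, savings, and discount rates, shows why a discount-rate assumption moves NPV while leaving simple payback time untouched.

```python
def simple_payback_years(installed_cost, annual_savings):
    """Years until cumulative (undiscounted) savings equal the installed cost."""
    return installed_cost / annual_savings

def net_present_value(installed_cost, annual_savings, discount_rate, years):
    """NPV of the PV investment assuming constant annual savings."""
    discounted = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
    return discounted - installed_cost

# Hypothetical 5 kW system: $15,000 installed, $1,200/yr savings, 25-year life.
print(simple_payback_years(15_000, 1_200))          # 12.5 years, regardless of financing terms
print(net_present_value(15_000, 1_200, 0.05, 25))   # positive at a 5% discount rate
print(net_present_value(15_000, 1_200, 0.08, 25))   # negative at 8%: the metric flips sign
```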
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method, and should be more likely to produce high-quality metrics that will result in continuous process improvement.
Morgans, Aimee S.
2016-01-01
Combustion instabilities arise owing to a two-way coupling between acoustic waves and unsteady heat release. Oscillation amplitudes successively grow, until nonlinear effects cause saturation into limit cycle oscillations. Feedback control, in which an actuator modifies some combustor input in response to a sensor measurement, can suppress combustion instabilities. Linear feedback controllers are typically designed, using linear combustor models. However, when activated from within limit cycle, the linear model is invalid, and such controllers are not guaranteed to stabilize. This work develops a feedback control strategy guaranteed to stabilize from within limit cycle oscillations. A low-order model of a simple combustor, exhibiting the essential features of more complex systems, is presented. Linear plane acoustic wave modelling is combined with a weakly nonlinear describing function for the flame. The latter is determined numerically using a level set approach. Its implication is that the open-loop transfer function (OLTF) needed for controller design varies with oscillation level. The difference between the mean and the rest of the OLTFs is characterized using the ν-gap metric, providing the minimum required ‘robustness margin’ for an H∞ loop-shaping controller. Such controllers are designed and achieve stability both for linear fluctuations and from within limit cycle oscillations. PMID:27493558
Divergence of feeding channels within the soil food web determined by ecosystem type.
Crotty, Felicity V; Blackshaw, Rod P; Adl, Sina M; Inger, Richard; Murray, Philip J
2014-01-01
Understanding trophic linkages within the soil food web (SFW) is hampered by its opacity, diversity, and limited niche adaptation. We need to expand our insight between the feeding guilds of fauna and not just count biodiversity. The soil fauna drive nutrient cycling and play a pivotal, but little understood role within both the carbon (C) and nitrogen (N) cycles that may be ecosystem dependent. Here, we define the structure of the SFW in two habitats (grassland and woodland) on the same soil type and test the hypothesis that land management would alter the SFW in these habitats. To do this, we census the community structure and use stable isotope analysis to establish the pathway of C and N through each trophic level within the ecosystems. Stable isotope ratios of C and N from all invertebrates were used as a proxy for trophic niche, and community-wide metrics were obtained. Our empirically derived C/N ratios differed from those previously reported, diverging from model predictions of global C and N cycling, which was unexpected. An assessment of the relative response of the different functional groups to the change from agricultural grassland to woodland was performed. This showed that abundance of herbivores, microbivores, and micropredators were stimulated, while omnivores and macropredators were inhibited in the grassland. Differences between stable isotope ratios and community-wide metrics, highlighted habitats with similar taxa had different SFWs, using different basal resources, either driven by root or litter derived resources. Overall, we conclude that plant type can act as a top-down driver of community functioning and that differing land management can impact on the whole SFW.
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding-scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
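One of the named objective metrics, SSIM, can be computed per decoded frame with scikit-image; the frames below are synthetic stand-ins for a reference frame and its decoded counterpart, and PSNR is included only as a familiar companion measure.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in source frame
noise = rng.normal(0, 8, reference.shape).astype(np.int16)          # stand-in coding distortion
decoded = np.clip(reference.astype(np.int16) + noise, 0, 255).astype(np.uint8)

ssim_score = structural_similarity(reference, decoded, data_range=255)
psnr_db = peak_signal_noise_ratio(reference, decoded, data_range=255)
print(f"SSIM = {ssim_score:.3f}, PSNR = {psnr_db:.1f} dB")
```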
Spatial-temporal forecasting the sunspot diagram
NASA Astrophysics Data System (ADS)
Covas, Eurico
2017-09-01
Aims: We attempt to forecast the Sun's sunspot butterfly diagram in both space (i.e. in latitude) and time, instead of the usual one-dimensional time series forecasts prevalent in the scientific literature. Methods: We use a prediction method based on the non-linear embedding of data series in high dimensions. We use this method to forecast both in latitude (space) and in time, using a full spatial-temporal series of the sunspot diagram from 1874 to 2015. Results: The analysis of the results shows that it is indeed possible to reconstruct the overall shape and amplitude of the spatial-temporal pattern of sunspots, but that the method in its current form does not have real predictive power. We also apply a metric called structural similarity to compare the forecasted and the observed butterfly cycles, showing that this metric can be a useful addition to the usual root mean square error metric when analysing the efficiency of different prediction methods. Conclusions: We conclude that it is in principle possible to reconstruct the full sunspot butterfly diagram for at least one cycle using this approach and that this method and others should be explored, since just looking at metrics such as sunspot count number or sunspot total area coverage is too reductive given the spatial-temporal dynamical complexity of the sunspot butterfly diagram. However, more data and/or an improved approach is probably necessary to have true predictive power.
Tang, Tao; Stevenson, R Jan; Infante, Dana M
2016-10-15
Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.
GPS Data Filtration Method for Drive Cycle Analysis Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Earleywine, M.
2013-02-01
When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of derived metrics for drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light- and medium/heavy-duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, an evaluation of the overall benefits of data filtration on raw GPS data is presented, along with potential areas for continued research.
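The corrections described, dropping physically implausible points and smoothing white noise in a speed trace, can be sketched in a few lines; the acceleration threshold and median-filter window below are illustrative choices, not the study's actual filter parameters.

```python
import numpy as np
from scipy.signal import medfilt

def filter_speed_trace(speed_mps, max_accel=5.0, dt=1.0, window=5):
    """Replace physically implausible jumps with interpolated values, then median-filter."""
    speed = np.asarray(speed_mps, dtype=float)
    accel = np.diff(speed, prepend=speed[0]) / dt
    bad = np.abs(accel) > max_accel                       # outliers / signal dropouts
    good_idx = np.flatnonzero(~bad)
    speed[bad] = np.interp(np.flatnonzero(bad), good_idx, speed[good_idx])
    return medfilt(speed, kernel_size=window)             # knock down residual white noise

raw = [0, 1, 2, 3, 45, 5, 6, 7, 7, 8, 0, 9, 10, 10, 11]   # a spike and a dropout to zero
print(filter_speed_trace(raw))
```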
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Life-cycle GHG emissions of electricity from syngas produced by pyrolyzing woody biomass
Hongmei Gu; Richard Bergman
2015-01-01
Low-value residues from forest restoration activities in the western United States intended to mitigate effects from wildfire, climate change, and pests and disease need a sustainable market to improve the economic viability of treatment. Converting biomass into bioenergy is a potential solution. Life-cycle assessment (LCA) as a sustainable metric tool can assess the...
A Simplified Model for Detonation Based Pressure-Gain Combustors
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2010-01-01
A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
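The pressure-gain idea rests on elementary constant-volume thermodynamics: for an ideal gas heated at constant volume, p2/p1 = T2/T1, so heat release raises pressure instead of leaving it roughly constant as in a steady-flow combustor. A worked example with assumed, illustrative inlet conditions:

```python
# Ideal-gas, constant-volume heat addition (illustrative numbers, not the paper's cases).
cv = 718.0               # J/(kg K), specific heat of air at constant volume
T1, p1 = 800.0, 4.0e5    # assumed combustor inlet state after compression
q = 1.2e6                # J/kg of heat released by combustion

T2 = T1 + q / cv                 # ~2471 K
pressure_ratio = T2 / T1         # constant volume: p2/p1 = T2/T1, ~3.1 here
p2 = p1 * pressure_ratio
print(T2, pressure_ratio, p2)
```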
Neuro-Mechanics of Recumbent Leg Cycling in Post-Acute Stroke Patients.
Ambrosini, Emilia; De Marchis, Cristiano; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Schmid, Maurizio; D'Alessio, Tommaso; Conforto, Silvia; Ferrante, Simona
2016-11-01
Cycling training is strongly applied in post-stroke rehabilitation, but how its modular control is altered soon after stroke has not yet been analyzed. EMG signals from 9 leg muscles and pedal forces were measured bilaterally during recumbent pedaling in 16 post-acute stroke patients and 12 age-matched healthy controls. Patients were asked to walk over a GaitRite mat and standard gait parameters were computed. Four muscle synergies were extracted through nonnegative matrix factorization in healthy subjects and in the patients' unaffected legs. Two to four synergies were identified in the affected sides, and the number of synergies significantly correlated with the Motricity Index (Spearman's coefficient = 0.521). The reduced coordination complexity resulted in a reduced biomechanical performance, with the two-module sub-group showing the lowest work production and mechanical effectiveness in the affected side. These patients also exhibited locomotor impairments (reduced gait speed, asymmetrical stance time, prolonged double support time). Significant correlations were found between cycling-based metrics and gait parameters, suggesting that neuro-mechanical quantities of pedaling can inform on walking dysfunctions. Our findings support the use of pedaling as a rehabilitation method and an assessment tool after stroke, mainly in the early phase, when patients may be unable to perform safe and active gait training.
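The synergy extraction step can be sketched with scikit-learn's non-negative matrix factorization, assuming EMG envelopes arranged as a non-negative (muscles × samples) matrix; the synthetic data and the fixed choice of four components only loosely mirror the study's setup.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples, n_synergies = 9, 500, 4

# Synthetic non-negative EMG envelopes built from four underlying activation patterns.
true_weights = rng.random((n_muscles, n_synergies))
true_activations = np.abs(np.sin(np.linspace(0, 4 * np.pi, n_samples))[None, :] +
                          rng.random((n_synergies, n_samples)))
emg = true_weights @ true_activations

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(emg)          # muscle weightings, shape (9, 4)
activations = model.components_             # activation time courses, shape (4, 500)

vaf = 1 - np.sum((emg - weights @ activations) ** 2) / np.sum(emg ** 2)
print(f"Variance accounted for: {vaf:.3f}")
```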
A Case Study Based Analysis of Performance Metrics for Green Infrastructure
NASA Astrophysics Data System (ADS)
Gordon, B. L.; Ajami, N.; Quesnel, K.
2017-12-01
Aging infrastructure, population growth, and urbanization are demanding new approaches to management of all components of the urban water cycle, including stormwater. Traditionally, urban stormwater infrastructure was designed to capture and convey rainfall-induced runoff out of a city through a network of curbs, gutters, drains, and pipes, also known as grey infrastructure. These systems were planned with a single purpose and designed under the assumption of hydrologic stationarity, a notion that no longer holds true in the face of a changing climate. One solution gaining momentum around the world is green infrastructure (GI). Beyond stormwater quality improvement and quantity reduction (or technical benefits), GI solutions offer many environmental, economic, and social benefits. Yet many practical barriers have prevented the widespread adoption of these systems worldwide. At the center of these challenges is the inability of stakeholders to know how to monitor, measure, and assess the multi-sector performance of GI systems. Traditional grey infrastructure projects require different monitoring strategies than natural systems; there are no overarching policies on how best to design GI monitoring and evaluation systems and measure performance. Previous studies have attempted to quantify the performance of GI, mostly using one evaluation method on a specific case study. We use a case study approach to address these knowledge gaps and develop a conceptual model of how to evaluate the performance of GI through the lens of financing. First, we examined many different case studies of successfully implemented GI around the world. Then we narrowed in on 10 exemplary case studies. For each case study, we determined which performance method the project developer used, such as LCA, TBL, Low Impact Design Assessment (LIDA), and others. Then, we determined which performance metrics were used to determine success and what data were needed to calculate those metrics. Finally, we examined risk priorities of both public and private actors to see how they varied and how risk was overcome. We synthesized these results to pull out key themes and lessons for the future. If project implementers are able to quantify the benefits and show investors how beneficial these systems can be, more will be implemented in the future.
Grading the Metrics: Performance-Based Funding in the Florida State University System
ERIC Educational Resources Information Center
Cornelius, Luke M.; Cavanaugh, Terence W.
2016-01-01
A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…
Edwards, N
2008-10-01
The international introduction of performance-based building codes calls for a re-examination of indicators used to monitor their implementation. Indicators used in the building sector have a business orientation, target the life cycle of buildings, and guide asset management. In contrast, indicators used in the health sector focus on injury prevention, have a behavioural orientation, lack specificity with respect to features of the built environment, and do not take into account patterns of building use or building longevity. Suggestions for metrics that bridge the building and health sectors are discussed. The need for integrated surveillance systems in health and building sectors is outlined. It is time to reconsider commonly used epidemiological indicators in the field of injury prevention and determine their utility to address the accountability requirements of performance-based codes.
Development of a perceptually calibrated objective metric of noise
NASA Astrophysics Data System (ADS)
Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey
2011-01-01
A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
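With the visually weighted variances and covariance in hand, the final metric is just the weighted combination reported above; a sketch follows, assuming the inputs are noise samples already expressed in CIE L*a*b* (the visual-frequency weighting step is omitted).

```python
import numpy as np

def objective_noise_metric(L_noise, a_noise, w_LL=100.0, w_aa=5.0, w_La=12.0):
    """Weighted combination of L* variance, a* variance, and L*a* covariance."""
    var_L = np.var(L_noise)
    var_a = np.var(a_noise)
    cov_La = np.cov(L_noise, a_noise)[0, 1]   # typically negative, reflecting masking between channels
    return w_LL * var_L + w_aa * var_a + w_La * cov_La

rng = np.random.default_rng(0)
L = rng.normal(0, 1.5, 10_000)                  # hypothetical visually filtered L* noise
a = 0.4 * rng.normal(0, 1.0, 10_000) - 0.3 * L  # correlated a* noise
print(objective_noise_metric(L, a))
```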
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-05-01
The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.
Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance
NASA Technical Reports Server (NTRS)
Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.
2010-01-01
PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
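The single-metric baseline mentioned first, weights inversely proportional to each model's error, is easy to state in code and is the case the multi-objective approach generalizes; the model errors and projections below are hypothetical.

```python
import numpy as np

def inverse_error_weights(rmse):
    """Weights inversely proportional to each model's error, normalised to sum to 1."""
    inv = 1.0 / np.asarray(rmse, dtype=float)
    return inv / inv.sum()

# Hypothetical RMSE of four climate models against observations, and their projections.
rmse = [0.8, 1.2, 2.5, 1.0]
projections = np.array([2.1, 1.6, 3.4, 1.9])       # e.g. projected warming, degC

w = inverse_error_weights(rmse)
print(w)                                           # the best-performing model gets the largest weight
print(np.dot(w, projections), projections.mean())  # weighted ensemble vs. arithmetic mean
```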
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
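The suggested overall metric is a simple ratio; the sketch below uses hypothetical scores and an assumed linear TRL penalty, since the paper leaves the exact TRL function and weightings open.

```python
def als_overall_metric(svm, esm, trl, trl_penalty=10.0, max_trl=9):
    """Benefit/cost ratio: SVM / [ESM + f(TRL)], with an assumed linear TRL penalty."""
    trl_cost = trl_penalty * (max_trl - trl)   # less mature technology adds to the cost side
    return svm / (esm + trl_cost)

# Two hypothetical life-support options: heavier but mature vs. lighter but immature.
print(als_overall_metric(svm=7.5, esm=120.0, trl=8))   # ~0.058
print(als_overall_metric(svm=8.0, esm=90.0, trl=4))    # ~0.057
```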
Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; Douglas, J.; Patterson, L.
2015-07-15
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data are extracted. • Preliminary characterization results demonstrate consistency of the algorithm.
Zhu, Jason; Zhang, Tian; Shah, Radhika; Kamal, Arif H; Kelley, Michael J
2015-12-01
Quality improvement measures are uniformly applied to all oncology providers, regardless of their roles. Little is known about differences in adherence to these measures between oncology fellows, advanced practice providers (APPs), and attending physicians. We investigated conformance across Quality Oncology Practice Initiative (QOPI) measures for oncology fellows, advanced practice providers, and attending physicians at the Durham Veterans Affairs Medical Center (DVAMC). Using data collected from the Spring 2012 and 2013 QOPI cycles, we abstracted charts of patients and separated them based on their primary provider. Descriptive statistics and the chi-square test were calculated for each QOPI measure between fellows, advanced practice providers (APPs), and attending physicians. A total of 169 patients were reviewed. Of these, 31 patients had a fellow, 39 had an APP, and 99 had an attending as their primary oncology provider. Fellows and attending physicians performed similarly on 90 of 94 QOPI metrics. High-performing metrics included several core QOPI measures including documenting consent for chemotherapy, recommending adjuvant chemotherapy when appropriate, and prescribing serotonin antagonists when prescribing emetogenic chemotherapies. Low-performing metrics included documentation of treatment summary and taking action to address problems with emotional well-being by the second office visit. Attendings documented the plan for oral chemotherapy more often (92% vs. 63%, p=0.049). However, after the chart audit, we found that fellows actually documented the plan for oral chemotherapy 88% of the time (p=0.73). APPs and attendings performed similarly on 88 of 90 QOPI measures. The quality of oncology care tends to be similar between attendings and fellows overall; some of the significant differences do not remain significant after a second manual chart review, highlighting that the use of manual data collection for QOPI analysis is an imperfect system, and there may be significant inter-observer variability.
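For reference, a chi-square test of adherence proportions between provider groups, as applied per QOPI measure in the study, might look like the following; the counts are hypothetical, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table for one QOPI measure:
# rows = provider type (fellow, attending), columns = (adherent, non-adherent).
table = [[22, 9],
         [91, 8]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```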
Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu
2014-02-01
This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.
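A brief metric-learning sketch in the spirit of the abstract, using scikit-learn's NeighborhoodComponentsAnalysis as a readily available stand-in for LMNN (the paper's method); the descriptor matrix and liability labels are random placeholders:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are compounds described by molecular descriptors,
# labels flag a chemical liability (e.g., toxic vs. non-toxic).
rng = np.random.default_rng(0)
X = rng.random((200, 30))
y = rng.integers(0, 2, size=200)

# NCA learns a linear transformation of descriptor space that improves
# nearest-neighbor classification; k-NN then operates in the learned metric.
model = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(n_components=10, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
print(cross_val_score(model, X, y, cv=5).mean())
```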
Improving Climate Projections Using "Intelligent" Ensembles
NASA Technical Reports Server (NTRS)
Baker, Noel C.; Taylor, Patrick C.
2015-01-01
Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, or the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring the agreement with subjective human evaluation.
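One simple way to quantify agreement between the MTTV clutter scores and the SME pair-wise rankings is a rank correlation; the per-scene values below are hypothetical:

```python
from scipy.stats import spearmanr

# Hypothetical per-scene values: SME clutter ranks (1 = least cluttered)
# and the corresponding MTTV metric scores.
sme_rank = [1, 2, 3, 4, 5, 6, 7, 8]
mttv_score = [0.11, 0.15, 0.14, 0.22, 0.30, 0.28, 0.41, 0.55]

rho, p_value = spearmanr(sme_rank, mttv_score)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```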
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
MO-AB-BRA-05: [18F]NaF PET/CT Imaging Biomarkers in Metastatic Prostate Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Perk, T; Lin, C
Purpose: Clinical use of {sup 18}F-Sodium Fluoride (NaF) PET/CT in metastatic settings often lacks technology to quantitatively measure full disease dynamics due to high tumor burden. This study assesses radiomics-based extraction of NaF PET/CT measures, including global metrics of overall burden and local metrics of disease heterogeneity, in metastatic prostate cancer for correlation to clinical outcomes. Methods: Fifty-six metastatic Castrate-Resistant Prostate Cancer (mCRPC) patients had NaF PET/CT scans performed at baseline and three cycles into chemotherapy (N=16) or androgen-receptor (AR) inhibitors (N=39). A novel technology, Quantitative Total Bone Imaging (QTBI), was used for analysis. Employing hybrid PET/CT segmentation and articulated skeletal registration, QTBI allows for response assessment of individual lesions. Various SUV metrics were extracted from each lesion (iSUV). Global metrics were extracted from composite lesion-level statistics for each patient (pSUV). Proportion of detected lesions and those with significant response (%-increase or %-decrease) was calculated for each patient based on test-retest limits for iSUV metrics. Cox proportional hazard regression analyses were conducted between imaging metrics and progression-free survival (PFS). Results: Functional burden (pSUV{sub total}) assessed mid-treatment was the strongest univariate predictor of PFS (HR=2.03; p<0.0001). Various global metrics outperformed baseline clinical markers, including fraction of skeletal burden, mean uptake (pSUV{sub mean}), and heterogeneity of average lesion uptake (pSUV{sub hetero}). Of 43 patients with paired baseline/mid-treatment imaging, 40 showed heterogeneity in lesion-level response, containing populations of lesions with both increasing/decreasing metrics. Proportion of lesions with significantly increasing iSUV{sub mean} was highly predictive of clinical PFS (HR=2.0; p=0.0002). Patients exhibiting higher proportion of lesions with decreasing iSUV{sub total} saw prolonged radiographic PFS (HR=0.51; p=0.02). Conclusion: Technology presented here provides comprehensive disease quantification on NaF PET/CT imaging, showing strong correlation to clinical outcomes. Total functional burden as well as proportions of similarly responding lesions was predictive of PFS. This supports ongoing development of NaF PET/CT based imaging biomarkers in mCRPC. Prostate Cancer Foundation.
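A Cox proportional hazards sketch of the kind of univariate PFS analysis described above, using the lifelines package; the column names and patient values are illustrative assumptions:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: a mid-treatment imaging metric (e.g., pSUVtotal),
# progression-free survival in months, and an event indicator (1 = progressed).
df = pd.DataFrame({
    "pSUV_total": [120.0, 85.0, 240.0, 60.0, 310.0, 150.0, 95.0, 200.0],
    "pfs_months": [14.0, 22.0, 6.0, 30.0, 4.0, 11.0, 18.0, 8.0],
    "progressed": [1, 0, 1, 0, 1, 1, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratio and p-value for the imaging metric
```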
Wind Prediction Accuracy for Air Traffic Management Decision Support Tools
NASA Technical Reports Server (NTRS)
Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan
2000-01-01
The performance of Air Traffic Management and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also showed that relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratories and the Forecast Systems Lab of the National Oceanographic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state of the art systems, and development/validation of system enhancements to improve skill. A 13-month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level, and updates RUC predictions based on an optimal interpolation of the latest ACARS reports since the RUC run. This paper presents an overview of the study's results, including the identification and use of new wind-prediction accuracy metrics that are key to ATM DST performance.
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Arnold, James O. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
Climate Classification is an Important Factor in Assessing Hospital Performance Metrics
NASA Astrophysics Data System (ADS)
Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.
2017-12-01
Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by Governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow Government agencies to rank hospitals according to their 'performance' along these metrics. Various socioeconomic factors are taken into consideration when determining individual hospitals' performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Use of standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
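A hedged sketch of a hospital-level regression treating Köppen-Geiger class as a categorical factor alongside socioeconomic covariates, using statsmodels; the column names and values are invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-level table: 30-day mortality score, Koppen-Geiger class,
# and two socioeconomic covariates (all numbers are illustrative).
df = pd.DataFrame({
    "mortality_30d": [11.2, 12.5, 10.8, 13.1, 11.9, 12.2, 13.4, 11.5],
    "climate_class": ["Cfa", "Dfb", "Cfa", "BSk", "Dfb", "Cfa", "BSk", "Dfb"],
    "median_income": [52, 47, 61, 40, 55, 58, 43, 50],     # thousands of dollars
    "pct_uninsured": [9.5, 12.1, 7.8, 15.3, 10.2, 8.9, 14.0, 11.0],
})

# Climate class enters as a categorical factor while adjusting for socioeconomic covariates.
model = smf.ols("mortality_30d ~ C(climate_class) + median_income + pct_uninsured", data=df).fit()
print(model.summary())
```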
Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel
Collette, R.; Douglas, J.; Patterson, L.; ...
2015-05-01
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
Life-Cycle Assessment of Biodiesel Produced from Grease Trap Waste.
Hums, Megan E; Cairncross, Richard A; Spatari, Sabrina
2016-03-01
Grease trap waste (GTW) is a low-quality waste material with variable lipid content that is an untapped resource for producing biodiesel. Compared to conventional biodiesel feedstocks, GTW requires different and additional processing steps for biodiesel production due to its heterogeneous composition, high acidity, and high sulfur content. Life-cycle assessment (LCA) is used to quantify greenhouse gas emissions, fossil energy demand, and criteria air pollutant emissions for the GTW-biodiesel process, in which the sensitivity to lipid concentration in GTW is analyzed using Monte Carlo simulation. The life-cycle environmental performance of GTW-biodiesel is compared to that of current GTW disposal, the soybean-biodiesel process, and low-sulfur diesel (LSD). The disposal of the water and solid wastes produced from separating lipids from GTW has a high contribution to the environmental impacts; however, the impacts of these processed wastes are part of the current disposal practice for GTW and could be excluded with consequential LCA system boundaries. At lipid concentrations greater than 10%, most of the environmental metrics studied are lower than those of LSD and comparable to soybean biodiesel.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms - Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
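A small sketch contrasting a stand-alone quality metric (modularity) with information recovery metrics (NMI, adjusted Rand) on a synthetic graph with planted communities, assuming networkx and scikit-learn are available:

```python
import networkx as nx
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

# Small synthetic graph: 4 planted communities of 25 nodes each.
G = nx.planted_partition_graph(4, 25, 0.3, 0.02, seed=0)
nodes = sorted(G.nodes())
true_labels = [n // 25 for n in nodes]          # ground-truth community of each node

# Cluster with label propagation, then score modularity (stand-alone quality)
# and NMI / adjusted Rand (information recovery) against the planted labels.
communities = list(nx.algorithms.community.label_propagation_communities(G))
modularity = nx.algorithms.community.modularity(G, communities)

membership = {n: cid for cid, members in enumerate(communities) for n in members}
pred_labels = [membership[n] for n in nodes]

print(f"modularity={modularity:.3f}")
print(f"NMI={normalized_mutual_info_score(true_labels, pred_labels):.3f}")
print(f"ARI={adjusted_rand_score(true_labels, pred_labels):.3f}")
```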
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyang; Friedl, Mark A.; Schaaf, Crystal B.
2006-12-01
In the last two decades the availability of global remote sensing data sets has provided a new means of studying global patterns and dynamics in vegetation. The vast majority of previous work in this domain has used data from the Advanced Very High Resolution Radiometer, which until recently was the primary source of global land remote sensing data. In recent years, however, a number of new remote sensing data sources have become available that have significantly improved the capability of remote sensing to monitor global ecosystem dynamics. In this paper, we describe recent results using data from NASA's Moderate Resolution Imaging Spectroradiometer to study global vegetation phenology. Using a novel method based on fitting piecewise logistic models to time series data from MODIS, key transition dates in the annual cycle(s) of vegetation growth can be estimated in an ecologically realistic fashion. Using this method we have produced global maps of seven phenological metrics at 1-km spatial resolution for all ecosystems exhibiting identifiable annual phenologies. These metrics include the date of year for (1) the onset of greenness increase (greenup), (2) the onset of greenness maximum (maturity), (3) the onset of greenness decrease (senescence), and (4) the onset of greenness minimum (dormancy). The three remaining metrics are the growing season minimum, maximum, and summation of the enhanced vegetation index derived from MODIS. Comparison of vegetation phenology retrieved from MODIS with in situ measurements shows that these metrics provide realistic estimates of the four transition dates identified above. More generally, the spatial distribution of phenological metrics estimated from MODIS data is qualitatively realistic, and exhibits strong correspondence with temperature patterns in mid- and high-latitude climates, with rainfall seasonality in seasonally dry climates, and with cropping patterns in agricultural areas.
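A simplified sketch of fitting a logistic curve to a greenup segment of an EVI time series with SciPy; the published method derives transition dates from extrema in the rate of change of curvature of piecewise logistic fits, whereas this example only reports the fitted inflection point, and the EVI values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, vmin, vmax, k, t0):
    """Simple logistic model of greenup: EVI rises from vmin to vmax around day t0."""
    return vmin + (vmax - vmin) / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical 16-day composite EVI values over the first half of the year (greenup segment).
day = np.arange(1, 185, 16)
evi = np.array([0.12, 0.13, 0.12, 0.15, 0.22, 0.35, 0.48, 0.55, 0.58, 0.60, 0.61, 0.61])

params, _ = curve_fit(logistic, day, evi, p0=[0.1, 0.6, 0.1, 120.0])
vmin, vmax, k, t0 = params
print(f"estimated inflection (approximate mid-greenup) at day {t0:.0f}")
```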
Zhang, Huyi; Li, Haitao; Song, Wei; Shen, Diandian; Skanchy, David; Shen, Kun; Lionberger, Robert A; Rosencrance, Susan M; Yu, Lawrence X
2014-09-01
Under the Generic Drug User Fee Amendments (GDUFA) of 2012, Type II active pharmaceutical ingredient (API) drug master files (DMFs) must pay a user fee and pass a Completeness Assessment (CA) before they can be referenced in an Abbreviated New Drug Application (ANDA), ANDA amendment, or ANDA prior approval supplement (PAS). During the first year of GDUFA implementation, from October 1, 2012 to September 30, 2013, approximately 1,500 Type II API DMFs received at least one cycle of CA review and more than 1,100 Type II DMFs were deemed complete and published on FDA's "Available for Reference List". The data from CA reviews were analyzed for factors that influenced the CA review process and metrics, as well as the areas of DMF submissions which most frequently led to an incomplete CA status. The metrics analysis revealed that electronic DMFs appear to improve the completeness of submission and shorten both the review and response times. Utilizing the CA checklist to compile and proactively update the DMFs improves the chance for the DMFs to pass the CA in the first cycle. However, given that the majority of DMFs require at least two cycles of CA before being deemed complete, it is recommended that DMF fees be paid 6 months in advance of the ANDA submissions in order to avoid negatively impacting the filing status of the ANDAs.
Performance metrics for the assessment of satellite data products: an ocean color case study
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
Performance Metrics for Soil Moisture Retrievals and Applications Requirements
USDA-ARS?s Scientific Manuscript database
Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the accuracy of geophysical retrievals and true fields. These metrics are generally related; nevertheless each has advantages and disadvantages. In this study we explore the relat...
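For reference, the two quadratic metrics named above can be computed directly; the retrieval and reference series are placeholders:

```python
import numpy as np

def rmse(estimate, truth):
    """Root-mean-square error between a retrieval and the reference series."""
    return float(np.sqrt(np.mean((np.asarray(estimate) - np.asarray(truth)) ** 2)))

def time_series_correlation(estimate, truth):
    """Pearson correlation between retrieved and reference time series."""
    return float(np.corrcoef(estimate, truth)[0, 1])

# Hypothetical soil moisture retrievals versus a reference time series.
truth = [0.20, 0.25, 0.22, 0.30, 0.28, 0.35]
retrieval = [0.18, 0.27, 0.20, 0.33, 0.25, 0.37]
print(rmse(retrieval, truth), time_series_correlation(retrieval, truth))
```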
Demonstrating the Environmental & Economic Cost-Benefits of Reusing DoD’s Pre-World War II Buildings
2013-04-01
List of tables (excerpt): Table IV-2, Summary Results PO1, NPV of Life Cycle Costs without Factoring GHGs; Table IV-3, Summary Results PO1, NPV of Life Cycle Costs with Monetized GHGs; Table IV-4, Construction Cost Comparisons; Table IV-6, Summary Results PO2, GHG Reductions in Metric Tons by Scope.
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well-tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations and the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
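A hedged sketch of a three-component spatial-pattern score in the spirit of SPAEF (correlation, coefficient-of-variation ratio, and histogram overlap combined as 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2)); the normalization details and the example fields are assumptions, so this is not the authors' reference implementation:

```python
import numpy as np

def spaef_like(sim, obs, bins=100):
    """Three-component spatial efficiency: correlation, variability ratio, histogram overlap.
    Follows the general form 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2); details assumed."""
    sim, obs = np.ravel(sim), np.ravel(obs)
    alpha = np.corrcoef(sim, obs)[0, 1]                                   # pattern correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))    # coefficient-of-variation ratio
    # Histogram overlap of z-scored fields, so this component ignores bias and units.
    zs = (sim - np.mean(sim)) / np.std(sim)
    zo = (obs - np.mean(obs)) / np.std(obs)
    edges = np.linspace(min(zs.min(), zo.min()), max(zs.max(), zo.max()), bins + 1)
    hs, _ = np.histogram(zs, bins=edges)
    ho, _ = np.histogram(zo, bins=edges)
    gamma = np.sum(np.minimum(hs, ho)) / np.sum(ho)                       # histogram intersection
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# Example with two placeholder evapotranspiration-like fields.
rng = np.random.default_rng(0)
obs_field = rng.random((50, 60))
sim_field = obs_field + 0.1 * rng.random((50, 60))
print(spaef_like(sim_field, obs_field))
```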
Thorium Fuel Cycle Option Screening in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taiwo, Temitope A.; Kim, Taek K.; Wigeland, Roald A.
2016-05-01
As part of a nuclear fuel cycle Evaluation and Screening (E&S) study, a wide range of thorium fuel cycle options were evaluated and their performance characteristics and challenges to implementation were compared to those of other nuclear fuel cycle options based on criteria specified by the Nuclear Energy Office of the U.S. Department of Energy (DOE). The evaluated nuclear fuel cycles included the once-through, limited, and continuous recycle options using critical or externally-driven nuclear energy systems. The E&S study found that the continuous recycle of 233U/Th in fuel cycles using either thermal or fast reactors is an attractive and promising fuel cycle option with high effective fuel resource utilization and low waste generation, but it did not perform quite as well as the continuous recycle of Pu/U using a fast critical system, which was identified as one of the most promising fuel cycle options in the E&S study. This is because, compared to their uranium counterparts, the thorium-based systems tended to have higher radioactivity in the short term (about 100 years post irradiation) because of differences in the fission product yield curves, and in the long term (100,000 years post irradiation) because of the decay of 233U and daughters, and because of higher mass flow rates due to lower discharge burnups. Some of the thorium-based systems also require enriched uranium support, which tends to be detrimental to resource utilization and waste generation metrics. Finally, similar to the need for developing recycle fuel fabrication, fuels separations and fast reactors for the most promising options using Pu/U recycle, the future thorium-based fuel cycle options with continuous recycle would also require such capabilities, although their deployment challenges are expected to be higher since such facilities have not been developed in the past to a comparable level of maturity for Th-based systems.
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-01-01
Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.
Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K
2018-06-15
The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
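A minimal sketch of the group comparison described above (Mann-Whitney U with a Bonferroni correction across the five primary metrics); the timing values are invented:

```python
from scipy.stats import mannwhitneyu

# Hypothetical time-to-completion (seconds) for expert and novice groups on the simulator.
experts = [38, 42, 35, 40, 44, 37, 41, 39, 36, 43]
novices = [61, 75, 58, 80, 66, 71, 69, 77, 63, 74]

u_stat, p_value = mannwhitneyu(experts, novices, alternative="two-sided")
n_metrics = 5                              # number of primary metrics being compared
bonferroni_alpha = 0.05 / n_metrics        # corrected significance threshold
print(f"U={u_stat}, p={p_value:.4g}, significant={p_value < bonferroni_alpha}")
```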
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, F; Jeudy, J; Tseng, H
Purpose: To investigate the incorporation of pre-therapy regional ventilation function in predicting radiation fibrosis (RF) in stage III non-small-cell lung cancer (NSCLC) patients treated with concurrent thoracic chemoradiotherapy. Methods: 37 stage III NSCLC patients were retrospectively studied. Patients received one cycle of cisplatin-gemcitabine, followed by two to three cycles of cisplatin-etoposide concurrently with involved-field thoracic radiotherapy between 46 and 66 Gy (2 Gy per fraction). Pre-therapy regional ventilation images of the lung were derived from 4DCT via a density-change-based image registration algorithm with mass correction. RF was evaluated at 6-months post-treatment using radiographic scoring based on airway dilation and volume loss. Three types of ipsilateral lung metrics were studied: (1) conventional dose-volume metrics (V20, V30, V40, and mean-lung-dose (MLD)), (2) dose-function metrics (fV20, fV30, fV40, and functional mean-lung-dose (fMLD) generated by combining regional ventilation and dose), and (3) dose-subvolume metrics (sV20, sV30, sV40, and subvolume mean-lung-dose (sMLD) defined as the dose-volume metrics computed on the sub-volume of the lung with at least 60% of the quantified maximum ventilation status). Receiver operating characteristic (ROC) curve analysis and logistic regression analysis were used to evaluate the predictability of these metrics for RF. Results: In predicting airway dilation, the area under the ROC curve (AUC) values for (V20, MLD), (fV20, fMLD), and (sV20, and sMLD) were (0.76, 0.70), (0.80, 0.74) and (0.82, 0.80), respectively. The logistic regression p-values were (0.09, 0.18), (0.02, 0.05) and (0.004, 0.006), respectively. With regard to volume loss, the corresponding AUC values for these metrics were (0.66, 0.57), (0.67, 0.61) and (0.71, 0.69), and p-values were (0.95, 0.90), (0.43, 0.64) and (0.08, 0.12), respectively. Conclusion: The inclusion of regional ventilation function improved predictability of radiation fibrosis. Dose-subvolume metrics provided a promising method for incorporating functional information into the conventional dose-volume parameters for outcome assessment.
Systems Engineering Techniques for ALS Decision Making
NASA Technical Reports Server (NTRS)
Rodriquez, Luis F.; Drysdale, Alan E.; Jones, Harry; Levri, Julie A.
2004-01-01
The Advanced Life Support (ALS) Metric is the predominant tool for predicting the cost of ALS systems. Metric goals for the ALS Program are daunting, requiring a threefold increase in the ALS Metric by 2010. Confounding the problem, the rate at which new ALS technologies reach the maturity required for consideration in the ALS Metric and the rate at which new configurations are developed are both slow, limiting the search space; from this perspective, the ALS Metric goals may remain elusive. This paper is a sequel to a paper published in the proceedings of the 2003 ICES conference entitled "Managing to the metric: an approach to optimizing life support costs." The conclusions of that paper state that the largest contributors to the ALS Metric should be targeted by ALS researchers and management for maximum metric reductions. Certainly, these areas offer large potential benefits to future ALS missions; however, the ALS Metric is not the only decision-making tool available to the community. To facilitate decision-making within the ALS community, a combination of metrics should be utilized: the Equivalent System Mass (ESM)-based ALS Metric, but also those available through techniques such as life cycle costing and faithful consideration of the sensitivity of the assumed models and data. Often a lack of data is cited as the reason why these techniques are not considered for utilization. An existing database development effort within the ALS community, known as OPIS, may provide the opportunity to collect the information necessary to enable the proposed systems analyses. A review of these additional analysis techniques is provided, focusing on the data necessary to enable them. The discussion concludes by proposing how the data may be utilized by analysts in the future.
Sleep Disturbance in Female Flight Attendants and Teachers.
Grajewski, Barbara; Whelan, Elizabeth A; Nguyen, Mimi M; Kwan, Lorna; Cole, Roger J
2016-07-01
Flight attendants (FAs) may experience circadian disruption due to travel during normal sleep hours and through multiple time zones. This study investigated whether FAs are at higher risk for sleep disturbance compared to teachers, as assessed by questionnaire, diary, and activity monitors. Sleep/wake cycles of 45 FAs and 25 teachers were studied. For one menstrual cycle, participants wore an activity monitor and kept a daily diary. Sleep metrics included total sleep in the main sleep period (MSP), sleep efficiency (proportion of MSP spent sleeping), and nocturnal sleep fraction (proportion of sleep between 10 p.m. and 8 a.m. home time). Relationships between sleep metrics and occupation were analyzed with mixed and generalized linear models. Both actigraph and diary data suggest that FAs sleep longer than teachers. However, several actigraph indices of sleep disturbance indicated that FAs incurred significant impairment of sleep compared to teachers. FAs were more likely than teachers to have poor sleep efficiency [adjusted odds ratio (OR) for lowest quartile of sleep efficiency = 1.9, 95% Confidence Interval (CI) 1.2-3.0] and to have a smaller proportion of their sleep between 10 p.m. and 8 a.m. home time (adjusted OR for lowest quartile of nocturnal sleep fraction = 3.1, CI 1.1-9.0). Study FAs experienced increased sleep disturbance compared to teachers, which may indicate circadian disruption. Grajewski B, Whelan EA, Nguyen MM, Kwan L, Cole RJ. Sleep disturbance in female flight attendants and teachers. Aerosp Med Hum Perform. 2016; 87(7):638-645.
A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2011-01-01
Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
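A sketch of the numerical-integration step for a volume-under-surface metric; the surface below is a placeholder defined over false positive rate and normalized latency, not the paper's construction from linked detection and classification curves:

```python
import numpy as np
from scipy.integrate import trapezoid

# Hypothetical detection-rate surface sampled on a grid of false positive rate
# and normalized diagnostic latency (both on [0, 1]).
fpr = np.linspace(0.0, 1.0, 21)
latency = np.linspace(0.0, 1.0, 11)
FPR, LAT = np.meshgrid(fpr, latency, indexing="ij")
tpr_surface = np.clip(FPR ** 0.3 * (1.0 - 0.4 * LAT), 0.0, 1.0)   # placeholder surface

# Volume under the surface: integrate over latency for each FPR, then over FPR.
volume = trapezoid(trapezoid(tpr_surface, latency, axis=1), fpr)
print(f"volume under surface = {volume:.3f}")
```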
Application of Domain Knowledge to Software Quality Assurance
NASA Technical Reports Server (NTRS)
Wild, Christian W.
1997-01-01
This work focused on capturing, using, and evolving a qualitative decision support structure across the life cycle of a project. The particular application of this study was towards business process reengineering and the representation of the business process in a set of Business Rules (BR). In this work, we defined a decision model which captured the qualitative decision deliberation process. It represented arguments both for and against proposed alternatives to a problem. It was felt that the subjective nature of many critical business policy decisions required a qualitative modeling approach similar to that of Lee and Mylopoulos. While previous work was limited almost exclusively to the decision capture phase, which occurs early in the project life cycle, we investigated the use of such a model during the later stages as well. One of our significant developments was the use of the decision model during the operational phase of a project. By operational phase, we mean the phase in which the system or set of policies which were earlier decided are deployed and put into practice. By making the decision model available to operational decision makers, they would have access to the arguments pro and con for a variety of actions and can thus make a more informed decision which balances the often conflicting criteria by which the value of action is measured. We also developed the concept of a 'monitored decision' in which metrics of performance were identified during the decision making process and used to evaluate the quality of that decision. It is important to monitor those decisions that seem at highest risk of not meeting their stated objectives. Operational decisions are also potentially high risk decisions. Finally, we investigated the use of performance metrics for monitored decisions and audit logs of operational decisions in order to feed an evolutionary phase of the life cycle. During evolution, decisions are revisited, assumptions verified or refuted, and possible reassessments resulting in new policy are made. In this regard we implemented a machine learning algorithm which automatically defined business rules based on expert assessment of the quality of operational decisions as recorded during deployment.
Cripton, Peter A; Shen, Hui; Brubacher, Jeff R; Chipman, Mary; Friedman, Steven M; Harris, M Anne; Winters, Meghan; Reynolds, Conor C O; Cusimano, Michael D; Babul, Shelina; Teschke, Kay
2015-01-01
Objective To examine the relationship between cycling injury severity and personal, trip, route and crash characteristics. Methods Data from a previous study of injury risk, conducted in Toronto and Vancouver, Canada, were used to classify injury severity using four metrics: (1) did not continue trip by bike; (2) transported to hospital by ambulance; (3) admitted to hospital; and (4) Canadian Triage and Acuity Scale (CTAS). Multiple logistic regression was used to examine associations with personal, trip, route and crash characteristics. Results Of 683 adults injured while cycling, 528 did not continue their trip by bike, 251 were transported by ambulance and 60 were admitted to hospital for further treatment. Treatment urgencies included 75 as CTAS=1 or 2 (most medically urgent), 284 as CTAS=3, and 320 as CTAS=4 or 5 (least medically urgent). Older age and collision with a motor vehicle were consistently associated with increased severity in all four metrics and statistically significant in three each (both variables with ambulance transport and CTAS; age with hospital admission; and motor vehicle collision with did not continue by bike). Other factors were consistently associated with more severe injuries, but statistically significant in one metric each: downhill grades; higher motor vehicle speeds; sidewalks (these significant for ambulance transport); multiuse paths and local streets (both significant for hospital admission). Conclusions In two of Canada's largest cities, about one-third of the bicycle crashes were collisions with motor vehicles and the resulting injuries were more severe than in other crash circumstances, underscoring the importance of separating cyclists from motor vehicle traffic. Our results also suggest that bicycling injury severity and injury risk would be reduced on facilities that minimise slopes, have lower vehicle speeds, and that are designed for bicycling rather than shared with pedestrians. PMID:25564148
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
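A generic surrogate-model sketch standing in for the low-dimensional metamodel idea: parameters are sampled (here randomly rather than on a sparse grid), single-cycle responses are tabulated, and an interpolator returns the response anywhere inside the sampled space; the parameter names, ranges, and response function are invented, and RBFInterpolator requires SciPy 1.7+:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical single-cycle results: each row of X holds sampled operating parameters
# (e.g., EGR fraction, spark timing, intake temperature), and y is the resulting
# integrated heat release from one cycle simulation (placeholder response).
rng = np.random.default_rng(1)
X = rng.uniform(low=[0.15, -10.0, 300.0], high=[0.30, 10.0, 340.0], size=(60, 3))
y = 900.0 - 1500.0 * (X[:, 0] - 0.2) ** 2 + 3.0 * X[:, 1] + 0.5 * (X[:, 2] - 320.0)

# Radial-basis-function surrogate: once fit, it interpolates the engine response
# at any query point inside the sampled parameter space.
metamodel = RBFInterpolator(X, y)
query = np.array([[0.22, 2.0, 325.0]])
print(metamodel(query))
```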
Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan
2014-12-01
Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities (the probability a facility is in a top quantile) and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
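A sketch of computing the continuous-valued profiling metric from posterior simulation replications, i.e. the probability that each facility falls in the top quintile of the composite score; the draws below are synthetic placeholders rather than output of the paper's hierarchical model:

```python
import numpy as np

# draws: posterior simulation replications of the composite quality score,
# shape (n_draws, n_facilities). Placeholder values for illustration.
rng = np.random.default_rng(42)
n_draws, n_facilities = 4000, 112
facility_means = rng.normal(0.0, 0.5, n_facilities)
draws = rng.normal(loc=facility_means, scale=0.3, size=(n_draws, n_facilities))

# For each draw, flag facilities in the top quintile, then average over draws to get
# the probability each facility is among the top 20%.
cutoff = int(np.ceil(0.2 * n_facilities))
ranks = np.argsort(-draws, axis=1)                  # best facility first within each draw
in_top = np.zeros_like(draws, dtype=bool)
np.put_along_axis(in_top, ranks[:, :cutoff], True, axis=1)
prob_top_quintile = in_top.mean(axis=0)             # continuous-valued profiling metric
print(prob_top_quintile[:10])
```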
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
An Evaluation of the IntelliMetric[SM] Essay Scoring System
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine
2006-01-01
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
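A small, self-contained illustration of the two metric families compared above, on a synthetic two-community graph; greedy modularity optimization stands in here for the four algorithms studied, and all names and parameters are illustrative only.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Synthetic graph with two planted communities; nodes 0-49 and 50-99 form the ground truth
G = nx.planted_partition_graph(l=2, k=50, p_in=0.2, p_out=0.01, seed=42)
truth = [n // 50 for n in range(G.number_of_nodes())]

# Detected clustering (greedy modularity used as a stand-in for Louvain et al.)
communities = greedy_modularity_communities(G)
labels = [0] * G.number_of_nodes()
for cid, nodes in enumerate(communities):
    for n in nodes:
        labels[n] = cid

# Stand-alone quality metric vs. information recovery metrics
print("modularity:", modularity(G, communities))
print("adjusted Rand:", adjusted_rand_score(truth, labels))
print("NMI:", normalized_mutual_info_score(truth, labels))
```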
Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Marquez, Andres; Rawson, Andrew
2013-06-30
As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
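The DCeP definition quoted above is a straightforward ratio; a toy calculation with invented figures for useful work and facility energy might look like this.

```python
# Data Center Energy Productivity: useful work produced per unit of energy consumed.
# The definition of "useful work" is workload-specific; here it is simply a count of
# completed jobs weighted by an assumed per-job value (illustrative numbers only).
jobs_completed = 1840          # tasks finished during the assessment window
value_per_job = 1.0            # relative "usefulness" weight assigned to each task
energy_consumed_kwh = 5200.0   # total facility energy over the same window

dcep = (jobs_completed * value_per_job) / energy_consumed_kwh
print(f"DCeP = {dcep:.3f} useful work units per kWh")
```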
Ocean Carbon Cycle Feedbacks Under Negative Emissions
NASA Astrophysics Data System (ADS)
Schwinger, Jörg; Tjiputra, Jerry
2018-05-01
Negative emissions will most likely be needed to achieve ambitious climate targets, such as limiting global warming to 1.5°C. Here we analyze the ocean carbon-concentration and carbon-climate feedback in an Earth system model under an idealized strong CO2 peak and decline scenario. We find that the ocean carbon-climate feedback is not reversible by means of negative emissions on decadal to centennial timescales. When preindustrial surface climate is restored, the oceans, due to the carbon-climate feedback, still contain about 110 Pg less carbon compared to a simulation without climate change. This result is unsurprising but highlights an issue with a widely used carbon cycle feedback metric. We show that this metric can be greatly improved by using ocean potential temperature as a proxy for climate change. The nonlinearity (nonadditivity) of climate and CO2-driven feedbacks continues to grow after the atmospheric CO2 peak.
Luk, Jason M; Pourbafrani, Mohammad; Saville, Bradley A; MacLean, Heather L
2013-09-17
Our study evaluates life cycle energy use and GHG emissions of lignocellulosic ethanol and bioelectricity use in U.S. light-duty vehicles. The well-to-pump, pump-to-wheel, and vehicle cycle stages are modeled. All ethanol (E85) and bioelectricity pathways have similar life cycle fossil energy use (~ 100 MJ/100 vehicle kilometers traveled (VKT)) and net GHG emissions (~5 kg CO2eq./100 VKT), considerably lower (65-85%) than those of reference gasoline and U.S. grid-electricity pathways. E85 use in a hybrid vehicle and bioelectricity use in a fully electric vehicle also have similar life cycle biomass and total energy use (~ 350 and ~450 MJ/100 VKT, respectively); differences in well-to-pump and pump-to-wheel efficiencies can largely offset each other. Our energy use and net GHG emissions results contrast with findings in literature, which report better performance on these metrics for bioelectricity compared to ethanol. The primary source of differences in the studies is related to our development of pathways with comparable vehicle characteristics. Ethanol or vehicle electrification can reduce petroleum use, while bioelectricity may displace nonpetroleum energy sources. Regional characteristics may create conditions under which either ethanol or bioelectricity may be the superior option; however, neither has a clear advantage in terms of GHG emissions or energy use.
Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip
2017-06-01
Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate for variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was a large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.
Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.
2003-01-01
Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.
NASA Technical Reports Server (NTRS)
1987-01-01
The detailed design of a small beam-powered trans-atmospheric vehicle, 'The Apollo Lightcraft,' was selected as the project for the design course. The vehicle has a lift-off gross weight of about six (6) metric tons and the capability to transport 500 kg of payload (five people plus spacesuits) to low Earth orbit. Beam power was limited to 10 gigawatts. The principal goal of this project is to reduce the low-Earth-orbit payload delivery cost by at least three orders of magnitude below the space shuttle orbiter--in the post-2020 era. The completely reusable, single-stage-to-orbit shuttle craft will take off and land vertically, and have a reentry heat shield integrated with its lower surface--much like the Apollo command module. At the appropriate points along the launch trajectory, the combined cycle propulsion system will transition through three or four air breathing modes, and finally a pure rocket mode for orbital insertion. As with any revolutionary flight vehicle, engine development must proceed first. Hence, the objective for the spring semester propulsion course was to design and perform a detailed theoretical analysis on an advanced combined-cycle engine suitable for the Apollo Lightcraft. The analysis indicated that three air breathing cycles will be adequate for the mission, and that the ramjet cycle is unnecessary.
Structural texture similarity metrics for image analysis and retrieval.
Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L
2013-07-01
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
Understanding what your sales manager is up against.
Trailer, Barry; Dickie, Jim
2006-01-01
Every year, the research firm CSO Insights publishes the results of its Sales Performance Optimization survey, an online questionnaire given to more than 1,000 sales executives worldwide that seeks to examine the effectiveness of key sales practices and metrics. In this article, two partners from CSO provide the 2005 and 2006 survey highlights, which describe the challenges today's sales organizations face and how they're responding. An overall theme is the degree to which the buy cycle has gotten out of sync with the sell cycle. Buyers have always had a buy cycle, starting at the point they perceive a need. Sellers have always had a sales cycle, starting at the point they spot a prospect. Traditionally, the two have dovetailed--either because the seller created the buyer's perception of need or because the buyer pursued a need by contacting a salesperson (often for product information). Now the buy cycle is often well under way before the seller is even aware there is a cycle--in part because of the information asymmetry created by the Internet. The implications for managing a sales organization are profound in that sales training must now address how reps handle an environment in which buyers have more knowledge than they do. The authors offer evidence that sales executives are taking--and should take--aggressive action to train and retain sales talent, manage the sales process, and use sales support technologies to meet the challenges of this new environment.
Statistical Characterization of School Bus Drive Cycles Collected via Onboard Logging Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Walkowicz, K.
In an effort to characterize the dynamics typical of school bus operation, National Renewable Energy Laboratory (NREL) researchers set out to gather in-use duty cycle data from school bus fleets operating across the country. Employing a combination of Isaac Instruments GPS/CAN data loggers in conjunction with existing onboard telemetric systems resulted in the capture of operating information for more than 200 individual vehicles in three geographically unique domestic locations. In total, over 1,500 individual operational route shifts from Washington, New York, and Colorado were collected. Upon completing the collection of in-use field data using either NREL-installed data acquisition devices or existing onboard telemetry systems, large-scale duty-cycle statistical analyses were performed to examine underlying vehicle dynamics trends within the data and to explore vehicle operation variations between fleet locations. Based on the results of these analyses, high, low, and average vehicle dynamics requirements were determined, resulting in the selection of representative standard chassis dynamometer test cycles for each condition. In this paper, the methodology and accompanying results of the large-scale duty-cycle statistical analysis are presented, including graphical and tabular representations of a number of relationships between key duty-cycle metrics observed within the larger data set. In addition to presenting the results of this analysis, conclusions are drawn and presented regarding potential applications of advanced vehicle technology as it relates specifically to school buses.
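A compact sketch of the kind of duty-cycle metrics referred to above, computed from a synthetic 1 Hz speed trace standing in for the logged GPS/CAN data: average driving speed, idle fraction, peak acceleration, and stops per mile are typical quantities used to compare fleet locations and to select representative dynamometer cycles.

```python
import numpy as np

rng = np.random.default_rng(3)
speed_mph = np.clip(np.cumsum(rng.normal(0, 1.5, 1800)), 0, 45)  # synthetic 1 Hz trace, 30 min

dt_s = 1.0
speed_mps = speed_mph * 0.44704
accel = np.diff(speed_mps) / dt_s

distance_mi = speed_mph.sum() * dt_s / 3600.0
moving = speed_mph > 1.0

metrics = {
    "distance_mi": distance_mi,
    "avg_driving_speed_mph": speed_mph[moving].mean() if moving.any() else 0.0,
    "idle_fraction": 1.0 - moving.mean(),
    "max_accel_mps2": accel.max(),
    "stops_per_mile": (np.diff(moving.astype(int)) == -1).sum() / max(distance_mi, 1e-6),
}
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```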
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
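A brief numerical illustration of the argument above, on synthetic data: bias, RMSE, and correlation are computed directly, and the same information is contained in the parameters (intercept, slope, residual spread) of a simple additive linear error model fit to the same data.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.gamma(shape=2.0, scale=5.0, size=2000)            # reference values
estimate = 1.5 + 0.8 * truth + rng.normal(0.0, 2.0, 2000)     # measurements with linear error

# Conventional summary metrics
bias = (estimate - truth).mean()
rmse = np.sqrt(((estimate - truth) ** 2).mean())
corr = np.corrcoef(truth, estimate)[0, 1]

# Simple linear error model: estimate = a + b * truth + eps
b, a = np.polyfit(truth, estimate, 1)
resid_std = (estimate - (a + b * truth)).std()

print(f"bias={bias:.2f}  rmse={rmse:.2f}  corr={corr:.3f}")
print(f"error model: a={a:.2f}, b={b:.2f}, sigma={resid_std:.2f}")
```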
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Adam W; Phillips, Caleb T; Perr-Sauer, Jordan
Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize on-road driving behavior including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups or clusters using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical drive cycle data captured from medium- and heavy-duty vehicles operating across the United States. The data encompass existing DOE activities as well as contributions from valued industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used as a source for this analysis has been collected from a total of 30 unique fleets/data providers operating across 22 unique geographic locations spread across the United States. This includes locations with topographies ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. This paper includes the results of the statistical analysis performed by NREL and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Sean; Dewan, Leslie; Massie, Mark
This report presents results from a collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear (GAIN) Nuclear Energy Voucher program. The TAP concept is a molten salt reactor using configurable zirconium hydride moderator rod assemblies to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. The implementation of continuous-energy Monte Carlo transport and depletion tools in ChemTriton provide for full-core three-dimensional modeling and simulation. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this concept. Additional analyses of mass feed rates and enrichments, isotopic removals, tritium generation, core power distribution, core vessel helium generation, moderator rod heat deposition, and reactivity coefficients provide additional information to make informed design decisions. This work demonstrates capabilities of ORNL modeling and simulation tools for neutronic and fuel cycle analysis of molten salt reactor concepts.
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies. That is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
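A schematic comparison, on invented outcome data, of the two families of robustness metrics discussed above: a satisficing-style signal-to-noise score (mean performance relative to its dispersion across futures) and a regret score (shortfall relative to the best-performing policy in each future).

```python
import numpy as np

rng = np.random.default_rng(11)
n_futures = 1000

# Hypothetical net-benefit scores (higher is better) for three candidate
# flood-risk policies evaluated across an ensemble of plausible futures.
outcomes = {
    "raise_dikes":    rng.normal(100, 10, n_futures),
    "room_for_river": rng.normal(105, 30, n_futures),
    "floodproofing":  rng.normal(95, 5, n_futures),
}

perf = np.column_stack(list(outcomes.values()))        # futures x policies
best_per_future = perf.max(axis=1, keepdims=True)

for i, name in enumerate(outcomes):
    x = perf[:, i]
    snr = x.mean() / x.std()                           # satisficing-style: mean relative to dispersion
    regret = best_per_future[:, 0] - x                 # shortfall vs. best policy in each future
    print(f"{name:15s} mean={x.mean():6.1f}  SNR={snr:5.2f}  max regret={regret.max():6.1f}")
```

Depending on whether the signal-to-noise score or the maximum regret is used, a different policy can appear "most robust", which is the kind of divergence the study examines.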
Metric for evaluation of filter efficiency in spectral cameras.
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2016-11-10
Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect set of filters, and zero, for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
DOT National Transportation Integrated Search
2013-04-01
"This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. : It examines how such metrics are used in Australia, Britain...
Biometric Subject Verification Based on Electrocardiographic Signals
NASA Technical Reports Server (NTRS)
Dusan, Sorin V. (Inventor); Jorgensen, Charles C. (Inventor)
2014-01-01
A method of authenticating or declining to authenticate an asserted identity of a candidate-person. In an enrollment phase, a reference PQRST heart action graph is provided or constructed from information obtained from a plurality of graphs that resemble each other for a known reference person, using a first graph comparison metric. In a verification phase, a candidate-person asserts his/her identity and presents a plurality of his/her heart cycle graphs. If a sufficient number of the candidate-person's measured graphs resemble each other, a representative composite graph is constructed from the candidate-person's graphs and is compared with a composite reference graph, for the person whose identity is asserted, using a second graph comparison metric. When the second metric value lies in a selected range, the candidate-person's assertion of identity is accepted.
NASA Technical Reports Server (NTRS)
McFarland, Shane M.; Norcross, Jason
2016-01-01
Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.
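One simple way to realize the single-score idea described above is a weighted combination of standardized sub-metrics; the weights, the z-score normalization, and the numbers below are placeholders, since the abstract leaves coefficient determination to forward work.

```python
import numpy as np

# Hypothetical per-trial results for one suit: completion time (s),
# metabolic cost (kcal), and cost normalized to suited system mass (kcal/kg)
trials = np.array([
    [410.0, 95.0, 0.52],
    [385.0, 88.0, 0.48],
    [450.0, 102.0, 0.55],
])

# Reference statistics pooled across all suits/trials (assumed here)
ref_mean = np.array([420.0, 96.0, 0.53])
ref_std = np.array([40.0, 9.0, 0.05])

weights = np.array([0.3, 0.4, 0.3])      # placeholder weighting of the sub-metrics

# Lower is better for every sub-metric, so negate the z-scores before combining
z = (trials - ref_mean) / ref_std
composite = (-z * weights).sum(axis=1)
print("composite suit-performance scores per trial:", np.round(composite, 2))
```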
Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam
2017-02-01
This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated from all participants in the 2014 national under 18 (U18) championships (18 games) drafted into the AFL (n = 65; 17.8 ± 0.5 y); 232 observations. Players were subdivided into draft position (ranked 1-65) and then draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection. Physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces. Juniors with contested skill appear prioritised.
Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery
Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack
2015-01-01
Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
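An illustrative computation, on synthetic position samples, of two of the motion metrics named above: path length as the summed distance between consecutive instrument positions, and a jerk-based (un)smoothness score. The exact smoothness definition used by the tracking system is not specified here, so this is only one plausible choice.

```python
import numpy as np

rng = np.random.default_rng(5)
fps = 30.0
t = np.arange(0, 10, 1 / fps)

# Synthetic 3-D instrument trajectory (mm): smooth motion plus hand tremor
pos = np.column_stack([10 * np.sin(0.5 * t), 5 * np.cos(0.3 * t), 2 * t])
pos += rng.normal(0, 0.05, pos.shape)

# Path length: total distance travelled by the instrument tip
steps = np.diff(pos, axis=0)
path_length_mm = np.linalg.norm(steps, axis=1).sum()

# Jerk-based (un)smoothness: integrated squared jerk, larger = less smooth
vel = np.gradient(pos, 1 / fps, axis=0)
acc = np.gradient(vel, 1 / fps, axis=0)
jerk = np.gradient(acc, 1 / fps, axis=0)
unsmoothness = (np.linalg.norm(jerk, axis=1) ** 2).sum() / fps

print(f"path length: {path_length_mm:.1f} mm")
print(f"integrated squared jerk: {unsmoothness:.3e} mm^2/s^5")
```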
Tools and Metrics for Environmental Sustainability
Within the U.S. Environmental Protection Agency’s Office of Research and Development the National Risk Management Research Laboratory has been developing tools to help design and evaluate chemical processes with a life cycle perspective. These tools include the Waste Reduction (...
AN ADVANCED SYSTEM FOR POLLUTION PREVENTION IN CHEMICAL COMPLEXES
One important accomplishment is that the system will give process engineers interactively and simultaneously use of programs for total cost analysis, life cycle assessment and sustainability metrics to provide direction for the optimal chemical complex analysis pro...
Measuring Distribution Performance? Benchmarking Warrants Your Attention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ericson, Sean J; Alvarez, Paul
Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.
National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?
Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N
2017-12-01
To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.
Metrics for Evaluation of Student Models
ERIC Educational Resources Information Center
Pelanek, Radek
2015-01-01
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K
2015-10-01
Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Poisson, Sharon N.; Josephson, S. Andrew
2011-01-01
Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840
Mapping multiple components of malaria risk for improved targeting of elimination interventions.
Cohen, Justin M; Le Menach, Arnaud; Pothin, Emilie; Eisele, Thomas P; Gething, Peter W; Eckhoff, Philip A; Moonen, Bruno; Schapira, Allan; Smith, David L
2017-11-13
There is a long history of considering the constituent components of malaria risk and the malaria transmission cycle via the use of mathematical models, yet strategic planning in endemic countries tends not to take full advantage of available disease intelligence to tailor interventions. National malaria programmes typically make operational decisions about where to implement vector control and surveillance activities based upon simple categorizations of annual parasite incidence. With technological advances, an enormous opportunity exists to better target specific malaria interventions to the places where they will have greatest impact by mapping and evaluating metrics related to a variety of risk components, each of which describes a different facet of the transmission cycle. Here, these components and their implications for operational decision-making are reviewed. For each component, related mappable malaria metrics are also described which may be measured and evaluated by malaria programmes seeking to better understand the determinants of malaria risk. Implementing tailored programmes based on knowledge of the heterogeneous distribution of the drivers of malaria transmission rather than only consideration of traditional metrics such as case incidence has the potential to result in substantial improvements in decision-making. As programmes improve their ability to prioritize their available tools to the places where evidence suggests they will be most effective, elimination aspirations may become increasingly feasible.
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?
Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J
2014-02-01
Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to specifically determine which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between each group for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulators for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulators for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
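The group comparison described above reduces to a rank-based test applied metric by metric; a small sketch with invented scores for the two experience groups shows the procedure.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2024)

# Hypothetical instrument path length (cm) on the simulator: experts tend to be shorter
general_surgeons = rng.normal(loc=320, scale=60, size=29)
colorectal_experts = rng.normal(loc=250, scale=50, size=24)

stat, p_value = mannwhitneyu(general_surgeons, colorectal_experts, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
print("construct validity supported for this metric" if p_value < 0.05
      else "no significant difference between experience levels")
```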
NASA Astrophysics Data System (ADS)
Escartin, Terenz R.; Nano, Tomi F.; Cunningham, Ian A.
2016-03-01
The detective quantum efficiency (DQE), expressed as a function of spatial frequency, describes the ability of an x-ray detector to produce high signal-to-noise ratio (SNR) images. While regulatory and scientific communities have used the DQE as a primary metric for optimizing detector design, the DQE is rarely used by end users to ensure high system performance is maintained. Of concern is that image quality varies across different systems for the same exposures with no current measures available to describe system performance. Therefore, here we conducted an initial DQE measurement survey of clinical x-ray systems using a DQE-testing instrument to identify their range of performance. Following laboratory validation, experiments revealed that the DQE of five different systems under the same exposure level (8.0 μGy) ranged from 0.36 to 0.75 at low spatial frequencies, and 0.02 to 0.4 at high spatial frequencies (3.5 cycles/mm). Furthermore, the DQE dropped substantially with decreasing detector exposure by a factor of up to 1.5x in the lowest spatial frequency, and a factor of 10x at 3.5 cycles/mm due to the effect of detector readout noise. It is concluded that DQE specifications in purchasing decisions, combined with periodic DQE testing, are important factors to ensure patients receive the health benefits of high-quality images for low x-ray exposures.
Rangom, Yverick; Tang, Xiaowu Shirley; Nazar, Linda F
2015-07-28
We report the fabrication of high-performance, self-standing composite sp(2)-carbon supercapacitor electrodes using single-walled carbon nanotubes (CNTs) as conductive binder. The 3-D mesoporous mesh architecture of CNT-based composite electrodes grants unimpaired ionic transport throughout relatively thick films and allows superior performance compared to graphene-based devices at an ac line frequency of 120 Hz. Metrics of 601 μF/cm(2) with a -81° phase angle and a rate capability (RC) time constant of 199 μs are obtained for thin carbon films. The free-standing carbon films were obtained from a chlorosulfonic acid dispersion and interfaced to stainless steel current collectors with various surface treatments. CNT electrodes were able to cycle at 200 V/s and beyond, still showing a characteristic parallelepipedic cyclic voltammetry shape at 1 kV/s. Current densities are measured in excess of 6400 A/g, and the electrodes retain more than 98% capacity after 1 million cycles. These promising results are attributed to a reduction of series resistance in the film through the CNT conductive network and especially to the surface treatment of the stainless steel current collector.
Measuring β-diversity with species abundance data.
Barwell, Louise J; Isaac, Nick J B; Kunin, William E
2015-07-01
In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A ), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim , performed well overall. Only βBaselga R turn , βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
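Two of the metric families discussed above can be written in a few lines; the sketch below computes an abundance-based dissimilarity (Bray-Curtis) and the presence-absence turnover metric beta_sim (Simpson dissimilarity) for one pair of assemblages, using standard textbook formulas rather than the exact implementations evaluated in the paper.

```python
import numpy as np

# Abundance vectors for two assemblages (columns = species)
site_a = np.array([12, 5, 0, 3, 40, 0, 1])
site_b = np.array([10, 0, 7, 2, 12, 6, 0])

# Abundance-based dissimilarity: Bray-Curtis
bray_curtis = 1.0 - 2.0 * np.minimum(site_a, site_b).sum() / (site_a.sum() + site_b.sum())

# Presence-absence turnover: beta_sim (Simpson dissimilarity)
pa, pb = site_a > 0, site_b > 0
shared = np.sum(pa & pb)                 # a: species present at both sites
only_a = np.sum(pa & ~pb)                # b: species unique to site A
only_b = np.sum(~pa & pb)                # c: species unique to site B
beta_sim = min(only_a, only_b) / (shared + min(only_a, only_b))

print(f"Bray-Curtis = {bray_curtis:.3f}, beta_sim = {beta_sim:.3f}")
```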
An Exploratory Study of OEE Implementation in Indian Manufacturing Companies
NASA Astrophysics Data System (ADS)
Kumar, J.; Soni, V. K.
2015-04-01
Globally, the implementation of Overall equipment effectiveness (OEE) has proven to be highly effective in improving availability, performance rate and quality rate while reducing unscheduled breakdown and wastage that stems from the equipment. This paper investigates the present status and future scope of OEE metrics in Indian manufacturing companies through an extensive survey. In this survey, opinions of Production and Maintenance Managers have been analyzed statistically to explore the relationship between factors, perspective of OEE and potential use of OEE metrics. Although the sample has been diverse in terms of product, process type, size, and geographic location of the companies, they are compelled to implement improvement techniques such as OEE metrics to improve performance. The findings reveal that OEE metrics have huge potential and scope to improve performance. Responses indicate that Indian companies are aware of OEE but they are not utilizing the full potential of OEE metrics.
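OEE as referenced above is conventionally the product of availability, performance rate, and quality rate; a minimal worked example with assumed shift data follows.

```python
# Assumed one-shift figures for a single machine (illustrative only)
planned_time_min = 480.0        # scheduled production time
downtime_min = 55.0             # unscheduled breakdowns and changeovers
ideal_cycle_time_min = 0.8      # ideal minutes per part
parts_produced = 450
defective_parts = 12

operating_time = planned_time_min - downtime_min
availability = operating_time / planned_time_min
performance = (ideal_cycle_time_min * parts_produced) / operating_time
quality = (parts_produced - defective_parts) / parts_produced

oee = availability * performance * quality
print(f"availability={availability:.2%} performance={performance:.2%} "
      f"quality={quality:.2%} OEE={oee:.2%}")
```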
A neural net-based approach to software metrics
NASA Technical Reports Server (NTRS)
Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.
1992-01-01
Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among all the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
Metrication report to the Congress
NASA Technical Reports Server (NTRS)
1991-01-01
NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.
Review of Jet Fuel Life Cycle Assessment Methods and Sustainability Metrics
DOT National Transportation Integrated Search
2015-12-01
The primary aim of this study is to help aviation jet fuel purchasers (primarily commercial airlines and the U.S. military) to understand the sustainability implications of their jet fuel purchases and provide guidelines for procuring sustainable fue...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neises, T. W.; Wagner, M. J.; Gray, A. K.
Research of advanced power cycles has shown supercritical carbon dioxide power cycles may have thermal efficiency benefits relative to steam cycles at temperatures around 500-700 degrees C. To realize these benefits for CSP, it is necessary to increase the maximum outlet temperature of current tower designs. Research at NREL is investigating a concept that uses high-pressure supercritical carbon dioxide as the heat transfer fluid to achieve a 650 degrees C receiver outlet temperature. At these operating conditions, creep becomes an important factor in the design of a tubular receiver and contemporary design assumptions for both solar and traditional boiler applications must be revisited and revised. This paper discusses lessons learned for high-pressure, high-temperature tubular receiver design. An analysis of a simplified receiver tube is discussed, and the results show the limiting stress mechanisms in the tube and the impact on the maximum allowable flux as design parameters vary. Results of this preliminary analysis indicate an underlying trade-off between tube thickness and the maximum allowable flux on the tube. Future work will expand the scope of design variables considered and attempt to optimize the design based on cost and performance metrics.
Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.
Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony
2017-12-01
Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. Then the Sigma-metric was estimated for each assay and was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5 and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
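The Sigma-metric equation quoted above translates directly into code; the numbers below are invented for illustration and would in practice come from the applicable allowable-error specification and the laboratory's own precision and bias studies.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (%TEa - |%bias|) / %CV, as defined in the abstract above."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay performance at a medical decision level
tea = 10.0    # allowable total error, %
bias = 1.2    # observed bias, %
cv = 1.5      # observed imprecision, %

sigma = sigma_metric(tea, bias, cv)
print(f"Sigma-metric = {sigma:.1f}",
      "(five Sigma or better)" if sigma >= 5 else "(below five Sigma)")
```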
Restaurant Energy Use Benchmarking Guideline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, R.; Smith, V.; Field, K.
2011-07-01
A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
Improving Department of Defense Global Distribution Performance Through Network Analysis
2016-06-01
Subject terms: supply chain metrics, distribution networks, requisition shipping time, strategic distribution database. USTRANSCOM's Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes-based supply chain metrics. The Joint Staff defines a TDD standard as the maximum number of days the supply chain can take to deliver requisitioned materiel.
Tide or Tsunami? The Impact of Metrics on Scholarly Research
ERIC Educational Resources Information Center
Bonnell, Andrew G.
2016-01-01
Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…
NASA Astrophysics Data System (ADS)
Li, T.; Wang, Z.; Peng, J.
2018-04-01
Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating the carbon cycle. In recent years, airborne LiDAR has shown great ability for high-precision AGB estimation. Most studies estimate AGB from feature metrics extracted from the canopy height distribution of the point cloud, which is calculated based on a precise digital terrain model (DTM). However, if forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is lower, and there may not be enough ground points to establish a DTM. The distribution of forest canopy height is then imprecise, and critical feature metrics that correlate strongly with biomass, such as percentiles, maximums, means, and standard deviations of the canopy point cloud, can hardly be extracted correctly. In order to address this issue, we propose a strategy of first reconstructing LiDAR feature metrics through an Auto-Encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the prediction ability of the reconstructed feature metrics, both original and reconstructed feature metrics were regressed against field-observed AGB using multiple stepwise regression (MS) and partial least squares regression (PLS), respectively. The results showed that the estimation models using reconstructed feature metrics improved R2 by 5.44% and 18.09%, decreased RMSE by 10.06% and 22.13%, and reduced RMSEcv by 10.00% and 21.70% for AGB, respectively. Therefore, reconstructing LiDAR point feature metrics has potential for addressing the AGB estimation challenge in dense canopy areas.
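A minimal sketch of the reconstruct-then-regress idea, using scikit-learn in place of the authors' Auto-Encoder network and entirely synthetic stand-in data, so every name and number here is a placeholder rather than the paper's method:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for plot-level LiDAR feature metrics
    # (e.g. height percentiles, mean, std of canopy returns).
    n_plots, n_metrics = 200, 8
    X_true = rng.normal(size=(n_plots, n_metrics))
    X_degraded = X_true + rng.normal(scale=0.5, size=X_true.shape)  # dense-canopy degradation
    agb = X_true @ rng.uniform(5, 20, n_metrics) + rng.normal(scale=5, size=n_plots)

    # "Auto-encoder"-like step: a small bottleneck MLP mapping degraded metrics
    # back towards clean ones, purely for illustration.
    ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
    ae.fit(X_degraded, X_true)
    X_reconstructed = ae.predict(X_degraded)

    # Regress "field-observed" AGB on degraded vs reconstructed metrics and compare fit.
    for name, X in [("degraded", X_degraded), ("reconstructed", X_reconstructed)]:
        r2 = LinearRegression().fit(X, agb).score(X, agb)
        print(name, round(r2, 3))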
On Railroad Tank Car Puncture Performance: Part I - Considering Metrics
DOT National Transportation Integrated Search
2016-04-12
This paper is the first in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perform...
Tracking occupational hearing loss across global industries: A comparative analysis of metrics
Rabinowitz, Peter M.; Galusha, Deron; McTague, Michael F.; Slade, Martin D.; Wesdock, James C.; Dixon-Ernst, Christine
2013-01-01
Occupational hearing loss is one of the most prevalent occupational conditions; yet, there is no acknowledged international metric to allow comparisons of risk between different industries and regions. In order to make recommendations for an international standard of occupational hearing loss, members of an international industry group (the International Aluminium Association) submitted details of different hearing loss metrics currently in use by members. We compared the performance of these metrics using an audiometric data set for over 6000 individuals working in 10 locations of one member company. We calculated rates for each metric at each location from 2002 to 2006. For comparison, we calculated the difference of observed–expected (for age) binaural high frequency hearing loss (in dB/year) for each location over the same time period. We performed linear regression to determine the correlation between each metric and the observed–expected rate of hearing loss. The different metrics produced discrepant results, with annual rates ranging from 0.0% for a less-sensitive metric to more than 10% for a highly sensitive metric. At least two metrics, a 10 dB age-corrected threshold shift from baseline and a 15 dB non-age-corrected shift metric, correlated well with the difference of observed–expected high-frequency hearing loss. This study suggests that it is feasible to develop an international standard for tracking occupational hearing loss in industrial working populations. PMID:22387709
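For context, a shift-from-baseline metric of the kind compared in the study can be computed from audiograms as sketched below; the frequencies averaged and the age-correction value are simplified assumptions for illustration, not the study's exact definitions:

    def mean_threshold(audiogram, freqs=(2000, 3000, 4000)):
        """Average hearing threshold (dB HL) over the given frequencies."""
        return sum(audiogram[f] for f in freqs) / len(freqs)

    def threshold_shift(baseline, current, age_correction_db=0.0):
        """Shift of the averaged threshold relative to baseline, optionally age-corrected."""
        return mean_threshold(current) - mean_threshold(baseline) - age_correction_db

    baseline = {2000: 10, 3000: 15, 4000: 20}
    current = {2000: 20, 3000: 30, 4000: 35}
    shift = threshold_shift(baseline, current, age_correction_db=3.0)
    flagged = shift >= 10.0   # 10 dB age-corrected shift criterion
    print(round(shift, 1), flagged)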
Do Your Students Measure Up Metrically?
ERIC Educational Resources Information Center
Taylor, P. Mark; Simms, Ken; Kim, Ok-Kyeong; Reys, Robert E.
2001-01-01
Examines released metric items from the Third International Mathematics and Science Study (TIMSS) and the 3rd and 4th grade results. Recommends refocusing instruction on the metric system to improve student performance in measurement. (KHR)
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line by line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, the metric can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system optimize its focusing performance.
Context and meter enhance long-range planning in music performance
Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline
2015-01-01
Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice for the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility-metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
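As an example of the kind of comparison metric used in such studies, the root-mean-square difference and a simple prediction-efficiency skill score between a modeled and an observed time series can be computed as below; the data are synthetic and this is not the Challenge's specific utility metric:

    import numpy as np

    def rmse(obs, mod):
        return float(np.sqrt(np.mean((np.asarray(mod) - np.asarray(obs)) ** 2)))

    def prediction_efficiency(obs, mod):
        """1 - MSE/variance(obs): 1 is perfect, 0 is no better than the observed mean."""
        obs, mod = np.asarray(obs), np.asarray(mod)
        return float(1.0 - np.mean((mod - obs) ** 2) / np.var(obs))

    t = np.linspace(0, 6 * np.pi, 200)
    obs = 50 * np.sin(t)                                      # synthetic observed perturbation
    mod = 45 * np.sin(t + 0.2) + np.random.default_rng(1).normal(0, 5, t.size)
    print(round(rmse(obs, mod), 1), round(prediction_efficiency(obs, mod), 2))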
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.
Greenroads : a sustainability performance metric for roadway design and construction.
DOT National Transportation Integrated Search
2009-11-01
Greenroads is a performance metric for quantifying sustainable practices associated with roadway design and construction. Sustainability is defined as having seven key components: ecology, equity, economy, extent, expectations, experience and exposur...
Performance metrics used by freight transport providers.
DOT National Transportation Integrated Search
2008-09-30
The newly-established National Cooperative Freight Research Program (NCFRP) has allocated $300,000 in funding to a project entitled Performance Metrics for Freight Transportation (NCFRP 03). The project is scheduled for completion in September ...
NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.
2014-09-01
This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work investigates multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
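The metrics named above have simple definitions; a minimal sketch with hypothetical runtimes and power draws (not the paper's measurements):

    def energy_metrics(runtime_s, avg_power_w, work_units):
        energy_j = avg_power_w * runtime_s          # total energy consumption
        return {
            "energy_J": energy_j,
            "perf_per_watt": (work_units / runtime_s) / avg_power_w,  # throughput per watt
            "energy_delay_product": energy_j * runtime_s,             # lower is better
        }

    # Hypothetical CPU vs GPU reconstruction of the same problem size.
    cpu = energy_metrics(runtime_s=1200.0, avg_power_w=180.0, work_units=1.0)
    gpu = energy_metrics(runtime_s=150.0, avg_power_w=260.0, work_units=1.0)
    print(cpu["energy_J"], gpu["energy_J"])   # 216000.0 vs 39000.0 joules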
Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra
2018-01-01
Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower limb muscles. Restoring the proper lower limbs muscles activation is a key factor to facilitate recovery of gait capacity and performance, and to reduce maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. The fine-tuning of the exoskeleton control system is crucial—it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity has been recorded from Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. EMG signals have been processed so to obtain muscles activation patterns. The mean muscular activation pattern derived from the controls cohort has been set as reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while changing the parameter settings. The Gait Metric index is calculated for each trial, where the closer the performance is to the normative muscular activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index is. The trial with the best Gait Metric index corresponds to the best parameters set. It has to be noted that the automatic computational calibration procedure is based on the same number of overground walking trials, and the same experimental set-up, as in the current manual calibration procedure. The proposed approach allows supporting the rehabilitation team in the setting procedure. It has been demonstrated to be robust, and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context. PMID:29615890
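The abstract does not give the Gait Metric formula; below is a minimal sketch of one plausible way to score closeness of a patient's EMG activation pattern to a normative pattern in both amplitude and timing. The scoring rule, waveforms, and weights are assumptions for illustration, not the authors' definition:

    import numpy as np

    def gait_metric_index(patient, reference):
        """Score combining waveform correlation (timing/shape) and relative
        amplitude agreement over one normalized gait cycle; higher is closer."""
        patient, reference = np.asarray(patient, float), np.asarray(reference, float)
        shape = np.corrcoef(patient, reference)[0, 1]                  # timing/shape similarity
        amp = 1.0 - abs(patient.mean() - reference.mean()) / reference.mean()
        return max(0.0, 0.5 * (shape + amp))

    cycle = np.linspace(0, 1, 101)
    reference = np.exp(-((cycle - 0.6) ** 2) / 0.01)                   # normative activation burst
    trial_a = 0.9 * np.exp(-((cycle - 0.62) ** 2) / 0.01)              # close to reference
    trial_b = 0.5 * np.exp(-((cycle - 0.3) ** 2) / 0.01)               # mistimed, weak burst
    print(round(gait_metric_index(trial_a, reference), 2),
          round(gait_metric_index(trial_b, reference), 2))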
Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Mavris, Dimitri N.
2006-01-01
An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
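A minimal sketch of the RSE-plus-Monte-Carlo idea described above; the epnl function, design variables, ranges, and input distributions are invented placeholders, not the study's models:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "high-fidelity" noise model: a noise level as a function of two design variables.
    def epnl(bypass_ratio, fan_diameter):
        return 95 - 1.8 * bypass_ratio + 0.9 * fan_diameter + 0.05 * bypass_ratio * fan_diameter

    # Design of experiments: sample the design space and fit a quadratic response surface.
    X = rng.uniform([5.0, 2.0], [12.0, 3.5], size=(50, 2))
    y = np.array([epnl(b, d) for b, d in X])
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Monte Carlo on the RSE: propagate uncertainty in the design variables.
    samples = rng.normal([9.0, 2.8], [0.5, 0.1], size=(10_000, 2))
    S = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1],
                         samples[:, 0] ** 2, samples[:, 1] ** 2, samples[:, 0] * samples[:, 1]])
    predictions = S @ coeffs
    print(round(predictions.mean(), 2), round(predictions.std(), 2))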
Screening and Evaluation Tool (SET) Users Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincock, Layne
This document is the user's guide for the Screening and Evaluation Tool (SET). SET is a tool for comparing multiple fuel cycle options against a common set of criteria and metrics. It does this using standard multi-attribute utility decision analysis methods.
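Multi-attribute utility scoring of the kind referred to above can be sketched as a weighted sum; the criteria names, weights, and option scores below are invented placeholders and not SET's actual metrics:

    # Hypothetical criteria weights (summing to 1) and per-option scores on a 0-1 utility scale.
    weights = {"waste_mass": 0.4, "resource_use": 0.3, "cost": 0.3}
    options = {
        "once_through": {"waste_mass": 0.3, "resource_use": 0.4, "cost": 0.9},
        "full_recycle": {"waste_mass": 0.8, "resource_use": 0.9, "cost": 0.4},
    }

    def overall_utility(scores, weights):
        return sum(weights[c] * scores[c] for c in weights)

    ranked = sorted(options, key=lambda o: overall_utility(options[o], weights), reverse=True)
    for name in ranked:
        print(name, round(overall_utility(options[name], weights), 2))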
Gish, Ryan
2002-08-01
Strategic triggers and metrics help healthcare providers achieve financial success. Metrics help assess progress toward long-term goals. Triggers signal market changes requiring a change in strategy. Not all metrics move in concert. Organizations need to identify indicators and monitor performance.
Cognitive context detection in UAS operators using eye-gaze patterns on computer screens
NASA Astrophysics Data System (ADS)
Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph
2016-05-01
In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
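A minimal sketch of the screen-cell idea described above: bin gaze samples into a grid of cells and derive simple dwell-based quantities per cell. The grid size, screen resolution, and synthetic gaze data are assumptions for illustration only:

    import numpy as np

    def cell_dwell_fractions(gaze_xy, screen=(1920, 1080), grid=(4, 3)):
        """Fraction of gaze samples falling in each screen cell (rows x cols)."""
        gaze = np.asarray(gaze_xy, float)
        cols = np.clip((gaze[:, 0] / screen[0] * grid[0]).astype(int), 0, grid[0] - 1)
        rows = np.clip((gaze[:, 1] / screen[1] * grid[1]).astype(int), 0, grid[1] - 1)
        counts = np.zeros((grid[1], grid[0]))
        for r, c in zip(rows, cols):
            counts[r, c] += 1
        return counts / len(gaze)

    rng = np.random.default_rng(0)
    # Synthetic gaze samples concentrated on a left-hand GUI, with occasional looks to the right.
    left = rng.normal([480, 540], [150, 120], size=(800, 2))
    right = rng.normal([1440, 540], [150, 120], size=(200, 2))
    print(cell_dwell_fractions(np.vstack([left, right])).round(2))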
Life cycle assessment of Chinese shrimp farming systems targeted for export and domestic sales.
Cao, Ling; Diana, James S; Keoleian, Gregory A; Lai, Qiuming
2011-08-01
We conducted surveys of six hatcheries and 18 farms for data inputs to complete a cradle-to-farm-gate life cycle assessment (LCA) to evaluate the environmental performance for intensive (for export markets in Chicago) and semi-intensive (for domestic markets in Shanghai) shrimp farming systems in Hainan Province, China. The relative contribution to overall environmental performance of processing and distribution to final markets were also evaluated from a cradle-to-destination-port perspective. Environmental impact categories included global warming, acidification, eutrophication, cumulative energy use, and biotic resource use. Our results indicated that intensive farming had significantly higher environmental impacts per unit production than semi-intensive farming in all impact categories. The grow-out stage contributed between 96.4% and 99.6% of the cradle-to-farm-gate impacts. These impacts were mainly caused by feed production, electricity use, and farm-level effluents. By averaging over intensive (15%) and semi-intensive (85%) farming systems, 1 metric ton (t) live-weight of shrimp production in China required 38.3 ± 4.3 GJ of energy, as well as 40.4 ± 1.7 t of net primary productivity, and generated 23.1 ± 2.6 kg of SO2 equiv, 36.9 ± 4.3 kg of PO4 equiv, and 3.1 ± 0.4 t of CO2 equiv. Processing made a higher contribution to cradle-to-destination-port impacts than distribution of processed shrimp from farm gate to final markets in both supply chains. In 2008, the estimated total electricity consumption, energy consumption, and greenhouse gas emissions from Chinese white-leg shrimp production would be 1.1 billion kW·h, 49 million GJ, and 4 million metric tons, respectively. Improvements suggested for Chinese shrimp aquaculture include changes in feed composition, farm management, electricity-generating sources, and effluent treatment before discharge. Our results can be used to optimize market-oriented shrimp supply chains and promote more sustainable shrimp production and consumption.
Foul tip impact attenuation of baseball catcher masks using head impact metrics
White, Terrance R.; Cutcliffe, Hattie C.; Shridharani, Jay K.; Wood, Garrett W.; Bass, Cameron R.
2018-01-01
Currently, no scientific consensus exists on the relative safety of catcher mask styles and materials. Due to differences in mass and material properties, the style and material of a catcher mask influence the impact metrics observed during simulated foul ball impacts. The catcher surrogate was a Hybrid III head and neck equipped with a six degree of freedom sensor package to obtain linear accelerations and angular rates. Four mask styles were impacted using an air cannon for six 30 m/s and six 35 m/s impacts to the nasion. To quantify impact severity, the metrics peak linear acceleration, peak angular acceleration, Head Injury Criterion, Head Impact Power, and Gadd Severity Index were used. An Analysis of Covariance and a Tukey's HSD test were conducted to compare the least squares mean between masks for each head injury metric. For each injury metric, a P-value less than 0.05 was found, indicating a significant difference in mask performance. Tukey's HSD test found that, for each metric, the traditional-style titanium mask fell in the lowest performance category while the hockey-style mask was in the highest performance category. Limitations of this study prevented a direct correlation between mask testing performance and mild traumatic brain injury. PMID:29856814
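Of the metrics listed, the Head Injury Criterion has a standard closed-form definition; a brute-force sketch over a sampled linear-acceleration trace follows, using a synthetic impact pulse rather than the study's data:

    import numpy as np

    def head_injury_criterion(accel_g, dt, max_window_s=0.015):
        """HIC = max over windows of (t2 - t1) * (mean accel in g over [t1, t2]) ** 2.5."""
        n = len(accel_g)
        cumulative = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral of a(t)
        best = 0.0
        max_steps = int(max_window_s / dt)
        for i in range(n):
            for j in range(i + 1, min(n + 1, i + 1 + max_steps)):
                duration = (j - i) * dt
                mean_a = (cumulative[j] - cumulative[i]) / duration
                best = max(best, duration * mean_a ** 2.5)
        return best

    # Synthetic half-sine impact pulse: 120 g peak, 6 ms duration, sampled at 10 kHz.
    dt = 1e-4
    t = np.arange(0, 0.006, dt)
    pulse = 120 * np.sin(np.pi * t / 0.006)
    print(round(head_injury_criterion(pulse, dt), 0))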
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
Caverzagie, Kelly J; Lane, Susan W; Sharma, Niraj; Donnelly, John; Jaeger, Jeffrey R; Laird-Fick, Heather; Moriarty, John P; Moyer, Darilyn V; Wallach, Sara L; Wardrop, Richard M; Steinmann, Alwin F
2017-12-12
Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.
The importance of metrics for evaluating scientific performance
NASA Astrophysics Data System (ADS)
Miyakawa, Tsuyoshi
Evaluation of scientific performance is a major factor that determines the behavior of both individual researchers and the academic institutes to which they belong. Because the number of researchers heavily outweighs the number of available research posts, and competitive funding accounts for an ever-increasing proportion of research budgets, objective indicators of research performance have gained recognition for increasing transparency and openness. It is common practice to use metrics and indices to evaluate a researcher's performance or the quality of their grant applications. Such measures include the number of publications, the number of times these papers are cited and, more recently, the h-index, which measures the number of highly-cited papers the researcher has written. However, academic institutions and funding agencies in Japan have been rather slow to adopt such metrics. In this article, I will outline some of the currently available metrics, and discuss why we need to use such objective indicators of research performance more often in Japan. I will also discuss how to promote the use of metrics and what we should keep in mind when using them, as well as their potential impact on the research community in Japan.
Metrics for Offline Evaluation of Prognostic Performance
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2010-01-01
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, and so on. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.
Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio
2017-08-01
This paper evaluates the performance of first-generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution from these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises these metrics and assesses their robustness against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and the noise and muscular artifacts are the most confounding factors. On the contrary, there is a wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
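For reference, a compact, unoptimised Sample Entropy implementation of the kind evaluated in the paper, using the usual parameters m and r; this follows the standard textbook definition and is not the authors' code:

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """SampEn(m, r) = -ln(A/B), with r = r_factor * std(x); self-matches excluded."""
        x = np.asarray(x, float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(length):
            # Use the same number of templates (n - m) for lengths m and m + 1.
            templates = np.array([x[i:i + length] for i in range(n - m)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
                count += int(np.sum(dist <= r)) - 1                      # exclude the self-match
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        return float("inf") if a == 0 or b == 0 else -np.log(a / b)

    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))
    noisy = regular + rng.normal(0, 0.5, 500)
    print(round(sample_entropy(regular), 3), round(sample_entropy(noisy), 3))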
NASA Astrophysics Data System (ADS)
Baranowski, D.; Waliser, D. E.; Jiang, X.
2016-12-01
One of the key challenges in subseasonal weather forecasting is the fidelity in representing the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC). In reality both propagating and non-propagating MJO events are observed, but in numerical forecasts the latter group largely dominates. For this study, comprehensive model performances are evaluated using metrics that utilize the mean precipitation pattern and the amplitude and phase of the diurnal cycle, with a particular focus on the linkage between a model's local MC variability and its fidelity in representing propagation of the MJO and equatorial Kelvin waves across the MC. Subseasonal to seasonal variability of mean precipitation and its diurnal cycle in 20-year-long climate simulations from over 20 general circulation models (GCMs) is examined to benchmark model performance. Our results show that many models struggle to represent the precipitation pattern over complex Maritime Continent terrain. Many models show negative biases of mean precipitation and amplitude of its diurnal cycle; these biases are often larger over land than over ocean. Furthermore, only a handful of models realistically represent the spatial variability of the phase of the diurnal cycle of precipitation. Models tend to correctly simulate the timing of the diurnal maximum of precipitation over ocean during local solar time morning, but fail to capture the influence of the land, with the timing of the maximum of precipitation there occurring, unrealistically, at the same time as over ocean. The day-to-day and seasonal variability of the mean precipitation follows observed patterns, but is often unrealistic for the diurnal cycle amplitude. The intraseasonal variability of the amplitude of the diurnal cycle of precipitation is mainly driven by a model's ability (or lack thereof) to produce an eastward-propagating MJO-like signal. Our results show that many models tend to decrease apparent air-sea contrast in the mean precipitation and diurnal cycle of precipitation patterns over the Maritime Continent. As a result, the complexity of those patterns is heavily smoothed, to such an extent in some models that the Maritime Continent's features and imprint are almost unrecognizable relative to the eastern Indian Ocean or Western Pacific.
Guidelines for evaluating performance of oyster habitat restoration
Baggett, Lesley P.; Powers, Sean P.; Brumbaugh, Robert D.; Coen, Loren D.; DeAngelis, Bryan M.; Greene, Jennifer K.; Hancock, Boze T.; Morlock, Summer M.; Allen, Brian L.; Breitburg, Denise L.; Bushek, David; Grabowski, Jonathan H.; Grizzle, Raymond E.; Grosholz, Edwin D.; LaPeyre, Megan K.; Luckenbach, Mark W.; McGraw, Kay A.; Piehler, Michael F.; Westby, Stephanie R.; zu Ermgassen, Philine S. E.
2015-01-01
Restoration of degraded ecosystems is an important societal goal, yet inadequate monitoring and the absence of clear performance metrics are common criticisms of many habitat restoration projects. Funding limitations can prevent adequate monitoring, but we suggest that the lack of accepted metrics to address the diversity of restoration objectives also presents a serious challenge to the monitoring of restoration projects. A working group with experience in designing and monitoring oyster reef projects was used to develop standardized monitoring metrics, units, and performance criteria that would allow for comparison among restoration sites and projects of various construction types. A set of four universal metrics (reef areal dimensions, reef height, oyster density, and oyster size–frequency distribution) and a set of three universal environmental variables (water temperature, salinity, and dissolved oxygen) are recommended to be monitored for all oyster habitat restoration projects regardless of their goal(s). In addition, restoration goal-based metrics specific to four commonly cited ecosystem service-based restoration goals are recommended, along with an optional set of seven supplemental ancillary metrics that could provide information useful to the interpretation of prerestoration and postrestoration monitoring data. Widespread adoption of a common set of metrics with standardized techniques and units to assess well-defined goals not only allows practitioners to gauge the performance of their own projects but also allows for comparison among projects, which is both essential to the advancement of the field of oyster restoration and can provide new knowledge about the structure and ecological function of oyster reef ecosystems.
Modeling the imprint of Milankovitch cycles on early Pleistocene ice volume
NASA Astrophysics Data System (ADS)
Roychowdhury, R.; DeConto, R.; Pollard, D.
2017-12-01
Global climate during the Quaternary and Late Pliocene (present-3.1 Ma) is characterized by alternating glacial and interglacial conditions. Several proposed theories associate these cycles with variations in the Earth's orbital configuration. In this study, we attempt to address the anomalously strong obliquity forcing in the Late Pliocene/Early Pleistocene ice volume records (the 41 kyr world), which stands in sharp contrast to the primary cyclicity of insolation, which is at precessional periods (23 kyr). Model results from GCM simulations show that at low eccentricities (e<0.015), the effect of precession is minimal, and the integrated insolation metrics (such as summer metric, PDD, etc.) vary in-phase between the two hemispheres. At higher eccentricities (e>0.015), the precessional response is important, and the insolation metrics vary out-of-phase between the two hemispheres. Using simulations from a GCM-driven ice sheet model, we simulate time-continuous ice volume changes from the Northern and Southern Hemispheres. Under eccentricities lower than 0.015, ice sheets in both hemispheres respond only to the obliquity cycle, and grow and melt together (in-phase). If the ice sheet is simulated with eccentricity higher than 0.015, both hemispheres become more sensitive to precessional variation, and vary out-of-phase with each other, which is consistent with proxy observations from the late Pleistocene glaciations. We use the simulated ice volumes from 2.0 to 1.0 Ma to empirically calculate global benthic δ18O variations based on the assumption that relationships between collapse and growth of ice sheets and sea level are linear and symmetric and that the isotopic signature of the individual ice sheets has not changed with time. Our modeled global benthic δ18O values are broadly consistent with paleoclimate proxy records such as the LR04 stack.
Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...
Engineering performance metrics
NASA Astrophysics Data System (ADS)
Delozier, R.; Snyder, N.
1993-03-01
Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.
An Ejector Air Intake Design Method for a Novel Rocket-Based Combined-Cycle Rocket Nozzle
NASA Astrophysics Data System (ADS)
Waung, Timothy S.
Rocket-based combined-cycle (RBCC) vehicles have the potential to reduce launch costs through the use of several different air breathing engine cycles, which reduce fuel consumption. The rocket-ejector cycle, in which air is entrained into an ejector section by the rocket exhaust, is used at flight speeds below Mach 2. This thesis develops a design method for an air intake geometry around a novel RBCC rocket nozzle design for the rocket-ejector engine cycle. This design method consists of a geometry creation step in which a three-dimensional intake geometry is generated, and a simple flow analysis step which predicts the air intake mass flow rate. The air intake geometry is created using the rocket nozzle geometry and eight primary input parameters. The input parameters are selected to give the user significant control over the air intake shape. The flow analysis step uses an inviscid panel method and an integral boundary layer method to estimate the air mass flow rate through the intake geometry. Intake mass flow rate is used as a performance metric since it directly affects the amount of thrust a rocket-ejector can produce. The design method results for the air intake operating at several different points along the subsonic portion of the Ariane 4 flight profile are found to under predict mass flow rate by up to 8.6% when compared to three-dimensional computational fluid dynamics simulations for the same air intake.
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
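Based on the description above, SPAEF combines correlation, a coefficient-of-variation term, and histogram overlap; the sketch below shows such a three-component score applied to two 2-D fields. Details such as the z-scoring of histograms follow the published SPAEF papers as I understand them, so treat this as an illustrative reconstruction rather than the reference implementation:

    import numpy as np

    def spaef(obs, sim, bins=100):
        """SPAEF = 1 - sqrt((alpha - 1)^2 + (beta - 1)^2 + (gamma - 1)^2)."""
        obs, sim = np.ravel(obs).astype(float), np.ravel(sim).astype(float)
        alpha = np.corrcoef(obs, sim)[0, 1]                              # correlation
        beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())       # coefficient-of-variation ratio
        # Histogram overlap of z-scored fields (unit-insensitive component).
        z_obs = (obs - obs.mean()) / obs.std()
        z_sim = (sim - sim.mean()) / sim.std()
        edges = np.histogram_bin_edges(np.concatenate([z_obs, z_sim]), bins=bins)
        h_obs, _ = np.histogram(z_obs, bins=edges)
        h_sim, _ = np.histogram(z_sim, bins=edges)
        gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
        return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

    rng = np.random.default_rng(0)
    observed = rng.gamma(2.0, 2.0, size=(50, 50))                # e.g. a remotely sensed pattern
    simulated = observed * 1.1 + rng.normal(0, 0.5, (50, 50))    # a biased, noisy model field
    print(round(spaef(observed, simulated), 3))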
GPS Device Testing Based on User Performance Metrics
DOT National Transportation Integrated Search
2015-10-02
1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs
A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC
Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.; ...
2017-09-20
Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is significantly important. We investigate the application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic application behavior on parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction. Also, on average a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, that illustrate a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37-42, 2011). Finally, our results suggest that variable cache line size can result in better performance and can also conserve power.
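As a rough illustration of the cache-utilization idea (what fraction of a line's bytes are touched before eviction), a toy direct-mapped cache model over a synthetic address trace is sketched below; it is purely illustrative and unrelated to the paper's simulator built on the Structural Simulation Toolkit:

    # Toy direct-mapped cache: track which bytes of each resident line are touched,
    # and record per-line utilization when the line is evicted.
    LINE_SIZE = 64          # bytes per line
    NUM_SETS = 256          # direct-mapped: one line per set

    resident = {}           # set index -> (tag, set of touched byte offsets)
    utilizations = []

    def access(address, size=4):
        set_idx = (address // LINE_SIZE) % NUM_SETS
        tag = address // (LINE_SIZE * NUM_SETS)
        offset = address % LINE_SIZE
        line = resident.get(set_idx)
        if line is None or line[0] != tag:
            if line is not None:                       # eviction: record line utilization
                utilizations.append(len(line[1]) / LINE_SIZE)
            line = (tag, set())
            resident[set_idx] = line
        line[1].update(range(offset, min(offset + size, LINE_SIZE)))

    # Synthetic trace: strided 4-byte accesses that touch only part of each line.
    for addr in range(0, 1 << 20, 16):
        access(addr, size=4)

    print(round(sum(utilizations) / len(utilizations), 2))   # mean fraction of bytes used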
Rapid Response Risk Assessment in New Project Development
NASA Technical Reports Server (NTRS)
Graber, Robert R.
2010-01-01
A capability for rapidly performing quantitative risk assessments has been developed by JSC Safety and Mission Assurance for use on project design trade studies early in the project life cycle, i.e., concept development through preliminary design phases. A risk assessment tool set has been developed consisting of interactive and integrated software modules that allow a user/project designer to assess the impact of alternative design or programmatic options on the probability of mission success or other risk metrics. The risk and design trade space includes interactive options for selecting parameters and/or metrics for numerous design characteristics including component reliability characteristics, functional redundancy levels, item or system technology readiness levels, and mission event characteristics. This capability is intended for use on any project or system development with a defined mission, and an example project is used for demonstration and descriptive purposes, e.g., landing a robot on the moon. The effects of various alternative design considerations and the impact of these decisions on mission success (or failure) can be measured in real time on a personal computer. This capability provides a high degree of efficiency for quickly providing information in NASA's evolving risk-based decision environment.
NASA Astrophysics Data System (ADS)
Rovinelli, Andrea; Guilhem, Yoann; Proudhon, Henry; Lebensohn, Ricardo A.; Ludwig, Wolfgang; Sangid, Michael D.
2017-06-01
Microstructurally small cracks exhibit large variability in their fatigue crack growth rate. It is accepted that the inherent variability in microstructural features is related to the uncertainty in the growth rate. However, due to (i) the lack of cycle-by-cycle experimental data, (ii) the complexity of the short crack growth phenomenon, and (iii) the incomplete physics of constitutive relationships, only empirical damage metrics have been postulated to describe the short crack driving force metric (SCDFM) at the mesoscale level. The identification of the SCDFM of polycrystalline engineering alloys is a critical need, in order to achieve more reliable fatigue life prediction and improve material design. In this work, the first steps in the development of a general probabilistic framework are presented, which uses experimental result as an input, retrieves missing experimental data through crystal plasticity (CP) simulations, and extracts correlations utilizing machine learning and Bayesian networks (BNs). More precisely, experimental results representing cycle-by-cycle data of a short crack growing through a beta-metastable titanium alloy, VST-55531, have been acquired via phase and diffraction contrast tomography. These results serve as an input for FFT-based CP simulations, which provide the micromechanical fields influenced by the presence of the crack, complementing the information available from the experiment. In order to assess the correlation between postulated SCDFM and experimental observations, the data is mined and analyzed utilizing BNs. Results show the ability of the framework to autonomously capture relevant correlations and the equivalence in the prediction capability of different postulated SCDFMs for the high cycle fatigue regime.
The Need for Integrating the Back End of the Nuclear Fuel Cycle in the United States of America
Bonano, Evaristo J.; Kalinina, Elena A.; Swift, Peter N.
2018-02-26
Current practice for commercial spent nuclear fuel management in the United States of America (US) includes storage of spent fuel in both pools and dry storage cask systems at nuclear power plants. Most storage pools are filled to their operational capacity, and management of the approximately 2,200 metric tons of spent fuel newly discharged each year requires transferring older and cooler fuel from pools into dry storage. In the absence of a repository that can accept spent fuel for permanent disposal, projections indicate that the US will have approximately 134,000 metric tons of spent fuel in dry storage by mid-century when the last plants in the current reactor fleet are decommissioned. Current designs for storage systems rely on large dual-purpose (storage and transportation) canisters that are not optimized for disposal. Various options exist in the US for improving integration of management practices across the entire back end of the nuclear fuel cycle.
Relevance of motion-related assessment metrics in laparoscopic surgery.
Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J
2013-06-01
Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
NASA Astrophysics Data System (ADS)
Marshalkin, V. Ye.; Povyshev, V. M.
2017-12-01
It is shown for a closed thorium-uranium-plutonium fuel cycle that, upon processing of one metric ton of irradiated fuel after each four-year campaign, the radioactive wastes contain 54 kg of fission products, 0.8 kg of thorium, 0.10 kg of uranium isotopes, 0.005 kg of plutonium isotopes, 0.002 kg of neptunium, and "trace" amounts of americium and curium isotopes. This qualitatively simplifies the handling of high-level wastes in nuclear power engineering.
ERIC Educational Resources Information Center
Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David
2017-01-01
Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among other metrics, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…
JPDO Portfolio Analysis of NextGen
2009-09-01
runways. C. Metrics: The JPDO Interagency Portfolio & Systems Analysis (IPSA) division continues to coordinate, develop, and refine the metrics and... targets associated with the NextGen initiatives with the partner agencies & stakeholder communities. IPSA has formulated a set of top-level metrics as... metrics are calculated from system performance measures that constitute outputs of the IPSA...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald Boring; Roger Lew; Thomas Ulrich
2014-03-01
As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate the operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what is satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
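For readers who want to experiment with the deviation-based statistics recommended above, the following is a minimal sketch (not taken from the paper) of log-space bias and mean absolute error for chlorophyll-a matchups; the matchup values are hypothetical placeholders.

import numpy as np

def log_space_errors(satellite_chl, in_situ_chl):
    """Bias and mean absolute error (MAE) computed on log10-transformed
    chlorophyll-a and back-transformed, so both read as multiplicative
    factors (1.0 = perfect agreement)."""
    ratio = np.log10(np.asarray(satellite_chl)) - np.log10(np.asarray(in_situ_chl))
    bias = 10 ** np.mean(ratio)          # systematic over/underestimation factor
    mae = 10 ** np.mean(np.abs(ratio))   # typical multiplicative error
    return bias, mae

# Hypothetical matchup values (mg m^-3); real validation would use SeaWiFS matchups.
sat = [0.12, 0.45, 1.8, 3.2, 0.9]
obs = [0.10, 0.50, 1.5, 4.0, 1.1]
print(log_space_errors(sat, obs))

Because both quantities are back-transformed from log10 space, they are robust to the skewed, outlier-prone distributions that make squared-error statistics misleading for chlorophyll-a.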
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have lead researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment assuming workstation processes have preemptive priority over parallel tasks is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that task ratio is a useful metric for determining how large the demand of a parallel applications must be in order to make efficient use of a non-dedicated distributed system.
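The abstract defines the task ratio only in words; the sketch below is one plausible reading of that definition, with hypothetical demand values, not the authors' exact formulation.

def task_ratio(parallel_task_demand, mean_owner_process_demand):
    """Assumed form of the 'task ratio' metric: service demand of one parallel
    task relative to the mean service demand of the non-parallel processes run
    by workstation owners, which have preemptive priority."""
    return parallel_task_demand / mean_owner_process_demand

# Hypothetical demands in CPU-seconds; a larger ratio suggests the parallel job
# is big enough to tolerate owner interference on a non-dedicated cluster.
print(task_ratio(parallel_task_demand=30.0, mean_owner_process_demand=2.5))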
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Venkat; Das, Trishna
Increasing variable generation penetration and the consequent increase in short-term variability makes energy storage technologies look attractive, especially in the ancillary market for providing frequency regulation services. This paper presents a slow dynamics model for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also represents the slow dynamics model of the power system integrated with storage technologies in a complete state space form. The storage technologies have been integrated into the IEEE 24-bus system with a single area, and a comparative study of various solution strategies, including transmission enhancement and a combustion turbine, has been performed in terms of generation cycling and frequency response performance metrics.
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
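As a hedged illustration of the composition- and configuration-based metrics named above, the sketch below computes number of patches and edge density for a binary land-cover raster with standard SciPy tools; the rasters and cell size are hypothetical placeholders, not GEOMOD outputs.

import numpy as np
from scipy import ndimage

def patch_metrics(binary_map, cell_size=30.0):
    """Two simple landscape metrics for a binary (developed / not developed) map:
    number of patches (8-connected) and edge density (m of edge per hectare)."""
    _, n_patches = ndimage.label(binary_map, structure=np.ones((3, 3)))
    # Count cell faces where a developed cell borders a non-developed cell.
    horiz = np.sum(binary_map[:, 1:] != binary_map[:, :-1])
    vert = np.sum(binary_map[1:, :] != binary_map[:-1, :])
    edge_length_m = (horiz + vert) * cell_size
    area_ha = binary_map.size * cell_size ** 2 / 10_000.0
    return n_patches, edge_length_m / area_ha

# Hypothetical 'observed' vs 'simulated' urban maps; a real comparison would use
# the ground-truth and model-output rasters for the same date.
rng = np.random.default_rng(0)
observed = (rng.random((100, 100)) > 0.7).astype(int)
simulated = ndimage.binary_dilation(rng.random((100, 100)) > 0.8).astype(int)
print(patch_metrics(observed), patch_metrics(simulated))

Comparing such pairs of values for observed versus simulated maps is what reveals the "more compact, fewer patches" bias described in the abstract, which a Kappa index alone would not show.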
New Screening Test Developed for the Blanching Resistance of Copper Alloys
NASA Technical Reports Server (NTRS)
Thomas-Ogbuji, Linus U.
2004-01-01
NASA's extensive efforts towards more efficient, safer, and more affordable space transportation include the development of new thrust-cell liner materials with improved capabilities and longer lives. For rocket engines fueled with liquid hydrogen, an important metric of liner performance is resistance to blanching, a phenomenon of localized wastage by cycles of oxidation-reduction due to local imbalance in the oxygen-fuel ratio. The current liner of the Space Shuttle Main Engine combustion chamber, a Cu-3Ag-0.5Zr alloy (NARloy-Z) is degraded in service by blanching. Heretofore, evaluating a liner material for blanching resistance involved elaborate and expensive hot-fire tests performed on rocket test stands. To simplify that evaluation, researchers at the NASA Glenn Research Center developed a screening test that uses simple, in situ oxidation-reduction cycling in a thermogravimetric analyzer (TGA). The principle behind this test is that resistance to oxidation or to the reduction of oxide, or both, implies resistance to blanching. Using this test as a preliminary tool to screen alloys for blanching resistance can improve reliability and save time and money. In this test a small polished coupon is hung in a TGA furnace at the desired (service) temperature. Oxidizing and reducing gases are introduced cyclically, in programmed amounts. Cycle durations are chosen by calibration, such that all copper oxides formed by oxidation are fully reduced in the next reduction interval. The sample weight is continuously acquired by the TGA as usual.
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model, and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling, and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC, and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity, and accuracy of these quality metrics can be observed.
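A minimal sketch of the three performance metrics named above (SROCC for monotonicity, PLCC for linearity, RMSE for accuracy), computed between predicted and subjective MOS with NumPy/SciPy; the scores below are hypothetical.

import numpy as np
from scipy import stats

def qoe_model_performance(predicted_mos, subjective_mos):
    """SROCC, PLCC, and RMSE of a quality model's predictions against subjective MOS."""
    predicted = np.asarray(predicted_mos, dtype=float)
    subjective = np.asarray(subjective_mos, dtype=float)
    srocc, _ = stats.spearmanr(predicted, subjective)      # rank correlation: monotonicity
    plcc = np.corrcoef(predicted, subjective)[0, 1]         # linear correlation: linearity
    rmse = np.sqrt(np.mean((predicted - subjective) ** 2))  # prediction accuracy
    return srocc, plcc, rmse

# Hypothetical scores for a handful of test sequences.
print(qoe_model_performance([3.1, 2.4, 4.0, 1.8, 3.6], [3.3, 2.0, 4.2, 1.5, 3.9]))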
Snow removal performance metrics : final report.
DOT National Transportation Integrated Search
2017-05-01
This document is the final report for the Clear Roads project entitled Snow Removal Performance Metrics. The project team was led by researchers at Washington State University on behalf of Clear Roads, an ongoing pooled fund research effort focused o...
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate, and expert groups based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F
2015-07-01
Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.
Life Cycle Inventory (LCI) Data-Treatment Chemicals ...
This report estimates environmental emission factors (EmF) for key chemicals, construction and treatment materials, transportation/on-site equipment, and other processes used at remediation sites. The basis for chemical, construction, and treatment material EmFs is life cycle inventory (LCI) data extracted from secondary data sources and compiled using the openLCA software package. The US EPA MOVES 2014 model was used to derive EmFs from combustion profiles for a number of transportation and on-site equipment processes. The EmFs were calculated for use in US EPA’s Spreadsheets for Environmental Footprint Analysis (SEFA). EmFs are reported for cumulative energy demand (CED), global warming potential (GWP), criteria pollutants (e.g. NOX, SOX, and PM10), hazardous air pollutants (HAPs), and water use. Since the USEPA launched its green remediation program, metrics such as impacts, outcomes, and environmental burdens of remediation actions have been difficult to assess. This research includes metrics to quantify RCRA and CERCLA remediation actions. Metrics include: greenhouse gases, energy demand, water use, SOX, NOX, PM10, and hazardous air pollutants. The primary user of this project is EPA's Region 9 Superfund and Technology Office for input into the SEFA tool. SEFA is a set of analytical workbooks used to quantify the environmental footprint of a site cleanup in order to achieve a greener cleanup. SEFA permits users to enter actual or anticipated data on site
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F
2015-07-10
Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25 million+ edges connecting the 60 thousand+ KOG sequences in half a minute using less than half a gigabyte of memory.
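The study's exact normalizations are only paraphrased above, so the sketch below shows approximate forms of the four edge weights with hypothetical hit values; the choice of the smaller self-hit as the BSR denominator and the "anchored length" argument are assumptions for illustration.

import math

def edge_weights(bit_score, evalue, self_bit_a, self_bit_b, anchored_len):
    """Approximate forms of four BLAST-based edge weights used for MCL clustering:
    raw bit score, bit score ratio (BSR), bit score over anchored length (BAL),
    and negative common log of the E-value (NLE)."""
    bsr = bit_score / min(self_bit_a, self_bit_b)      # normalize by a self-hit bit score
    bal = bit_score / anchored_len                      # normalize by the anchored alignment span
    nle = -math.log10(evalue) if evalue > 0 else 300.0  # cap when BLAST reports an E-value of 0
    return {"bit": bit_score, "BSR": bsr, "BAL": bal, "NLE": nle}

# Hypothetical hit between two KOG sequences.
print(edge_weights(bit_score=185.0, evalue=1e-48, self_bit_a=410.0,
                   self_bit_b=395.0, anchored_len=240))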
Data Standardization for Carbon Cycle Modeling: Lessons Learned
NASA Astrophysics Data System (ADS)
Wei, Y.; Liu, S.; Cook, R. B.; Post, W. M.; Huntzinger, D. N.; Schwalm, C.; Schaefer, K. M.; Jacobson, A. R.; Michalak, A. M.
2012-12-01
Terrestrial biogeochemistry modeling is a crucial component of carbon cycle research and provides unique capabilities to understand terrestrial ecosystems. The Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) aims to identify key differences in model formulation that drive observed differences in model predictions of biospheric carbon exchange. To do so, the MsTMIP framework provides standardized prescribed environmental driver data and a standard model protocol to facilitate comparisons of modeling results from nearly 30 teams. Model performance is then evaluated against a variety of carbon-cycle related observations (remote sensing, atmospheric, and flux tower-based observations) using quantitative performance measures and metrics in an integrated evaluation framework. As part of this effort, we have harmonized highly diverse and heterogeneous environmental driver data, model outputs, and observational benchmark data sets to facilitate use and analysis by the MsTMIP team. In this presentation, we will describe the lessons learned from this data-intensive carbon cycle research. The data harmonization activity itself can be made more efficient with the consideration of proper tools, version control, workflow management, and collaboration within the whole team. The adoption of on-demand and interoperable protocols (e.g. OPeNDAP and Open Geospatial Consortium) makes data visualization and distribution more flexible. Users can customize and download data in specific spatial extent, temporal period, and different resolutions. The effort to properly organize data in an open and standard format (e.g. Climate & Forecast compatible netCDF) allows the data to be analysed by a dispersed set of researchers more efficiently, and maximizes the longevity and utilization of the data. The lessons learned from this specific experience can benefit efforts by the broader community to leverage diverse data resources more efficiently in scientific research.
Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brent Dixon
Thirteen countries participated in the Collaborative Project GAINS “Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle”, which was the primary activity within the IAEA/INPRO Program Area B: “Global Vision on Sustainable Nuclear Energy” for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analyses results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently versus the sustainability indicators.
Model evaluation using a community benchmarking system for land surface models
NASA Astrophysics Data System (ADS)
Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.
2014-12-01
Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
Young, Laura K; Love, Gordon D; Smithson, Hannah E
2013-09-20
Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.
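A schematic sketch of the visual Strehl ratio in the OTF domain (VSOTF) is given below, assuming the aberrated and diffraction-limited OTFs and a neural contrast-sensitivity weighting have already been computed on a common spatial-frequency grid; the toy arrays are placeholders and do not reproduce the weighting or phase-sensitive variants proposed in the paper.

import numpy as np

def visual_strehl_otf(otf_aberrated, otf_diffraction_limited, csf_weights):
    """Visual Strehl ratio in the OTF domain: the CSF-weighted integral of the
    (real part of the) aberrated OTF divided by the same integral for the
    diffraction-limited eye. Inputs are 2-D arrays on the same frequency grid."""
    numerator = np.sum(csf_weights * np.real(otf_aberrated))
    denominator = np.sum(csf_weights * np.real(otf_diffraction_limited))
    return numerator / denominator

# Hypothetical toy arrays; real use would derive OTFs from measured wavefronts.
freq = np.linspace(-60, 60, 121)                 # cycles/degree
fx, fy = np.meshgrid(freq, freq)
radial = np.hypot(fx, fy)
csf = np.exp(-radial / 15.0)                     # crude stand-in for a neural CSF
otf_dl = np.clip(1 - radial / 60.0, 0, None)     # toy diffraction-limited OTF
otf_ab = otf_dl * np.exp(-radial / 30.0)         # toy aberrated OTF (reduced contrast)
print(visual_strehl_otf(otf_ab, otf_dl, csf))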
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas
2010-04-01
In this paper, Bayesian inference is applied to performance metrics definition of the important class of recent Homeland Security and defense systems called binary sensors, including both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metrics parameter of the binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach, within the broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
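A minimal sketch of the PPV calculation via Bayes' rule for a binary sensor; the sensitivity, specificity, and prevalence values below are hypothetical, not taken from the paper.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule for a binary sensor: probability that a positive alarm
    corresponds to a true threat, given the sensor's operating point and the
    prior probability (prevalence) of a threat being present."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screening scenario: even a good sensor yields a modest PPV
# when threats are rare (the classic base-rate effect in CONOPS analysis).
print(positive_predictive_value(sensitivity=0.95, specificity=0.99, prevalence=0.001))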
NASA Astrophysics Data System (ADS)
Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.
2017-12-01
Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.
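As an illustration of the agreement statistics reported above, here is a short sketch computing RMSE and mean relative bias between remote-sensing ET estimates and eddy covariance measurements; the daily values are hypothetical, not the study's data.

import numpy as np

def et_agreement(metric_et, ec_et):
    """RMSE and mean relative bias (%) of remote-sensing ET estimates against
    eddy covariance measurements; positive bias means overestimation."""
    metric_arr = np.asarray(metric_et, dtype=float)
    ec_arr = np.asarray(ec_et, dtype=float)
    rmse = np.sqrt(np.mean((metric_arr - ec_arr) ** 2))
    rel_bias = 100.0 * np.mean((metric_arr - ec_arr) / ec_arr)
    return rmse, rel_bias

# Hypothetical daily ET values (mm/day) for five Landsat overpass dates.
print(et_agreement([7.8, 8.1, 6.9, 7.2, 5.5], [8.2, 8.4, 7.3, 7.5, 5.8]))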
NASA Astrophysics Data System (ADS)
Marshak, William P.; Darkow, David J.; Wesler, Mary M.; Fix, Edward L.
2000-08-01
Computer-based display designers have more sensory modes and more dimensions within each sensory modality with which to encode information in a user interface than ever before. This elaboration of information presentation has made measuring display/format effectiveness and predicting display/format performance extremely difficult. A multivariate method has been devised which isolates critical information, physically measures its signal strength, and compares it with other elements of the display, which act like background noise. This Common Metric relates signal-to-noise ratios (SNRs) within each stimulus dimension, then combines SNRs among display modes, dimensions, and cognitive factors to predict display format effectiveness. Examples with their Common Metric assessment and validation in performance will be presented along with the derivation of the metric. Implications of the Common Metric in display design and evaluation will be discussed.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinarian students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience, or to the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
Loomba, Rohit S; Anderson, Robert H
2018-03-01
Impact factor has been used as a metric by which to gauge scientific journals for several years. Although it is meant to describe the performance of a journal overall, impact factor has also become a metric used to gauge individual performance. This has held true in the field of pediatric cardiology, where many divisions use the impact factor of the journals in which an individual has published to help determine the individual's academic achievement. This subsequently can affect the individual's promotion through the academic ranks. We review the purpose of impact factor and its strengths and weaknesses, discuss why impact factor is not a fair metric to apply to individuals, and offer alternative means by which to gauge individual performance for academic promotion. © 2018 Wiley Periodicals, Inc.
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
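One simple way to operationalize a rate metric that respects non-uniform symbol priors and systematic (non-random) errors is to estimate the mutual information between intended and selected symbols from a confusion matrix and convert it to bits per minute. This is a hedged sketch of the general idea, not the authors' exact formulation, and the counts below are hypothetical.

import numpy as np

def mutual_information_rate(joint_counts, seconds_per_selection):
    """Mutual information (bits/selection) between intended and selected symbols,
    estimated from a joint count matrix, then converted to bits/minute. Unlike a
    uniform-prior, random-error bit rate, this keeps the empirical symbol priors
    and any systematic error structure."""
    joint = np.asarray(joint_counts, dtype=float)
    joint /= joint.sum()
    p_intended = joint.sum(axis=1, keepdims=True)
    p_selected = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_intended * p_selected))
    mi_per_selection = np.nansum(terms)  # zero-probability cells contribute nothing
    return mi_per_selection * 60.0 / seconds_per_selection

# Hypothetical 3-symbol confusion counts (rows: intended, cols: selected).
counts = [[40, 5, 5], [4, 30, 6], [2, 3, 25]]
print(mutual_information_rate(counts, seconds_per_selection=10.0))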
On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics
DOT National Transportation Integrated Search
2016-04-12
This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor...
Iqbal, Sahar; Mustansar, Tazeen
2017-03-01
Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, improving quality by addressing errors after they are identified. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose, sigma metric analysis was done for analytes using the internal and external quality control as quality indicators. Results of the sigma metric analysis were used to identify gaps and the need for modification in the strategy of the laboratory quality control procedure. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein, and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered as 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin, and creatinine at both levels of control. For the rest of the analytes, the sigma metric was found to be <3. The lowest value of sigma was found for chloride (1.1) at L2. The highest value of sigma was found for creatinine (10.1) at L3. HDL was found to have the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, the application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.
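The sigma calculation itself is compact; a minimal sketch with hypothetical glucose QC values is shown below (TEa = total allowable error, CV = coefficient of variation, both expressed in percent).

def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Sigma metric for an analyte at one QC level: (TEa - |bias|) / CV.
    Values below 3 flag a process that needs tighter QC rules or method improvement."""
    return (tea_percent - abs(bias_percent)) / cv_percent

# Hypothetical glucose QC data: total allowable error 10%, bias 2%, CV 1.8%.
print(sigma_metric(tea_percent=10.0, bias_percent=2.0, cv_percent=1.8))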
Validation of a Quality Management Metric
2000-09-01
A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM, applying the QMM scores to provide feedback
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2017-01-01
This review brings rigorous life cycle cost (LCC) analysis into discussions about COTS program costs. We gather publicly available cost data, review the data for credibility, check for consistency among sources, and rigorously define and analyze specific cost metrics.
TRACI 2.0 - The Tool for the Reduction and Assessment of Chemical and other environmental Impacts
TRACI 2.0, the Tool for the Reduction and Assessment of Chemical and other environmental Impacts 2.0, has been expanded and developed for sustainability metrics, life cycle impact assessment, industrial ecology, and process design impact assessment for developing increasingly sus...
TRACI 2.1 (the Tool for the Reduction and Assessment of Chemical and other environmental Impacts) has been developed for sustainability metrics, life cycle impact assessment, industrial ecology, and process design impact assessment for developing increasingly sustainable products...
Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie
2018-05-01
In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models, and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than the conventional approaches that calculate metrics by area, and allows for the quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of the AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
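A hedged sketch of how the dynamic metrics could be read off a volume-phase curve is given below; the per-phase volumes are hypothetical and the flux definitions are one plausible reading of the abstract, not the authors' exact formulas.

import numpy as np

def laa_volume_metrics(volumes_ml):
    """Ejection fraction and filling/emptying fluxes derived from a 'volume-phase'
    curve of the left atrial appendage over one cardiac cycle (one volume per CT phase)."""
    volumes = np.asarray(volumes_ml, dtype=float)
    v_max, v_min = volumes.max(), volumes.min()
    ejection_fraction = (v_max - v_min) / v_max
    deltas = np.diff(volumes)
    filling_flux = deltas[deltas > 0].sum()    # total volume gained over the cycle
    emptying_flux = -deltas[deltas < 0].sum()  # total volume expelled over the cycle
    return ejection_fraction, filling_flux, emptying_flux

# Hypothetical LAA volumes (ml) across ten 4D-CT phases.
print(laa_volume_metrics([8.2, 9.5, 10.9, 11.4, 10.1, 8.0, 6.4, 5.9, 6.8, 7.6]))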
Bubble-Induced Color Doppler Feedback for Histotripsy Tissue Fractionation.
Miller, Ryan M; Zhang, Xi; Maxwell, Adam D; Cain, Charles A; Xu, Zhen
2016-03-01
Histotripsy therapy produces cavitating bubble clouds to increasingly fractionate and eventually liquefy tissue using high-intensity ultrasound pulses. Following cavitation generated by each pulse, coherent motion of the cavitation residual nuclei can be detected using metrics formed from ultrasound color Doppler acquisitions. In this paper, three experiments were performed to investigate the characteristics of this motion as real-time feedback on histotripsy tissue fractionation. In the first experiment, bubble-induced color Doppler (BCD) and particle image velocimetry (PIV) analysis monitored the residual cavitation nuclei in the treatment region in an agarose tissue phantom treated with two-cycle histotripsy pulses at pressures above 30 MPa using a 500-kHz transducer. Both BCD and PIV results showed brief chaotic motion of the residual nuclei followed by coherent motion first moving away from the transducer and then rebounding back. Velocity measurements from both PIV and BCD agreed well, showing a monotonic increase in rebound time up to a saturation point for increased therapy dose. In a second experiment, a thin layer of red blood cells (RBC) was added to the phantom to allow quantification of the fractionation of the RBC layer to compare with BCD metrics. A strong linear correlation was observed between the fractionation level and the time to BCD peak rebound velocity over histotripsy treatment. Finally, the correlation between BCD feedback and histotripsy tissue fractionation was validated in ex vivo porcine liver evaluated histologically. BCD metrics showed strong linear correlation with fractionation progression, suggesting that BCD provides useful quantitative real-time feedback on histotripsy treatment progression.
ERIC Educational Resources Information Center
Travis, James L., III
2014-01-01
This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…
Ocean acidification compromises a planktic calcifier with implications for global carbon cycling.
Davis, Catherine V; Rivest, Emily B; Hill, Tessa M; Gaylord, Brian; Russell, Ann D; Sanford, Eric
2017-05-22
Anthropogenically-forced changes in ocean chemistry at both the global and regional scale have the potential to negatively impact calcifying plankton, which play a key role in ecosystem functioning and marine carbon cycling. We cultured a globally important calcifying marine plankter (the foraminifer, Globigerina bulloides) under an ecologically relevant range of seawater pH (7.5 to 8.3 total scale). Multiple metrics of calcification and physiological performance varied with pH. At pH > 8.0, increased calcification occurred without a concomitant rise in respiration rates. However, as pH declined from 8.0 to 7.5, calcification and oxygen consumption both decreased, suggesting a reduced ability to precipitate shell material accompanied by metabolic depression. Repair of spines, important for both buoyancy and feeding, was also reduced at pH < 7.7. The dependence of calcification, respiration, and spine repair on seawater pH suggests that foraminifera will likely be challenged by future ocean conditions. Furthermore, the nature of these effects has the potential to actuate changes in vertical transport of organic and inorganic carbon, perturbing feedbacks to regional and global marine carbon cycling. The biological impacts of seawater pH have additional, important implications for the use of foraminifera as paleoceanographic indicators.
DOT National Transportation Integrated Search
2016-06-01
Traditional highway safety performance metrics have been largely based on fatal crashes and more recently serious injury crashes. In the near future however, there may be less severe motor vehicle crashes due to advances in driver assistance systems,...
Optimization of planar self-collimating photonic crystals.
Rumpf, Raymond C; Pazos, Javier J
2013-07-01
Self-collimation in photonic crystals has received a lot of attention in the literature, partly due to recent interest in silicon photonics, yet no performance metrics have been proposed. This paper proposes a figure of merit (FOM) for self-collimation and outlines a methodical approach for calculating it. Performance metrics include bandwidth, angular acceptance, strength, and an overall FOM. Two key contributions of this work include the performance metrics and identifying that the optimum frequency for self-collimation is not at the inflection point. The FOM is used to optimize a planar photonic crystal composed of a square array of cylinders. Conclusions are drawn about how the refractive indices and fill fraction of the lattice impact each of the performance metrics. The optimization is demonstrated by simulating two spatially variant self-collimating photonic crystals, where one has a high FOM and the other has a low FOM. This work gives optical designers tremendous insight into how to design and optimize robust self-collimating photonic crystals, which promises many applications in silicon photonics and integrated optics.
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
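The "simpler approach" described above can be illustrated with a short sketch: compute PSNR on perceptually encoded luminance rather than on raw HDR values. The encoding below is a simplified log stand-in for the PU transfer functions referenced in the literature, and the images and parameters are hypothetical.

```python
import numpy as np

def encode_luminance(lum_cd_m2, lum_min=0.005, lum_max=10000.0):
    """Map absolute luminance to a roughly perceptually uniform 0-255 scale.

    Simplified log encoding used only as a stand-in for the PU/PQ transfer
    functions; the display range limits are assumptions.
    """
    lum = np.clip(lum_cd_m2, lum_min, lum_max)
    enc = np.log10(lum / lum_min) / np.log10(lum_max / lum_min)
    return 255.0 * enc

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical HDR luminance maps (cd/m^2) for a reference and a compressed image.
rng = np.random.default_rng(0)
ref_lum = rng.uniform(0.01, 4000.0, size=(256, 256))
test_lum = ref_lum * rng.normal(1.0, 0.02, size=ref_lum.shape)  # mild compression error

print("PSNR on encoded luminance: %.2f dB"
      % psnr(encode_luminance(ref_lum), encode_luminance(test_lum)))
```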
DOT National Transportation Integrated Search
2013-10-01
In a congested urban street network the average traffic speed is an inadequate metric for measuring : speed changes that drivers can perceive from changes in traffic control strategies. : A driver oriented metric is needed. Stop frequency distrib...
Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty
Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.
2016-01-01
Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance. PMID:27152838
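A minimal sketch of the residual-based benchmarking idea, assuming a Poisson regression of one count metric (publications) on covariates fitted with statsmodels; the study's actual model specification, covariate set, and standardization are richer than shown, and all data here are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 437  # faculty members, as in the study

# Hypothetical covariates: academic age (years since Ph.D.) and % research appointment.
academic_age = rng.uniform(1, 40, n)
pct_research = rng.uniform(10, 90, n)

# Hypothetical publication counts generated from the covariates.
mu = np.exp(1.0 + 0.04 * academic_age + 0.01 * pct_research)
publications = rng.poisson(mu)

X = sm.add_constant(np.column_stack([academic_age, pct_research]))
fit = sm.GLM(publications, X, family=sm.families.Poisson()).fit()

# Standardized deviance residuals: performance after accounting for covariates.
std_dev_resid = fit.resid_deviance / np.std(fit.resid_deviance, ddof=1)
print("Top-decile cutoff of standardized residuals:", np.quantile(std_dev_resid, 0.9))
```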
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean
2015-12-01
Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
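Spectral arc length, one of the smoothness-based motion metrics named above, can be computed from a speed profile roughly as follows. This is a commonly published formulation used here only as a sketch; the study's exact parameters (sampling rate, frequency cutoff, thresholds) are assumptions, and the traces are synthetic.

```python
import numpy as np

def spectral_arc_length(speed, fs, f_cut=20.0, pad_to=4096):
    """Spectral arc length of a movement speed profile (more negative = less smooth)."""
    spec = np.abs(np.fft.rfft(speed, n=pad_to))
    freq = np.fft.rfftfreq(pad_to, d=1.0 / fs)
    sel = freq <= f_cut
    spec_n = spec[sel] / spec[0]      # magnitude spectrum normalized by its DC value
    freq_n = freq[sel] / f_cut        # frequency axis normalized to [0, 1]
    # Arc length of the normalized magnitude spectrum.
    return -np.sum(np.sqrt(np.diff(freq_n) ** 2 + np.diff(spec_n) ** 2))

# Hypothetical catheter-tip speed traces sampled at 100 Hz: a smooth movement
# versus the same movement with superimposed jitter (submovements).
fs = 100.0
t = np.arange(0, 3, 1 / fs)
smooth = np.exp(-((t - 1.5) ** 2) / 0.18)
jerky = smooth + 0.15 * np.sin(2 * np.pi * 9 * t) * (smooth > 0.05)

print("SAL smooth: %.2f, jerky: %.2f" % (spectral_arc_length(smooth, fs),
                                         spectral_arc_length(jerky, fs)))
```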
Constrained Metric Learning by Permutation Inducing Isometries.
Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle
2016-01-01
The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM) where it is desired to estimate Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
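To give a flavor of the prognostics-specific metrics discussed, here is a minimal sketch of an alpha-lambda-style accuracy check (is the RUL prediction within plus or minus alpha of the true RUL at a relative time lambda?). The exact definitions and parameter values used in the paper may differ, and the prediction series below is hypothetical.

```python
def alpha_lambda_pass(t, rul_predicted, t_start, t_eol, alpha=0.2, lam=0.5):
    """Alpha-lambda test: at time t_lambda = t_start + lam*(t_eol - t_start),
    is the predicted RUL within +/- alpha of the true RUL?

    A sketch of one member of the prognostics metrics family; the paper also
    discusses metrics such as prognostic horizon, relative accuracy, and convergence.
    """
    t_lambda = t_start + lam * (t_eol - t_start)
    true_rul = t_eol - t_lambda
    # Use the prediction made closest in time to t_lambda.
    idx = min(range(len(t)), key=lambda i: abs(t[i] - t_lambda))
    return abs(rul_predicted[idx] - true_rul) <= alpha * true_rul

# Hypothetical RUL predictions made every 10 hours for a unit that starts
# degrading at t=0 and fails at t=100.
times = [0, 10, 20, 30, 40, 50, 60]
rul_preds = [115, 97, 83, 66, 58, 47, 41]
print(alpha_lambda_pass(times, rul_preds, t_start=0, t_eol=100))
```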
Strategic management system in a healthcare setting--moving from strategy to results.
Devitt, Rob; Klassen, Wolf; Martalog, Julian
2005-01-01
One of the historical challenges in the healthcare system has been the identification and collection of meaningful data to measure an organization's progress towards the achievement of its strategic goals and the concurrent alignment of internal operating practices with this strategy. Over the last 18 months the Toronto East General Hospital (TEGH) has adopted a strategic management system and organizing framework that has led to a metric-based strategic plan. It has allowed for formal and measurable linkages across a full range of internal business processes, from the annual operating plan to resource allocation decisions, to the balanced scorecard and individual performance evaluations. The Strategic Management System (SMS) aligns organizational planning and performance measurement, facilitates an appropriate balance between organizational priorities and resolving "local" problems, and encourages behaviours that are consistent with the values upon which the organization is built. The TEGH Accountability Framework serves as the foundation for the entire system. A key tool of the system is the rolling three-year strategic plan for the organization that sets out specific annual improvement targets on a number of key strategic measures. Individual program/department plans with corresponding measures ensure that the entire organization is moving forward strategically. Each year, all plans are reviewed, with course adjustments made to reflect changes in the hospital's environment and with re-calibration of performance targets for the next three years to ensure continued improvement and organizational progress. This system has been used through one annual business cycle. Results from the past year show measurable success. The hospital has improved on 12 of the 15 strategic plan metrics, including achieving the targeted 1% operating surplus while operating in an environment of tremendous change and uncertainty. This article describes the strategic management system used at TEGH and demonstrates the formal integration of the plan into its operating and decision making processes. It also provides examples of the metrics, their use in decision-making and the variance reporting and improvement mechanisms. The article also demonstrates that a measurement-oriented approach to the planning and delivery of community hospital service is both achievable and valuable in terms of accountability and organizational responsiveness.
Methodology to Calculate the ACE and HPQ Metrics Used in the Wave Energy Prize
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driscoll, Frederick R; Weber, Jochem W; Jenne, Dale S
The U.S. Department of Energy's Wave Energy Prize Competition encouraged the development of innovative deep-water wave energy conversion technologies that at least doubled device performance above the 2014 state of the art. Because levelized cost of energy (LCOE) metrics are challenging to apply equitably to new technologies where significant uncertainty exists in design and operation, the prize technical team developed a reduced metric as proxy for LCOE, which provides an equitable comparison of low technology readiness level wave energy converter (WEC) concepts. The metric is called 'ACE' which is short for the ratio of the average climate capture width to the characteristic capital expenditure. The methodology and application of the ACE metric used to evaluate the performance of the technologies that competed in the Wave Energy Prize are explained in this report.
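A minimal sketch of the ACE ratio as described (average climate capture width divided by characteristic capital expenditure), using entirely hypothetical numbers; the prize's actual procedures for averaging over wave climates and estimating characteristic capital expenditure are considerably more involved.

```python
# Hypothetical inputs for one WEC concept (illustrative values only).
avg_climate_capture_width_m = 14.2   # average capture width across test wave climates (m)
characteristic_capex_usd = 9.5e6     # characteristic capital expenditure ($)

ace_m_per_million_usd = avg_climate_capture_width_m / (characteristic_capex_usd / 1e6)
print(f"ACE = {ace_m_per_million_usd:.2f} m per $M")
```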
What are the Ingredients of a Scientifically and Policy-Relevant Hydrologic Connectivity Metric?
NASA Astrophysics Data System (ADS)
Ali, G.; English, C.; McCullough, G.; Stainton, M.
2014-12-01
While the concept of hydrologic connectivity is of significant importance to both researchers and policy makers, there is no consensus on how to express it in quantitative terms. This lack of consensus was further exacerbated by recent rulings of the U.S. Supreme Court that rely on the idea of "significant nexuses": critical degrees of landscape connectivity now have to be demonstrated to warrant environmental protection under the Clean Water Act. Several indicators of connectivity have been suggested in the literature, but they are often computationally intensive and require soil water content information, a requirement that makes them inapplicable over large, data-poor areas for which management decisions are needed. Here our objective was to assess the extent to which the concept of connectivity could become more operational by: 1) drafting a list of potential, watershed-scale connectivity metrics; 2) establishing a list of criteria for ranking the performance of those metrics; 3) testing them in various landscapes. Our focus was on a dozen agricultural Prairie watersheds where the interaction between near-level topography, perennial and intermittent streams, pothole wetlands and man-made drains renders the estimation of connectivity difficult. A simple procedure was used to convert RADARSAT images, collected between 1997 and 2011, into binary maps of saturated versus non-saturated areas. Several pattern-based and graph-theoretic metrics were then computed for a dynamic assessment of connectivity. The metrics performance was compared with regards to their sensitivity to antecedent precipitation, their correlation with watershed discharge, and their ability to portray aggregation effects. Results show that no single connectivity metric could satisfy all our performance criteria. Graph-theoretic metrics however seemed to perform better in pothole-dominated watersheds, thus highlighting the need for region-specific connectivity assessment frameworks.
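As one example of a simple pattern-based connectivity indicator that could be computed from such binary saturation maps (not necessarily one of the metrics the authors tested), the sketch below reports the fraction of saturated pixels that fall in the single largest connected cluster; the map is synthetic.

```python
import numpy as np
from scipy import ndimage

def largest_cluster_fraction(saturated):
    """Fraction of all saturated pixels belonging to the largest connected cluster."""
    labels, n = ndimage.label(saturated)
    if n == 0:
        return 0.0
    sizes = ndimage.sum(saturated, labels, index=range(1, n + 1))
    return float(sizes.max() / saturated.sum())

# Hypothetical binary map derived from a RADARSAT scene (1 = saturated).
rng = np.random.default_rng(2)
wet_map = (rng.random((200, 200)) > 0.6).astype(int)
print("Largest-cluster fraction:", round(largest_cluster_fraction(wet_map), 3))
```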
Synergetic Organization in Speech Rhythm
NASA Astrophysics Data System (ADS)
Cummins, Fred
The Speech Cycling Task is a novel experimental paradigm developed together with Robert Port and Keiichi Tajima at Indiana University. In a task of this sort, subjects repeat a phrase containing multiple prominent, or stressed, syllables in time with an auditory metronome, which can be simple or complex. A phase-based collective variable is defined in the acoustic speech signal. This paper reports on two experiments using speech cycling which together reveal many of the hallmarks of hierarchically coupled oscillatory processes. The first experiment requires subjects to place the final stressed syllable of a small phrase at specified phases within the overall Phrase Repetition Cycle (PRC). It is clearly demonstrated that only three patterns, characterized by phases around 1/3, 1/2, or 2/3, are reliably produced, and these points are attractors for other target phases. The system is thus multistable, and the attractors correspond to stable couplings between the metrical foot and the PRC. A second experiment examines the behavior of these attractors at increased rates. Faster rates lead to mode jumps between attractors. Previous experiments have also illustrated hysteresis as the system moves from one mode to the next. The dynamical organization is particularly interesting from a modeling point of view, as there is no single part of the speech production system which cycles at the level of either the metrical foot or the phrase repetition cycle. That is, there is no continuous kinematic observable in the system. Nonetheless, there is strong evidence that the macroscopic behavior of the entire production system is correctly described as hierarchically coupled oscillators. There are many parallels between this organization and the forms of inter-limb coupling observed in locomotion and rhythmic manual tasks.
A general theory of multimetric indices and their properties
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William
2012-01-01
1. Stewardship of biological and ecological resources requires the ability to make integrative assessments of ecological integrity. One of the emerging methods for making such integrative assessments is multimetric indices (MMIs). These indices synthesize data, often from multiple levels of biological organization, with the goal of deriving a single index that reflects the overall effects of human disturbance. Despite the widespread use of MMIs, there is uncertainty about why this approach can be effective. An understanding of MMIs requires a quantitative theory that illustrates how the properties of candidate metrics relate to MMIs generated from those metrics. 2. We present the initial basis for such a theory by deriving the general mathematical characteristics of MMIs assembled from metrics. We then use the theory to derive quantitative answers to the following questions: Is there an optimal number of metrics to comprise an index? How does covariance among metrics affect the performance of the index derived from those metrics? And what are the criteria to decide whether a given metric will improve the performance of an index? 3. We find that the optimal number of metrics to be included in an index depends on the theoretical distribution of signal of the disturbance gradient contained in each metric. For example, if the rank-ordered parameters of a metric-disturbance regression can be described by a monotonically decreasing function, then an optimum number of metrics exists and can often be derived analytically. We derive the conditions by which adding a given metric can be expected to improve an index. 4. We find that the criterion defining such conditions depends nonlinearly on the signal of the disturbance gradient, the noise (error) of the metric, and the correlation of the metric errors. Importantly, we find that correlation among metric errors increases the signal required for the metric to improve the index. 5. The theoretical framework presented in this study provides the basis for understanding the properties of MMIs. It can also be useful throughout the index construction process. Specifically, it can be used to aid understanding of the benefits and limitations of combining metrics into indices; it can inform selection/collection of candidate metrics; and it can be used directly as a decision aid in effective index construction.
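A toy Monte Carlo, under assumed linear metric-disturbance relationships and a shared error component, illustrates the point in (4) that correlation among metric errors reduces the benefit of averaging metrics into an index; the numbers are illustrative only and do not reproduce the paper's analytical results.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_metrics, signal = 500, 5, 1.0
disturbance = rng.uniform(0, 1, n_sites)

def index_correlation(error_corr):
    # Correlated metric errors built from a shared component plus independent noise.
    shared = rng.normal(0, 1, n_sites)
    errors = (np.sqrt(error_corr) * shared[:, None]
              + np.sqrt(1 - error_corr) * rng.normal(0, 1, (n_sites, n_metrics)))
    metrics = signal * disturbance[:, None] + errors
    mmi = metrics.mean(axis=1)          # simple averaged multimetric index
    return np.corrcoef(mmi, disturbance)[0, 1]

for r in (0.0, 0.5, 0.9):
    print(f"error correlation {r:.1f}: index-disturbance correlation {index_correlation(r):.2f}")
```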
NASA Astrophysics Data System (ADS)
Acero, Juan A.; Arrizabalaga, Jon
2018-01-01
Urban areas are known to modify meteorological variables producing important differences in small spatial scales (i.e. microscale). These affect human thermal comfort conditions and the dispersion of pollutants, especially those emitted inside the urban area, which finally influence quality of life and the use of public open spaces. In this study, the diurnal evolution of meteorological variables measured in four urban spaces is compared with the results provided by ENVI-met (v 4.0). Measurements were carried out during 3 days with different meteorological conditions in Bilbao in the north of the Iberian Peninsula. The evaluation of the model accuracy (i.e. the degree to which modelled values approach measured values) was carried out with several quantitative difference metrics. The results for air temperature and humidity show a good agreement of measured and modelled values independently of the regional meteorological conditions. However, in the case of mean radiant temperature and wind speed, relevant differences are encountered highlighting the limitation of the model to estimate these meteorological variables precisely during diurnal cycles, in the considered evaluation conditions (sites and weather).
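The quantitative difference metrics are not itemized in the abstract; the sketch below shows three commonly used choices for this kind of microclimate model evaluation (RMSE, mean bias error, and Willmott's index of agreement), applied to hypothetical measured and modelled air temperatures.

```python
import numpy as np

def rmse(obs, mod):
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def mean_bias(obs, mod):
    return float(np.mean(mod - obs))

def willmott_d(obs, mod):
    """Willmott's index of agreement (0-1; 1 = perfect agreement)."""
    obs_mean = np.mean(obs)
    denom = np.sum((np.abs(mod - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return float(1.0 - np.sum((mod - obs) ** 2) / denom)

# Hypothetical hourly air temperature (deg C): measured at one site vs. modelled by ENVI-met.
measured = np.array([16.2, 17.0, 18.4, 20.1, 22.3, 23.0, 22.5, 21.0])
modelled = np.array([15.8, 16.9, 18.9, 20.8, 23.1, 23.6, 22.2, 20.4])

print("RMSE:", round(rmse(measured, modelled), 2),
      "MBE:", round(mean_bias(measured, modelled), 2),
      "d:", round(willmott_d(measured, modelled), 3))
```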
Harclerode, Melissa A; Macbeth, Tamzen W; Miller, Michael E; Gurr, Christopher J; Myers, Teri S
2016-12-15
As the environmental remediation industry matures, remaining sites often have significant underlying technical challenges and financial constraints. More often than not, significant remediation efforts at these "complex" sites have not achieved stringent, promulgated cleanup goals. Decisions then have to be made about whether and how to commit additional resources towards achieving those goals, which are often not achievable nor required to protect receptors. Guidance on cleanup approaches focused on evaluating and managing site-specific conditions and risks, rather than uniformly meeting contaminant cleanup criteria in all media, is available to aid in this decision. Although these risk-based cleanup approaches, such as alternative endpoints and adaptive management strategies, have been developed, they are under-utilized due to environmental, socio-economic, and risk perception barriers. Also, these approaches are usually implemented late in the project life cycle after unsuccessful remedial attempts to achieve stringent cleanup criteria. In this article, we address these barriers by developing an early decision framework to identify if site characteristics support sustainable risk management, and develop performance metrics and tools to evaluate and implement successful risk-based cleanup approaches. In addition, we address uncertainty and risk perception challenges by aligning risk-based cleanup approaches with the concepts of risk management and sustainable remediation. This approach was developed in the context of lessons learned from implementing remediation at complex sites, but as a framework can, and should, be applied to all sites undergoing remediation. Copyright © 2016 Elsevier Ltd. All rights reserved.
The psychometrics of mental workload: multiple measures are sensitive but divergent.
Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian
2015-02-01
A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.
Effects of Electric Vehicle Fast Charging on Battery Life and Vehicle Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Shirk; Jeffrey Wishart
2015-04-01
As part of the U.S. Department of Energy’s Advanced Vehicle Testing Activity, four new 2012 Nissan Leaf battery electric vehicles were instrumented with data loggers and operated over a fixed on-road test cycle. Each vehicle was operated over the test route, and charged twice daily. Two vehicles were charged exclusively by AC level 2 EVSE, while two were exclusively DC fast charged with a 50 kW charger. The vehicles were performance tested on a closed test track when new, and after accumulation of 50,000 miles. The traction battery packs were removed and laboratory tested when the vehicles were new, and at 10,000-mile intervals. Battery tests include constant-current discharge capacity, electric vehicle pulse power characterization test, and low peak power tests. The on-road testing was carried out through 70,000 miles, at which point the final battery tests were performed. The data collected over 70,000 miles of driving, charging, and rest are analyzed, including the resulting thermal conditions and power and cycle demands placed upon the battery. Battery performance metrics including capacity, internal resistance, and power capability obtained from laboratory testing throughout the test program are analyzed. Results are compared within and between the two groups of vehicles. Specifically, the impacts on battery performance, as measured by laboratory testing, are explored as they relate to battery usage and variations in conditions encountered, with a primary focus on effects due to the differences between AC level 2 and DC fast charging. The contrast between battery performance degradation and the effect on vehicle performance is also explored.
Strategies for a better performance of RPL under mobility in wireless sensor networks
NASA Astrophysics Data System (ADS)
Latib, Z. A.; Jamil, A.; Alduais, N. A. M.; Abdullah, J.; Audah, L. H. M.; Alias, R.
2017-09-01
A Wireless Sensor Network (WSN) is usually stationary, comprising static nodes. The increasing demand for mobility in applications such as environmental monitoring, medical care, home automation, and military operations raises the question of how the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) would perform under these mobility applications. This paper aims to understand the performance of RPL and to propose strategies for a better performance of RPL in mobility scenarios. To this end, the paper evaluates the performance of the RPL protocol under three different scenarios: sink and sensor nodes both static, a static sink with mobile sensor nodes, and sink and sensor nodes both mobile. The network scenarios are implemented in the Cooja simulator. A WSN consisting of 25 sensor nodes and one sink node is configured in the simulation environment. The simulation is varied over different packet rates and ContikiMAC's Clear Channel Assessment (CCA) rates. As performance metrics, RPL is evaluated in terms of packet delivery ratio (PDR), power consumption, and packet rate. The simulation results show that RPL provides a poor PDR in the mobility scenarios when compared to the static scenario. In addition, RPL consumes more power and increases its duty-cycle rate to support mobility when compared to the static scenario. Based on these findings, we suggest three strategies for a better performance of RPL in mobility scenarios. First, RPL should operate at lower packet rates when implemented in mobility scenarios. Second, RPL should be implemented with a higher duty-cycle rate. Lastly, the sink node should be positioned as close as possible to the center of the mobile network.
Goldberg, D; Kallan, M J; Fu, L; Ciccarone, M; Ramirez, J; Rosenberg, P; Arnold, J; Segal, G; Moritsugu, K P; Nathan, H; Hasz, R; Abt, P L
2017-12-01
The shortage of deceased-donor organs is compounded by donation metrics that fail to account for the total pool of possible donors, leading to ambiguous donor statistics. We sought to assess potential metrics of organ procurement organizations (OPOs) utilizing data from the Nationwide Inpatient Sample (NIS) from 2009-2012 and State Inpatient Databases (SIDs) from 2008-2014. A possible donor was defined as a ventilated inpatient death ≤75 years of age, without multi-organ system failure, sepsis, or cancer, whose cause of death was consistent with organ donation. These estimates were compared to patient-level data from chart review from two large OPOs. Among 2,907,658 inpatient deaths from 2009-2012, 96,028 (3.3%) were a "possible deceased-organ donor." The two proposed metrics of OPO performance were: (1) donation percentage (percentage of possible deceased-donors who become actual donors; range: 20.0-57.0%); and (2) organs transplanted per possible donor (range: 0.52-1.74). These metrics allow for comparisons of OPO performance and geographic-level donation rates, and identify areas in greatest need of interventions to improve donation rates. We demonstrate that administrative data can be used to identify possible deceased donors in the US and could be a data source for CMS to implement new OPO performance metrics in a standardized fashion. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
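Both proposed metrics are simple ratios once the possible-donor denominator has been estimated from administrative data; a sketch with hypothetical counts for one donation service area:

```python
# Hypothetical counts for one OPO service area (illustrative values only).
possible_donors = 1240       # ventilated inpatient deaths meeting the "possible donor" definition
actual_donors = 410
organs_transplanted = 1180

donation_percentage = 100.0 * actual_donors / possible_donors
organs_per_possible_donor = organs_transplanted / possible_donors

print(f"Donation percentage: {donation_percentage:.1f}%")                     # reported range: 20.0-57.0%
print(f"Organs transplanted per possible donor: {organs_per_possible_donor:.2f}")  # reported range: 0.52-1.74
```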
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frame). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
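A sketch of per-frame true-positive and false-negative rates of the kind the framework reports, computed from binary ground-truth and automatically segmented masks; the function names and synthetic volumes below are assumptions, not the module's actual API.

```python
import numpy as np

def frame_metrics(ground_truth, segmented):
    """Per-frame true-positive and false-negative rates for binary bone masks."""
    gt = ground_truth.astype(bool)
    seg = segmented.astype(bool)
    tp = np.logical_and(gt, seg).sum()
    fn = np.logical_and(gt, ~seg).sum()
    bone = gt.sum()
    return (tp / bone if bone else np.nan,    # correctly segmented bone
            fn / bone if bone else np.nan)    # missed bone

# Hypothetical stack of ultrasound frames: ground truth vs. automatic segmentation.
rng = np.random.default_rng(4)
gt_volume = rng.random((20, 64, 64)) > 0.8
seg_volume = np.logical_and(gt_volume, rng.random(gt_volume.shape) > 0.1)

per_frame = np.array([frame_metrics(g, s) for g, s in zip(gt_volume, seg_volume)])
print("Mean TPR: %.3f  SD: %.3f" % (per_frame[:, 0].mean(), per_frame[:, 0].std()))
```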
Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery
NASA Technical Reports Server (NTRS)
Le Vie, Lisa R.
2016-01-01
Accidents attributable to in-flight loss of control are the primary cause for fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time and recovery time and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.
An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN
Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie
2014-01-01
WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX. PMID:25250379
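For reference, the ETX link metric discussed above is itself a simple quantity: the expected number of transmissions needed for a successful delivery and acknowledgment, estimated from probe delivery ratios in each direction. A minimal sketch follows; the proposed mechanism additionally folds in long-term and short-term throughput observations, which are not modeled here.

```python
def etx(delivery_forward, delivery_reverse):
    """Expected transmission count for a link: ETX = 1 / (df * dr),
    where df and dr are the forward and reverse probe delivery ratios."""
    return 1.0 / (delivery_forward * delivery_reverse)

# Hypothetical links from a mesh node to two candidate next hops.
link_a = etx(0.95, 0.90)   # good link in both directions
link_b = etx(0.70, 0.60)   # lossy link
print(f"ETX a: {link_a:.2f}, ETX b: {link_b:.2f} -> prefer the lower-ETX link")
```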
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
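The advocated switch from Euclidean to cosine distance in the GMM mean supervector space can be illustrated with a toy example: two supervectors pointing in nearly the same direction but with different magnitudes (as might arise from gain differences between utterances of one speaker) are close under cosine distance but far apart under Euclidean distance. The vectors below are synthetic.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def euclidean_distance(u, v):
    return float(np.linalg.norm(u - v))

# Hypothetical GMM mean supervectors for two utterances of the same speaker,
# where one utterance is louder (roughly a scaled version of the other).
rng = np.random.default_rng(5)
base = rng.normal(0, 1, 1024)
utt1 = base + rng.normal(0, 0.05, base.shape)
utt2 = 1.4 * base + rng.normal(0, 0.05, base.shape)   # same direction, larger magnitude

print("cosine:", round(cosine_distance(utt1, utt2), 4),
      "euclidean:", round(euclidean_distance(utt1, utt2), 2))
```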
DOT National Transportation Integrated Search
2007-01-03
This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...
75 FR 14588 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-26
... information that DOE is developing to collect data on the status of activities, project progress, jobs created and retained, spend rates, and performance metrics under the American Recovery and Reinvestment Act of...
Stronger by Degrees: 2012-13 Accountability Report
ERIC Educational Resources Information Center
Kentucky Council on Postsecondary Education, 2014
2014-01-01
The annual "Accountability Report" produced by the Council on Postsecondary Education highlights the system's performance on the state-level metrics included in "Stronger by Degrees: A Strategic Agenda for Kentucky Postsecondary and Adult Education." For each metric, we outline steps taken to improve performance, as well as…
NASA Technical Reports Server (NTRS)
Kegelman, Jerome T.
1998-01-01
The advantage of managing organizations to minimize product development cycle time has been well established. This paper provides an overview of the wind tunnel testing cycle time reduction activities at Langley Research Center (LaRC) and gives the status of several improvements in the wind tunnel productivity and cost reductions that have resulted from these activities. Processes have been examined and optimized. Metric data from monitoring processes provides guidance for investments in advanced technologies. The most promising technologies under implementation today include the use of formally designed experiments, a diverse array of quick disconnect technology and the judicious use of advanced electronic and information technologies.
1982-03-01
pilot systems. Magnitude of the mutant error is classified as: o Program does not compute. o Program computes but does not run test data. o Program... and funds. While the test phase concludes the normal development cycle, one should realize that with software the development continues in the...
The balanced scorecard: sustainable performance assessment for forensic laboratories.
Houck, Max; Speaker, Paul J; Fleming, Arron Scott; Riley, Richard A
2012-12-01
The purpose of this article is to introduce the concept of the balanced scorecard into the laboratory management environment. The balanced scorecard is a performance measurement matrix designed to capture financial and non-financial metrics that provide insight into the critical success factors for an organization, effectively aligning organization strategy to key performance objectives. The scorecard helps organizational leaders by providing balance from two perspectives. First, it ensures an appropriate mix of performance metrics from across the organization to achieve operational excellence; thereby the balanced scorecard ensures that no single or limited group of metrics dominates the assessment process, possibly leading to long-term inferior performance. Second, the balanced scorecard helps leaders offset short term performance pressures by giving recognition and weight to long-term laboratory needs that, if not properly addressed, might jeopardize future laboratory performance. Copyright © 2012 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
2016-03-01
Performance Metrics, University of Waterloo, Permanganate Treatment of an Emplaced DNAPL Source (Thomson et al., 2007), Table 5.6 Remediation Performance Data... permanganate vs. peroxide/Fenton's for chemical oxidation). Poorer performance was generally observed when the Total CVOC was the contaminant metric... using a soluble carbon substrate (lactate), chemical oxidation using Fenton's reagent, and chemical oxidation using potassium permanganate. At...
Chang, M-C Oliver; Shields, J Erin
2017-06-01
To reliably measure at the low particulate matter (PM) levels needed to meet California's Low Emission Vehicle (LEV III) 3- and 1-mg/mile particulate matter (PM) standards, various approaches other than gravimetric measurement have been suggested for testing purposes. In this work, a feasibility study of solid particle number (SPN, d50 = 23 nm) and black carbon (BC) as alternatives to gravimetric PM mass was conducted, based on the relationship of these two metrics to gravimetric PM mass, as well as the variability of each of these metrics. More than 150 Federal Test Procedure (FTP-75) or Supplemental Federal Test Procedure (US06) tests were conducted on 46 light-duty vehicles, including port-fuel-injected and direct-injected gasoline vehicles, as well as several light-duty diesel vehicles equipped with diesel particle filters (LDD/DPF). For FTP tests, emission variability of gravimetric PM mass was found to be slightly less than that of either SPN or BC, whereas the opposite was observed for US06 tests. Emission variability of PM mass for LDD/DPF was higher than that of both SPN and BC, primarily because of higher PM mass measurement uncertainties (background and precision) near or below 0.1 mg/mile. While strong correlations were observed from both SPN and BC to PM mass, the slopes are dependent on engine technologies and driving cycles, and the proportionality between the metrics can vary over the course of the test. Replacement of the LEV III PM mass emission standard with one other measurement metric may imperil the effectiveness of emission reduction, as a correlation-based relationship may evolve over future technologies for meeting stringent greenhouse standards. Solid particle number and black carbon were suggested in place of PM mass for the California LEV III 1-mg/mile FTP standard. Their equivalence, proportionality, and emission variability in comparison to PM mass, based on a large light-duty vehicle fleet examined, are dependent on engine technologies and driving cycles. Such empirical derived correlations exhibit the limitation of using these metrics for enforcement and certification standards as vehicle combustion and after-treatment technologies advance.
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed on long intervals (e.g.: 30min), but only incorrect detections within a short interval (e.g.: 10s) may cause incorrect (i.e., missed+false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on distribution of incorrect beat detection over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over long interval (sensitivity and positive predictive value over 30min) and short interval (Area Under empirical cumulative distribution function (AUecdf) for short interval (i.e., 10s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on long interval (i.e., 30min) (ρ=-0.8 and p<0.05) and AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30min grouped the two detectors with lowest false alarm rates while AUecdf for sensitivity provided further information to identify the two beat detectors with highest false alarm rates as well, which was inseparable with sensitivity over 30min. Short interval performance metrics can provide insights on the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
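A sketch of the short-interval idea, assuming 10 s windows and a 150 ms beat-matching tolerance: compute sensitivity separately in each window, then summarize its empirical CDF by the area beneath it. The paper's exact construction of AUecdf may differ, and the beat times below are synthetic.

```python
import numpy as np

def short_interval_sensitivity(true_beats, detected_beats, record_len_s,
                               window_s=10.0, tol_s=0.15):
    """Beat-detection sensitivity computed separately in each short window."""
    sens = []
    for start in np.arange(0.0, record_len_s, window_s):
        in_win = [t for t in true_beats if start <= t < start + window_s]
        if not in_win:
            continue
        hits = sum(any(abs(t - d) <= tol_s for d in detected_beats) for t in in_win)
        sens.append(hits / len(in_win))
    return np.array(sens)

def au_ecdf(values):
    """Area under the empirical CDF of per-window sensitivity on [0, 1].

    For values in [0, 1] this area equals 1 - mean(values), so it stays small
    when the per-window metric is high in nearly every window.
    """
    x = np.sort(np.clip(values, 0.0, 1.0))
    grid = np.linspace(0.0, 1.0, 201)
    ecdf_on_grid = np.searchsorted(x, grid, side="right") / len(x)
    return float(np.mean(ecdf_on_grid))   # Riemann approximation of the area

# Hypothetical 60 s record at 1 Hz heart rate with a few missed detections.
true_beats = np.arange(0.5, 60.0, 1.0)
detected = np.delete(true_beats, [12, 13, 14, 40])   # misses clustered in one window
sens = short_interval_sensitivity(true_beats, detected, record_len_s=60.0)
print("per-window sensitivity:", np.round(sens, 2), "AUecdf:", round(au_ecdf(sens), 3))
```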
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that needs to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level and for level 2 IQCs, same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes <6 sigma level, the quality goal index (QGI) was <0.8 indicating the area requiring improvement to be imprecision except cholesterol whose QGI >1.2 indicated inaccuracy. This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
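The sigma metric and quality goal index referenced above follow standard formulas; a minimal sketch with illustrative values (not the study's data):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |Bias|) / CV, with all terms expressed as percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = Bias / (1.5 * CV); <0.8 suggests imprecision, >1.2 suggests inaccuracy."""
    return bias_pct / (1.5 * cv_pct)

# Hypothetical IQC/EQAS figures for one analyte (illustrative only).
tea, bias, cv = 10.0, 2.1, 3.2   # total allowable error %, bias %, CV %
print("Sigma: %.2f  QGI: %.2f" % (sigma_metric(tea, bias, cv),
                                  quality_goal_index(bias, cv)))
```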
Raghubar, Kimberly P; Lamba, Michael; Cecil, Kim M; Yeates, Keith Owen; Mahone, E Mark; Limke, Christina; Grosshans, David; Beckwith, Travis J; Ris, M Douglas
2018-06-01
Advances in radiation treatment (RT), specifically volumetric planning with detailed dose and volumetric data for specific brain structures, have provided new opportunities to study neurobehavioral outcomes of RT in children treated for brain tumor. The present study examined the relationship between biophysical and physical dose metrics and neurocognitive ability, namely learning and memory, 2 years post-RT in pediatric brain tumor patients. The sample consisted of 26 pediatric patients with brain tumor, 14 of whom completed neuropsychological evaluations on average 24 months post-RT. Prescribed dose and dose-volume metrics for specific brain regions were calculated including physical metrics (i.e., mean dose and maximum dose) and biophysical metrics (i.e., integral biological effective dose and generalized equivalent uniform dose). We examined the associations between dose-volume metrics (whole brain, right and left hippocampus), and performance on measures of learning and memory (Children's Memory Scale). Biophysical dose metrics were highly correlated with the physical metric of mean dose but not with prescribed dose. Biophysical metrics and mean dose, but not prescribed dose, correlated with measures of learning and memory. These preliminary findings call into question the value of prescribed dose for characterizing treatment intensity; they also suggest that biophysical dose has only a limited advantage compared to physical dose when calculated for specific regions of the brain. We discuss the implications of the findings for evaluating and understanding the relation between RT and neurocognitive functioning. © 2018 Wiley Periodicals, Inc.
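Generalized equivalent uniform dose is one of the biophysical metrics named above; the sketch below uses the standard power-law formulation over a differential dose-volume histogram. The tissue parameter a and the DVH values are assumptions for illustration, not the study's data, and the study's integral biological effective dose would be computed separately.

```python
import numpy as np

def generalized_eud(doses_gy, volumes, a):
    """Generalized equivalent uniform dose: gEUD = (sum_i v_i * D_i^a)^(1/a),
    where v_i are fractional volumes of the differential DVH and a is a
    tissue-specific parameter (a = 1 reduces to the mean dose)."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    d = np.asarray(doses_gy, dtype=float)
    return float(np.power(np.sum(v * np.power(d, a)), 1.0 / a))

# Hypothetical differential DVH for one hippocampus (dose bins in Gy, fractional volumes).
dose_bins = [5, 15, 25, 35, 45]
frac_vol = [0.10, 0.25, 0.30, 0.25, 0.10]
print("mean dose:", generalized_eud(dose_bins, frac_vol, a=1.0),
      "gEUD (a=5):", round(generalized_eud(dose_bins, frac_vol, a=5.0), 1))
```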
ERIC Educational Resources Information Center
King, Donald W.; Boyce, Peter B.; Montgomery, Carol Hansen; Tenopir, Carol
2003-01-01
Focuses on library economic metrics, and presents a conceptual framework for library economic metrics including service input and output, performance, usage, effectiveness, outcomes, impact, and cost and benefit comparisons. Gives examples of these measures for comparison of library electronic and print collections and collection services.…
Synchronization of multi-agent systems with metric-topological interactions.
Wang, Lin; Chen, Guanrong
2016-09-01
A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all the agents within a circle of a constant radius, and can furthermore interact with some distant agents to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions imposed only on system parameters and agent initial states are presented, which ensure achieving synchronization of the whole group of agents. It reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed is investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. In practically all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving the highest frequency of synchronization, the fastest convergence rate, and the smallest heading difference.
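A sketch of the hybrid neighbor-selection rule as described (metric neighbors within a fixed radius, topologically topped up to a minimum neighbor count); the radius, neighbor count, and positions are illustrative, and the full model's heading-update dynamics are not shown.

```python
import numpy as np

def hybrid_neighbors(positions, i, radius, k_min):
    """Neighbors of agent i under the hybrid rule: every agent within `radius`,
    plus the nearest more-distant agents until at least `k_min` neighbors are reached."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf                          # exclude the agent itself
    order = np.argsort(d)
    metric = [j for j in order if d[j] <= radius]
    if len(metric) >= k_min:
        return metric
    extra = [j for j in order if d[j] > radius][: k_min - len(metric)]
    return metric + extra

# Hypothetical planar positions of 30 agents in a 10 x 10 arena.
rng = np.random.default_rng(6)
pos = rng.uniform(0, 10, size=(30, 2))
print("agent 0 neighbors:", hybrid_neighbors(pos, 0, radius=1.5, k_min=6))
```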
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
...-peak hours; and (4) additional information on equipment types affected and kV of lines affected. Items... Regulatory Commission (Commission or FERC), among other actions, work with regional transmission...
Beyond Benchmarking: Value-Adding Metrics
ERIC Educational Resources Information Center
Fitz-enz, Jac
2007-01-01
HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…
The Consequences of Using One Assessment System to Pursue Two Objectives
ERIC Educational Resources Information Center
Neal, Derek
2013-01-01
Education officials often use one assessment system both to create measures of student achievement and to create performance metrics for educators. However, modern standardized testing systems are not designed to produce performance metrics for teachers or principals. They are designed to produce reliable measures of individual student achievement…
Algal bioassessment metrics for wadeable streams and rivers of Maine, USA
Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth
2011-01-01
Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.
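The weighted-average optimum underlying such tolerance assignments is an abundance-weighted mean of an environmental gradient across sites; the sketch below uses hypothetical diatom abundances and illustrative class thresholds (the study's actual thresholds are not given in the abstract).

```python
import numpy as np

def weighted_average_optimum(abundances, gradient_values):
    """Weighted-average optimum of a taxon for an environmental gradient:
    the abundance-weighted mean of the gradient value across sites."""
    a = np.asarray(abundances, dtype=float)
    x = np.asarray(gradient_values, dtype=float)
    return float(np.sum(a * x) / np.sum(a))

# Hypothetical relative abundances of one taxon at five sites and the sites'
# % developed watershed; class thresholds below are illustrative only.
abund = [12, 30, 5, 0, 2]
pct_developed = [3, 8, 25, 60, 45]
opt = weighted_average_optimum(abund, pct_developed)
print("WA optimum (% developed):", round(opt, 1),
      "-> class:", "sensitive" if opt < 15 else "intermediate" if opt < 40 else "tolerant")
```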
Early Warning Look Ahead Metrics: The Percent Milestone Backlog Metric
NASA Technical Reports Server (NTRS)
Shinn, Stephen A.; Anderson, Timothy P.
2017-01-01
All complex development projects experience delays and corresponding backlogs of their project control milestones during their acquisition lifecycles. NASA Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) teamed with The Aerospace Corporation (Aerospace) to develop a collection of Early Warning Look Ahead metrics that would provide GSFC leadership with some independent indication of the programmatic health of GSFC flight projects. As part of the collection of Early Warning Look Ahead metrics, the Percent Milestone Backlog metric is particularly revealing, and has utility as a stand-alone execution performance monitoring tool. This paper describes the purpose, development methodology, and utility of the Percent Milestone Backlog metric. The other four Early Warning Look Ahead metrics are also briefly discussed. Finally, an example of the use of the Percent Milestone Backlog metric in providing actionable insight is described, along with examples of its potential use in other commodities.
Cradle-to-grave life cycle assessment of syngas electricity from woody biomass residues
Hongmei Gu; Richard Bergman
2017-01-01
Forest restoration and fire suppression activities in the western United States have resulted in large volumes of low-to-no value residues. An environmental assessment would enable greater use while maintaining environmental sustainability of these residues for energy products. One internationally accepted sustainable metric tool that can assess environmental impacts...
Optimizing product life cycle processes in design phase
NASA Astrophysics Data System (ADS)
Faneye, Ola. B.; Anderl, Reiner
2002-02-01
Life cycle concepts not only serve as a basis for helping product developers understand the dependencies between products and their life cycles, they also help in identifying potential opportunities for product improvement. Traditional concepts focus mainly on energy and material flow across life phases, requiring metrics derived from a reference product: knowledge of life cycle processes gained from an existing product is reused directly in its redesign. Depending on sales volume, however, the environmental impact accrued before the product is optimized can be substantial. With modern information technologies, computer-aided life cycle methodologies can be applied well before product use. On the basis of a virtual prototype, life cycle processes are analyzed and optimized using simulation techniques. This preventive approach not only helps minimize (or even eliminate) environmental burdens caused by the product, it also avoids the costs incurred by changes to the real product. The paper highlights the relationship between product and life cycle and presents a computer-based methodology for optimizing the product life cycle during design, as presented by SFB 392: Design for Environment - Methods and Tools at Technical University of Darmstadt.
Efficiency improvements of offline metrology job creation
NASA Astrophysics Data System (ADS)
Zuniga, Victor J.; Carlson, Alan; Podlesny, John C.; Knutrud, Paul C.
1999-06-01
Progress of the first lot of a new design through the production line is watched very closely. All performance metrics, cycle-time, in-line measurement results and final electrical performance are critical. Rapid movement of this lot through the line has serious time-to-market implications. Having this material waiting at a metrology operation for an engineer to create a measurement job plan wastes valuable turnaround time. Further, efficient use of a metrology system is compromised by the time required to create and maintain these measurement job plans. Thus, having a method to develop metrology job plans prior to the actual running of the material through the manufacturing area can significantly improve both cycle time and overall equipment efficiency. Motorola and Schlumberger have worked together to develop and test such a system. The Remote Job Generator (RJG) creates job plans for new devices in a manufacturing process from an NT host or workstation, offline. This increases the system time available for making production measurements, decreases turnaround time on job plan creation and editing, and improves consistency across job plans. Most importantly, this allows job plans for new devices to be available before the first wafers of the device arrive at the tool for measurement. The software also includes a database manager which allows updates of existing job plans to incorporate measurement changes required by process changes or measurement optimization. This paper will review the results of productivity enhancements through increased metrology utilization and decreased cycle time associated with the use of RJG. Finally, improvements in process control through better control of job plans across different devices and layers will be discussed.
Research and development on performance models of thermal imaging systems
NASA Astrophysics Data System (ADS)
Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan
2009-07-01
Traditional ACQUIRE models predict the discrimination tasks of detection, target orientation, recognition, and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; the unresolved questions in performance assessment of TIS are indicated; and finally, the directions for further development of TIS performance models are discussed.
Software risk management through independent verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Zhou, Tong C.; Wood, Ralph
1995-01-01
Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need an ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Questions/Metrics model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.
Effects of Nutrient Enrichment on Microbial Communities and Carbon Cycling in Wetland Soils
NASA Astrophysics Data System (ADS)
Hartman, W.; Neubauer, S. C.; Richardson, C. J.
2013-12-01
Soil microbial communities are responsible for catalyzing biogeochemical transformations underlying critical wetland functions, including cycling of carbon (C) and nutrients, and emissions of greenhouse gasses (GHG). Alteration of nutrient availability in wetland soils may commonly occur as the result of anthropogenic impacts including runoff from human land uses in uplands, alteration of hydrology, and atmospheric deposition. However, the impacts of altered nutrient availability on microbial communities and carbon cycling in wetland soils are poorly understood. To assess these impacts, soil microbial communities and carbon cycling were determined in replicate experimental nutrient addition plots (control, +N, +P, +NP) across several wetland types, including pocosin peat bogs (NC), freshwater tidal marshes (GA), and tidal salt marshes (SC). Microbial communities were determined by pyrosequencing (Roche 454) extracted soil DNA, targeting both bacteria (16S rDNA) and fungi (LSU) at a depth of ca. 1000 sequences per plot. Wetland carbon cycling was evaluated using static chambers to determine soil GHG fluxes, and plant inclusion chambers were used to determine ecosystem C cycling. Soil bacterial communities responded to nutrient addition treatments in freshwater and tidal marshes, while fungal communities did not respond to treatments in any of our sites. We also compared microbial communities to continuous biogeochemical variables in soil, and found that bacterial community composition was correlated only with the content and availability of soil phosphorus, while fungi responded to phosphorus stoichiometry and soil pH. Surprisingly, we did not find a significant effect of our nutrient addition treatments on most metrics of carbon cycling. However, we did find that several metrics of soil carbon cycling appeared much more related to soil phosphorus than to nitrogen or soil carbon pools. Finally, while overall microbial community composition was weakly correlated with soil carbon cycling, our work did identify a small number of individual taxonomic groups that were more strongly correlated with soil CO2 flux. These results suggest that a small number of microbial groups may potentially serve as keystone taxa (and functional indicators), which simple community fingerprinting approaches may overlook. Our results also demonstrate strong effects of soil phosphorus availability on both microbial communities and soil carbon cycling, even in wetland types traditionally considered to be nitrogen limited.
AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F
2015-01-01
Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
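Several of the safety and efficiency metrics above are simple aggregates over the simulator's sampled data streams. As one hedged example, total instrument tip path length is the summed distance between successive tip positions; the sketch below is generic and not tied to NeuroTouch's actual data format or sampling rate.

```python
import numpy as np

def total_tip_path_length(tip_positions):
    """Sum of Euclidean distances between successive 3-D tip samples (sketch)."""
    pts = np.asarray(tip_positions, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# toy trajectory of four samples (millimetres)
print(total_tip_path_length([[0, 0, 0], [1, 0, 0], [1, 2, 0], [1, 2, 2]]))  # -> 5.0
```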
Utility of Satellite Remote Sensing for Land-Atmosphere Coupling and Drought Metrics
NASA Technical Reports Server (NTRS)
Roundy, Joshua K.; Santanello, Joseph A.
2017-01-01
Feedbacks between the land and the atmosphere can play an important role in the water cycle and a number of studies have quantified Land-Atmosphere (L-A) interactions and feedbacks through observations and prediction models. Due to the complex nature of L-A interactions, the observed variables are not always available at the needed temporal and spatial scales. This work derives the Coupling Drought Index (CDI) solely from satellite data and evaluates the input variables and the resultant CDI against in-situ data and reanalysis products. NASA's AQUA satellite and retrievals of soil moisture and lower tropospheric temperature and humidity properties are used as input. Overall, the AQUA-based CDI and its inputs perform well at a point, spatially, and in time (trends) compared to in-situ and reanalysis products. In addition, this work represents the first time that in-situ observations were utilized for the coupling classification and CDI. The combination of in-situ and satellite remote sensing CDI is unique and provides an observational tool for evaluating models at local and large scales. Overall, results indicate that there is sufficient information in the signal from simultaneous measurements of the land and atmosphere from satellite remote sensing to provide useful information for applications of drought monitoring and coupling metrics.
NASA Technical Reports Server (NTRS)
Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.
2010-01-01
The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.
Dynamic allocation of attention to metrical and grouping accents in rhythmic sequences.
Kung, Shu-Jen; Tzeng, Ovid J L; Hung, Daisy L; Wu, Denise H
2011-04-01
Most people find it easy to perform rhythmic movements in synchrony with music, which reflects their ability to perceive the temporal periodicity and to allocate attention in time accordingly. Musicians and non-musicians were tested in a click localization paradigm in order to investigate how grouping and metrical accents in metrical rhythms influence attention allocation, and to reveal the effect of musical expertise on such processing. We performed two experiments in which the participants were required to listen to isochronous metrical rhythms containing superimposed clicks and then to localize the click on graphical and ruler-like representations with and without grouping structure information, respectively. Both experiments revealed metrical and grouping influences on click localization. Musical expertise improved the precision of click localization, especially when the click coincided with a metrically strong beat. Critically, although all participants located the click accurately at the beginning of an intensity group, only musicians located it precisely when it coincided with a strong beat at the end of the group. Removal of the visual cue of grouping structures enhanced these effects in musicians and reduced them in non-musicians. These results indicate that musical expertise not only enhances attention to metrical accents but also heightens sensitivity to perceptual grouping.
Resilience-based performance metrics for water resources management under uncertainty
NASA Astrophysics Data System (ADS)
Roach, Tom; Kapelan, Zoran; Ledbetter, Ralph
2018-06-01
This paper aims to develop new, resilience type metrics for long-term water resources management under uncertain climate change and population growth. Resilience is defined here as the ability of a water resources management system to 'bounce back', i.e. absorb and then recover from a water deficit event, restoring the normal system operation. Ten alternative metrics are proposed and analysed addressing a range of different resilience aspects including duration, magnitude, frequency and volume of related water deficit events. The metrics were analysed on a real-world case study of the Bristol Water supply system in the UK and compared with current practice. The analyses included an examination of metrics' sensitivity and correlation, as well as a detailed examination into the behaviour of metrics during water deficit periods. The results obtained suggest that multiple metrics which cover different aspects of resilience should be used simultaneously when assessing the resilience of a water resources management system, leading to a more complete understanding of resilience compared with current practice approaches. It was also observed that calculating the total duration of a water deficit period provided a clearer and more consistent indication of system performance compared to splitting the deficit periods into the time to reach and time to recover from the worst deficit events.
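Most of the proposed metrics are built from the individual water deficit events in a simulated supply time series. A minimal sketch of that building block, identifying events where demand exceeds supply and reporting their duration, maximum magnitude, and volume (the names, units, and toy series are illustrative, not from the Bristol Water case study):

```python
import numpy as np

def deficit_events(supply, demand):
    """Return duration, maximum magnitude, and total volume of each deficit
    event (contiguous run of time steps where supply < demand)."""
    deficit = np.maximum(np.asarray(demand, float) - np.asarray(supply, float), 0.0)
    events, current = [], []
    for d in deficit:
        if d > 0:
            current.append(d)
        elif current:
            events.append(current)
            current = []
    if current:
        events.append(current)
    return [{"duration": len(e), "magnitude": max(e), "volume": sum(e)} for e in events]

# toy weekly series in arbitrary volume units
supply = [10, 10, 8, 7, 10, 10, 9, 6, 6, 10]
demand = [9] * 10
print(deficit_events(supply, demand))
```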
Multi-metric calibration of hydrological model to capture overall flow regimes
NASA Astrophysics Data System (ADS)
Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian
2016-08-01
Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rating of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, representing the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than by the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rating. However, the model performance for middle flow magnitude was not significantly improved because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to the uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed markedly, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for integrated water resources management, and to improve the simulation of overall flow regimes.
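A minimal sketch of the equally weighted aggregation idea: each flow-regime metric contributes a normalized error, and the calibration objective is their unweighted mean. The normalization and the toy metric values are assumptions, not the study's exact objective function.

```python
import numpy as np

def equally_weighted_objective(simulated_metrics, observed_metrics):
    """Mean relative absolute error over several flow-regime metrics (sketch)."""
    sim = np.asarray(simulated_metrics, dtype=float)
    obs = np.asarray(observed_metrics, dtype=float)
    rel_err = np.abs(sim - obs) / np.where(obs != 0, np.abs(obs), 1.0)
    return float(rel_err.mean())

# four hypothetical metrics: high-flow magnitude, median flow, low-flow magnitude, flood timing (day of year)
print(equally_weighted_objective([120.0, 35.0, 4.2, 150.0], [100.0, 30.0, 5.0, 140.0]))
```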
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for evaluating hydrologic model performance.
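To make the symbol-based metrics concrete, the sketch below discretizes a series into quantile symbols and computes the mean information gain as the conditional entropy of the next symbol given the preceding word, H(L) - H(L-1). The alphabet size, word length, and toy series are assumptions for illustration; the paper's exact definitions of effective measure complexity and fluctuation complexity are not reproduced here.

```python
import numpy as np
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map a series to integer symbols using quantile bins."""
    cuts = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, cuts)

def block_entropy(symbols, L):
    """Shannon entropy (bits) of words of length L."""
    words = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def mean_information_gain(symbols, L=2):
    """Conditional entropy of the next symbol given the previous L-1 symbols."""
    return block_entropy(symbols, L) - block_entropy(symbols, L - 1)

rng = np.random.default_rng(1)
flow = np.cumsum(rng.normal(size=500)) + 100.0   # toy streamflow-like series
print(round(mean_information_gain(symbolize(flow)), 3))
```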
Xie, Y; Zhang, Y; Qin, W; Lu, S; Ni, C; Zhang, Q
2017-03-01
Increasing DTI studies have demonstrated that white matter microstructural abnormalities play an important role in type 2 diabetes mellitus-related cognitive impairment. In this study, the diffusional kurtosis imaging method was used to investigate WM microstructural alterations in patients with type 2 diabetes mellitus and to detect associations between diffusional kurtosis imaging metrics and clinical/cognitive measurements. Diffusional kurtosis imaging and cognitive assessments were performed on 58 patients with type 2 diabetes mellitus and 58 controls. Voxel-based intergroup comparisons of diffusional kurtosis imaging metrics were conducted, and ROI-based intergroup comparisons were further performed. Correlations between the diffusional kurtosis imaging metrics and cognitive/clinical measurements were assessed after controlling for age, sex, and education in both patients and controls. Altered diffusion metrics were observed in the corpus callosum, the bilateral frontal WM, the right superior temporal WM, the left external capsule, and the pons in patients with type 2 diabetes mellitus compared with controls. The splenium of the corpus callosum and the pons had abnormal kurtosis metrics in patients with type 2 diabetes mellitus. Additionally, altered diffusion metrics in the right prefrontal WM were significantly correlated with disease duration and attention task performance in patients with type 2 diabetes mellitus. With both conventional diffusion and additional kurtosis metrics, diffusional kurtosis imaging can provide additional information on WM microstructural abnormalities in patients with type 2 diabetes mellitus. Our results indicate that WM microstructural abnormalities occur before cognitive decline and may be used as neuroimaging markers for predicting the early cognitive impairment in patients with type 2 diabetes mellitus. © 2017 by American Journal of Neuroradiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, S. B.; Bihari, B.; Biruduganti, M.
Flame chemiluminescence is widely acknowledged to be an indicator of heat release rate in premixed turbulent flames that are representative of gas turbine combustion. Though heat release rate is an important metric for evaluating combustion strategies in reciprocating engine systems, its correlation with flame chemiluminescence is not well studied. To address this gap an experimental study was carried out in a single-cylinder natural gas fired reciprocating engine that could simulate turbocharged conditions with exhaust gas recirculation. Crank angle resolved spectra (266-795 nm) of flame luminosity were measured for various operational conditions by varying the ignition timing for MBT conditions and by holding the speed at 1800 rpm and Brake Mean Effective Pressure (BMEP) at 12 bar. The effect of dilution on CO2* chemiluminescence intensities was studied by varying the global equivalence ratio (0.6-1.0) and by varying the exhaust gas recirculation rate. It was attempted to relate the measured chemiluminescence intensities to thermodynamic metrics of importance to engine research -- in-cylinder bulk gas temperature and heat release rate (HRR) calculated from measured cylinder pressure signals. The peak of the measured CO2* chemiluminescence intensities coincided with peak pressures within ±2 CAD for all test conditions. For each combustion cycle, the peaks of heat release rate, spectral intensity and temperature occurred in that sequence, well separated temporally. The peak heat release rates preceded the peak chemiluminescent emissions by 3.8-9.5 CAD, whereas the peak temperatures trailed by 5.8-15.6 CAD. Such a temporal separation precludes correlations on a crank-angle resolved basis. However, the peak cycle heat release rates and, to a lesser extent, the peak cycle temperatures correlated well with the chemiluminescent emission from CO2*. Such observations point towards the potential use of flame chemiluminescence to monitor peak bulk gas temperatures as well as peak heat release rates in natural gas fired reciprocating engines.
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
Detecting population recovery using gametic disequilibrium-based effective population size estimates
David A. Tallmon; Robin S. Waples; Dave Gregovich; Michael K. Schwartz
2012-01-01
Recovering populations often must meet specific growth rate or abundance targets before their legal status can be changed from endangered or threatened. While the efficacy, power, and performance of population metrics to infer trends in declining populations has received considerable attention, how these same metrics perform when populations are increasing is less...
Language Games: University Responses to Ranking Metrics
ERIC Educational Resources Information Center
Heffernan, Troy A.; Heffernan, Amanda
2018-01-01
League tables of universities that measure performance in various ways are now commonplace, with numerous bodies providing their own rankings of how institutions throughout the world are seen to be performing on a range of metrics. This paper uses Lyotard's notion of language games to theorise that universities are regaining some power over being…
Design and Implementation of Performance Metrics for Evaluation of Assessments Data
ERIC Educational Resources Information Center
Ahmed, Irfan; Bhatti, Arif
2016-01-01
Evocative evaluation of assessment data is essential to quantify the achievements at course and program levels. The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at the course levels for program accreditation. Even though…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... performance and service quality of intercity passenger train operations. In compliance with the statute, the FRA and Amtrak jointly drafted performance metrics and standards for intercity passenger rail service... and Standards for Intercity Passenger Rail Service under Section 207 of the Passenger Rail Investment...
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account similar image-based quality attributes as introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs as well as why some of them failed to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
NASA Technical Reports Server (NTRS)
Forbes, Kevin F.; Cyr, Chris St
2012-01-01
During solar cycle 22, a very intense geomagnetic storm on 13 March 1989 contributed to the collapse of the Hydro-Quebec power system in Canada. This event clearly demonstrated that geomagnetic storms have the potential to lead to blackouts. This paper addresses whether geomagnetic activity challenged power system reliability during solar cycle 23. Operations by PJM Interconnection, LLC (hereafter PJM), a regional transmission organization in North America, are examined over the period 1 April 2002 through 30 April 2004. During this time PJM coordinated the movement of wholesale electricity in all or parts of Delaware, Maryland, New Jersey, Ohio, Pennsylvania, Virginia, West Virginia, and the District of Columbia in the United States. We examine the relationship between a proxy of geomagnetically induced currents (GICs) and a metric of challenged reliability. In this study, GICs are proxied using magnetometer data from a geomagnetic observatory located just outside the PJM control area. The metric of challenged reliability is the incidence of out-of-economic-merit order dispatching due to adverse reactive power conditions. The statistical methods employed make it possible to disentangle the effects of GICs on power system operations from purely terrestrial factors. The results of the analysis indicate that geomagnetic activity can significantly increase the likelihood that the system operator will dispatch generating units based on system stability considerations rather than economic merit.
Multi-mode evaluation of power-maximizing cross-flow turbine controllers
Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James; ...
2017-09-21
A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.
Health and Well-Being Metrics in Business: The Value of Integrated Reporting.
Pronk, Nicolaas P; Malan, Daniel; Christie, Gillian; Hajat, Cother; Yach, Derek
2018-01-01
Health and well-being (HWB) are material to sustainable business performance. Yet, corporate reporting largely lacks the intentional inclusion of HWB metrics. This brief report presents an argument for inclusion of HWB metrics into existing standards for corporate reporting. A Core Scorecard and a Comprehensive Scorecard, designed by a team of subject matter experts, based on available evidence of effectiveness, and organized around the categories of Governance, Management, and Evidence of Success, may be integrated into corporate reporting efforts. Pursuit of corporate integrated reporting requires corporate governance and ethical leadership and values that ultimately align with environmental, social, and economic performance. Agreement on metrics that intentionally include HWB may allow for integrated reporting that has the potential to yield significant value for business and society alike.
Using Publication Metrics to Highlight Academic Productivity and Research Impact
Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.
2016-01-01
This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging metrics based on document-level metrics. Publication metrics can be used for a variety of purposes for tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
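As one concrete example of the publication-based measures mentioned above, the h-index is the largest h such that an author has h papers each cited at least h times. A short sketch using that standard definition:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```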
Assessment of semi-active friction dampers
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo Braga; Coelho, Humberto Tronconi; Lepore Neto, Francisco Paulo; Mafhoud, Jarir
2017-09-01
The use of friction dampers has been widely proposed for a variety of mechanical systems for which applying viscoelastic materials, fluid-based dampers or other viscous dampers is impossible. An important example is the application of friction dampers in aircraft engines to reduce the blades' vibration amplitudes. In most cases, friction dampers have been studied in a passive manner, but significant improvements can be achieved by controlling the normal force in the contact region. The aim of this paper is to present and study five control strategies for friction dampers based on three different hysteresis cycles, using the Harmonic Balance Method (HBM) together with numerical and experimental analyses. The first control strategy uses the friction force as a resistance when the system is deviating from its equilibrium position. The second control strategy maximizes the energy removal in each harmonic oscillation cycle by calculating the optimal normal force based on the last displacement peak. The third control strategy combines the first strategy with homogeneous modulation of the friction force. Finally, the last two strategies attempt to predict the system's movement based on its velocity and acceleration and on knowledge of its physical properties. Numerical and experimental studies are performed with these five strategies to define the performance metrics. The experimental testing rig is fully identified and its parameters are used for the numerical simulations. The obtained results show the satisfactory performance of the friction damper with the selected strategies and good agreement between the numerical and experimental results.
NASA Astrophysics Data System (ADS)
McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.
2018-02-01
Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
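A rough illustration of the idea of blending coherent and noncoherent decision metrics, with the blend weight set by the carrier-loop SNR. The specific weighting used here is an assumption for demonstration only; the paper derives the optimum weighting from the maximum-likelihood formulation.

```python
import numpy as np

def partially_coherent_decision(received, candidates, loop_snr):
    """Pick the candidate MPSK sequence maximizing a weighted sum of the
    coherent (real-part correlation) and noncoherent (envelope) metrics (sketch)."""
    r = np.asarray(received, dtype=complex)
    w = loop_snr / (1.0 + loop_snr)          # assumed weighting, not the ML-derived one
    scores = []
    for cand in candidates:
        corr = np.vdot(np.asarray(cand, dtype=complex), r)   # sum of r_k * conj(s_k)
        scores.append(w * corr.real + (1.0 - w) * np.abs(corr))
    return int(np.argmax(scores))

# toy QPSK example: noiseless received sequence with a small residual phase error
true_seq = np.exp(1j * np.pi / 2 * np.array([0, 1, 3]))
wrong_seq = np.exp(1j * np.pi / 2 * np.array([1, 1, 3]))
received = true_seq * np.exp(1j * 0.2)
print(partially_coherent_decision(received, [true_seq, wrong_seq], loop_snr=10.0))  # -> 0
```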
Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.
2016-01-01
Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
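The threshold-calibration step can be pictured as a simple ROC sweep: for each candidate log2-ratio cutoff, compare calls against per-probe truth labels (here taken from the higher-resolution array) and keep the cutoff with the best true-positive/false-positive trade-off. This is a generic sketch in the spirit of the described approach, not the CNV-ROC implementation; the Youden's J criterion and the toy data are assumptions.

```python
import numpy as np

def calibrate_log2_threshold(log2_ratios, truth, thresholds):
    """Return the |log2 ratio| cutoff maximizing Youden's J = TPR - FPR (sketch)."""
    score = np.abs(np.asarray(log2_ratios, dtype=float))
    truth = np.asarray(truth, dtype=bool)
    best = max(
        ((t, (score >= t)[truth].mean() - (score >= t)[~truth].mean()) for t in thresholds),
        key=lambda pair: pair[1],
    )
    return best  # (threshold, Youden's J)

ratios = [0.05, 0.60, -0.70, 0.10, 0.02, -0.55, 0.40, 0.03]   # probe-level calls
truth  = [False, True, True, False, False, True, True, False]  # higher-resolution array
print(calibrate_log2_threshold(ratios, truth, np.linspace(0.1, 0.8, 8)))
```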
Process-Driven Ecological Modeling for Landscape Change Analysis
NASA Astrophysics Data System (ADS)
Altman, S.; Reif, M. K.; Swannack, T. M.
2013-12-01
Landscape pattern is an important driver in ecosystem dynamics and can control system-level functions such as nutrient cycling, connectivity, biodiversity and carbon sequestration. However, the links between process, pattern and function remain ambiguous. Understanding the quantitative relationship between ecological processes and landscape pattern across temporal and spatial scales is vital for successful management and implementation of ecosystem-level projects. We used remote sensing imagery to develop critical landscape metrics to understand the factors influencing landscape change. Our study area, a coastal area in southwest Florida, is highly dynamic with critically eroding beaches and a range of natural and developed land cover types. Hurricanes in 2004 and 2005 caused a breach along the coast of North Captiva Island that filled in by 2010. We used a time series of light detection and ranging (lidar) elevation data and hyperspectral imagery from 2006 and 2010 to determine land cover changes. Landscape level metrics used included: Largest Patch Index, Class Area, Area-weighted mean area, Clumpiness, Area-weighted Contiguity Index, Number of Patches, Percent of landcover, Area-weighted Shape. Our results showed 1) 27% increase in sand/soil class as the channel repaired itself and shoreline was reestablished, 2) 40% decrease in the mudflat class area due to conversion to sand/soil and water, 3) 30% increase in non-wetland vegetation class as a result of new vegetation around the repaired channel, and 4) the water class only slightly increased though there was a marked increase in the patch size area. Thus, the smaller channels disappeared with the infilling of the channel, leaving much larger, less complex patches behind the breach. Our analysis demonstrated that quantification of landscape pattern is critical to linking patterns to ecological processes and understanding how both affect landscape change. Our proof of concept indicated that ecological processes can correlate to landscape pattern and that ecosystem function changes significantly as pattern changes. However, the number of links between landscape metrics and ecological processes are highly variable. Extensively studied processes such as biodiversity can be linked to numerous landscape metrics. In contrast, correlations between sediment cycling and landscape pattern have only been evaluated for a limited number of metrics. We are incorporating these data into a relational database linking landscape and ecological patterns, processes and metrics. The database will be used to parameterize site-specific landscape evolution models projecting how landscape pattern will change as a result of future ecosystem restoration projects. The model is a spatially-explicit, grid-based model that projects changes in community composition based on changes in soil elevations. To capture scalar differences in landscape change, local and regional landscape metrics are analyzed at each time step and correlated with ecological processes to determine how ecosystem function changes with scale over time.
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl
2012-11-02
While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
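The abstract does not spell out which robust statistics SIMPATIQCO uses, so the following is only a plausible sketch of how an "adequate performance" band for a QC metric could be learned from past runs, using the median and scaled MAD with an assumed width factor:

```python
import numpy as np

def robust_qc_range(history, k=3.0):
    """Median +/- k * scaled-MAD band for a QC metric (illustrative sketch)."""
    x = np.asarray(history, dtype=float)
    med = float(np.median(x))
    mad = 1.4826 * float(np.median(np.abs(x - med)))   # robust analogue of the standard deviation
    return med - k * mad, med + k * mad

# toy history: median peak width (seconds) from recent LC-MS runs, one of them anomalous
print(robust_qc_range([12.1, 11.8, 12.4, 12.0, 12.2, 19.5, 11.9]))
```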
Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.
IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
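The abstract above does not reproduce the Standard 1366 formulas, but the method it evaluates is commonly summarized as the "2.5 beta" rule: daily SAIDI values are log-transformed, and any day whose SAIDI exceeds exp(alpha + 2.5*beta), where alpha and beta are the mean and standard deviation of the logged values, is flagged as a major event day. The sketch below illustrates that rule on hypothetical daily SAIDI data; the function name, the synthetic data, and the single-year window are illustrative assumptions (the standard nominally uses several years of history).

```python
import numpy as np

def major_event_days(daily_saidi, k=2.5):
    """Flag major event days with the IEEE 1366 '2.5 beta' approach (sketch).

    daily_saidi: daily SAIDI values (minutes per customer per day).
    Zero-SAIDI days are excluded from the log statistics.
    """
    daily_saidi = np.asarray(daily_saidi, dtype=float)
    logs = np.log(daily_saidi[daily_saidi > 0])
    alpha, beta = logs.mean(), logs.std(ddof=1)   # mean and std of log SAIDI
    t_med = np.exp(alpha + k * beta)              # major event day threshold
    return daily_saidi > t_med, t_med

# Hypothetical year of mostly quiet days plus one simulated storm day.
rng = np.random.default_rng(0)
saidi = np.exp(rng.normal(-1.0, 0.8, size=365))
saidi[200] = 50.0
flags, threshold = major_event_days(saidi)
print(f"T_MED = {threshold:.2f} min/day, major event days flagged = {int(flags.sum())}")
```

Reducing year-to-year SAIDI variability, as discussed in the abstract, then amounts to recomputing annual SAIDI with the flagged days removed and comparing its spread across years.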
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, B K; Glenzer, S; Edwards, M J
The National Ignition Campaign (NIC) uses non-igniting 'THD' capsules to study and optimize the hydrodynamic assembly of the fuel without burn. These capsules are designed to simultaneously reduce DT neutron yield and to maintain hydrodynamic similarity with the DT ignition capsule. We will discuss nominal THD performance and the associated experimental observables. We will show the results of large ensembles of numerical simulations of THD and DT implosions and their simulated diagnostic outputs. These simulations cover a broad range of both nominal and off nominal implosions. We will focus on the development of an experimental implosion performance metric called the experimental ignition threshold factor (ITFX). We will discuss the relationship between ITFX and other integrated performance metrics, including the ignition threshold factor (ITF), the generalized Lawson criterion (GLC), and the hot spot pressure (HSP). We will then consider the experimental results of the recent NIC THD campaign. We will show that we can observe the key quantities for producing a measured ITFX and for inferring the other performance metrics. We will discuss trends in the experimental data, improvement in ITFX, and briefly the upcoming tuning campaign aimed at taking the next steps in performance improvement on the path to ignition on NIF.
Thermodynamic efficiency of nonimaging concentrators
NASA Astrophysics Data System (ADS)
Shatz, Narkis; Bortz, John; Winston, Roland
2009-08-01
The purpose of a nonimaging concentrator is to transfer maximal flux from the phase space of a source to that of a target. A concentrator's performance can be expressed relative to a thermodynamic reference. We discuss consequences of Fermat's principle of geometrical optics. We review étendue dilution and optical loss mechanisms associated with nonimaging concentrators, especially for the photovoltaic (PV) role. We introduce the concept of optical thermodynamic efficiency which is a performance metric combining the first and second laws of thermodynamics. The optical thermodynamic efficiency is a comprehensive metric that takes into account all loss mechanisms associated with transferring flux from the source to the target phase space, which may include losses due to inadequate design, non-ideal materials, fabrication errors, and less than maximal concentration. As such, this metric is a gold standard for evaluating the performance of nonimaging concentrators. Examples are provided to illustrate the use of this new metric. In particular we discuss concentrating PV systems for solar power applications.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. This system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
A novel patient-centered "intention-to-treat" metric of U.S. lung transplant center performance.
Maldonado, Dawn A; RoyChoudhury, Arindam; Lederer, David J
2018-01-01
Despite the importance of pretransplantation outcomes, 1-year posttransplantation survival is typically considered the primary metric of lung transplant center performance in the United States. We designed a novel lung transplant center performance metric that incorporates both pre- and posttransplantation survival time. We performed an ecologic study of 12 187 lung transplant candidates listed at 56 U.S. lung transplant centers between 2006 and 2012. We calculated an "intention-to-treat" survival (ITTS) metric as the percentage of waiting list candidates surviving at least 1 year after transplantation. The median center-level 1-year posttransplantation survival rate was 84.1%, and the median center-level ITTS was 66.9% (mean absolute difference 19.6%, 95% limits of agreement 4.3 to 35.1%). All but 10 centers had ITTS values that were significantly lower than 1-year posttransplantation survival rates. Observed ITTS was significantly lower than expected ITTS for 7 centers. These data show that one third of lung transplant candidates do not survive 1 year after transplantation, and that 12% of centers have lower than expected ITTS. An "intention-to-treat" survival metric may provide a more realistic expectation of patient outcomes at transplant centers and may be of value to transplant centers and policymakers. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. There are many algorithms that detect anomalies (outliers) in videos and images that have been introduced in recent years. However, these algorithms behave and perform differently based on differences in domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. There are many evaluation metrics that have been used in the literature such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
NERC Policy 10: Measurement of two generation and load balancing IOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spicer, P.J.; Galow, G.G.
1999-11-01
Policy 10 will describe specific standards and metrics for most of the reliability functions described in the Interconnected Operations Services Working Group (IOS WG) report. The purpose of this paper is to discuss, in detail, the proposed metrics for two generation and load balancing IOSs: Regulation; Load Following. For purposes of this paper, metrics include both measurement and performance evaluation. The measurement methods discussed are included in the current draft of the proposed Policy 10. The performance evaluation method discussed is offered by the authors for consideration by the IOS ITF (Implementation Task Force) for inclusion into Policy 10.
Analysis of complex network performance and heuristic node removal strategies
NASA Astrophysics Data System (ADS)
Jahanpour, Ehsan; Chen, Xin
2013-12-01
Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
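As a concrete illustration of the kind of structured removal strategies described above, the sketch below ranks nodes by degree or betweenness centrality (two of the centralities named in the abstract) and removes them one at a time. It tracks only the size of the largest connected component as a simple performance proxy, not the six diffusion and homogeneity metrics used in the paper, and it runs on a synthetic graph rather than the hijackers' network; both are assumptions made for illustration.

```python
import networkx as nx

def removal_curve(graph, centrality_fn, n_remove):
    """Remove nodes ranked by a centrality metric and track a simple
    performance proxy (size of the largest connected component)."""
    g = graph.copy()
    sizes = []
    for _ in range(n_remove):
        scores = centrality_fn(g)
        target = max(scores, key=scores.get)   # most central remaining node
        g.remove_node(target)
        sizes.append(len(max(nx.connected_components(g), key=len)) if len(g) > 0 else 0)
    return sizes

# Hypothetical 62-node network standing in for the covert network studied above.
g = nx.barabasi_albert_graph(62, 2, seed=1)
print("degree strategy     :", removal_curve(g, nx.degree_centrality, 5))
print("betweenness strategy:", removal_curve(g, nx.betweenness_centrality, 5))
```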
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
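The abstract describes UCIQE as a linear combination of chroma, saturation, and contrast statistics but does not list the exact definitions or weights here. The sketch below computes a UCIQE-style score in CIELab; the particular saturation formula, the 1st/99th-percentile luminance contrast, and the coefficients (0.4680, 0.2745, 0.2576, values often quoted for UCIQE) are assumptions, and the test image is a stock photograph rather than an underwater frame.

```python
import numpy as np
from skimage import color, data

def uciqe_like(rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    """Rough UCIQE-style score: weighted sum of chroma std, luminance
    contrast, and mean saturation in CIELab (coefficients assumed)."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a**2 + b**2)
    sigma_c = chroma.std()                        # colourfulness term
    lo, hi = np.percentile(L, [1, 99])
    con_l = (hi - lo) / 100.0                     # luminance contrast term
    saturation = chroma / (np.sqrt(chroma**2 + L**2) + 1e-8)
    mu_s = saturation.mean()                      # saturation term
    return c1 * sigma_c + c2 * con_l + c3 * mu_s

print(f"score = {uciqe_like(data.astronaut() / 255.0):.3f}")
```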
NASA Astrophysics Data System (ADS)
Madison, Jonathan D.; Underwood, Olivia D.; Swiler, Laura P.; Boyce, Brad L.; Jared, Bradley H.; Rodelas, Jeff M.; Salzbrenner, Bradley C.
2018-04-01
The intrinsic relation between structure and performance is a foundational tenet of almost all materials science investigations. While the specific form of this relation is dictated by material system, processing route and performance metric of interest, it is widely agreed that appropriate characterization of a material allows for greater accuracy in understanding and/or predicting material response. However, in the context of additive manufacturing, prior models and expectations of material performance must be revisited as performance often diverges from traditional values, even among well explored material systems. This work utilizes micro-computed tomography to quantify porosity and lack of fusion defects in an additively manufactured stainless steel and relates these metrics to performance across a statistically significant population using high-throughput mechanical testing. The degree to which performance in additively manufactured stainless steel can and cannot be correlated to detectable porosity will be presented and suggestions for performing similar experiments will be provided.
File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering
2013-03-21
Excerpt (3.5 Performance Metrics; 3.6 Experimental): ... consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then ... the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change.
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric in a non-traditional way indirectly, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and a visually and PSNR-optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
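For readers unfamiliar with the objective metrics mentioned above, the snippet below shows how SSIM and PSNR are typically computed between an original and a degraded image using scikit-image. The coarse quantisation used as the "codec" is a stand-in assumption; a study like the one described would decode actual JPEG, JPEG2000, or HDPhoto outputs instead.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

original = img_as_float(data.camera())           # reference grayscale image in [0, 1]
degraded = np.round(original * 16) / 16          # coarse quantisation as a codec stand-in

ssim = structural_similarity(original, degraded, data_range=1.0)
psnr = peak_signal_noise_ratio(original, degraded, data_range=1.0)
print(f"SSIM = {ssim:.4f}, PSNR = {psnr:.2f} dB")
```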
DeJournett, Jeremy; DeJournett, Leon
2017-11-01
Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585 604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), mild hypoglycemia (40-69 mg/dL), normoglycemia (70-140 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long term predictions. This feature of particle filters works in large part due to the fact that they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds for "well-designed" particle filters only as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
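To make the idea of "model parameters as part of the state vector" concrete, here is a minimal particle filter in which an unknown degradation-rate parameter is appended to the state and estimated jointly with it. The linear toy model, noise levels, and particle count are illustrative assumptions and are unrelated to the Li-ion battery models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degradation model: x[k+1] = x[k] - theta + process noise, observed with noise.
true_theta = 0.05
x_true, y_obs = 1.0, []
for _ in range(60):
    x_true = x_true - true_theta + rng.normal(0, 0.005)
    y_obs.append(x_true + rng.normal(0, 0.02))

n = 2000
particles = np.column_stack([
    np.full(n, 1.0),               # state x
    rng.uniform(0.0, 0.2, n),      # unknown parameter theta, carried in the state vector
])

for y in y_obs:
    # Propagate: each particle evolves with its own theta; theta does a small random walk.
    particles[:, 0] += -particles[:, 1] + rng.normal(0, 0.005, n)
    particles[:, 1] += rng.normal(0, 0.002, n)
    # Weight by observation likelihood, then resample.
    w = np.exp(-0.5 * ((y - particles[:, 0]) / 0.02) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n, n, p=w)]

print(f"estimated theta = {particles[:, 1].mean():.4f} (true {true_theta})")
```

The small random walk on the parameter is what lets the filter adapt the model online; choosing its variance is exactly the sort of design decision the paper's sensitivity analysis would weigh.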
Jordan R. Mayor; Edward A.G. Schuur; Michelle C. Mack; Teresa N. Hollingsworth; Erland Bääth
2012-01-01
Global patterns in soil, plant, and fungal stable isotopes of N (15N) show promise as integrated metrics of N cycling, particularly the activity of ectomycorrhizal (ECM) fungi. At small spatial scales, however, it remains difficult to differentiate the underlying causes of plant 15N variability and this limits the...
Prediction of understory vegetation cover with airborne lidar in an interior ponderosa pine forest
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Alix Gitelman; Michael J. Olsen
2012-01-01
Forest understory communities are important components in forest ecosystems providing wildlife habitat and influencing nutrient cycling, fuel loadings, fire behavior and tree species composition over time. One of the most widely utilized understory component metrics is understory vegetation cover, often used as a measure of vegetation abundance. To date, understory...
Measurements of key life history metrics of Coho salmon in Pudding Creek, California
David W. Wright; Sean P. Gallagher; Christopher J. Hannon
2012-01-01
Since 2005, a life cycle monitoring project in Pudding Creek, California, has utilized a variety of methodologies including an adult trap, spawning surveys, PIT tags, electro-fishing, and a smolt trap to estimate coho salmon adult escapement, juvenile abundance, juvenile growth, winter survival, and marine survival. Adult coho salmon escapement and smolt abundance are...
Thermal Performance Benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin
2016-06-07
The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.
Load-embedded inertial measurement unit reveals lifting performance.
Tammana, Aditya; McKay, Cody; Cain, Stephen M; Davidson, Steven P; Vitali, Rachel V; Ojeda, Lauro; Stirling, Leia; Perkins, Noel C
2018-07-01
Manual lifting of loads arises in many occupations as well as in activities of daily living. Prior studies explore lifting biomechanics and conditions implicated in lifting-induced injuries through laboratory-based experimental methods. This study introduces a new measurement method using load-embedded inertial measurement units (IMUs) to evaluate lifting tasks in varied environments outside of the laboratory. An example vertical load lifting task is considered that is included in an outdoor obstacle course. The IMU data, in the form of the load acceleration and angular velocity, is used to estimate load vertical velocity and three lifting performance metrics: the lifting time (speed), power, and motion smoothness. Large qualitative differences in these parameters distinguish exemplar high and low performance trials. These differences are further supported by subsequent statistical analyses of twenty-three trials (including a total of 115 lift/lower cycles) from fourteen healthy participants. Results reveal that lifting time is strongly correlated with lifting power (as expected) but also correlated with motion smoothness. Thus, participants who lift rapidly do so with significantly greater power using motions that minimize motion jerk. Copyright © 2018 Elsevier Ltd. All rights reserved.
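The abstract names three load-IMU metrics (lifting time, power, and motion smoothness) without giving their formulas. The sketch below shows one plausible way to derive such quantities from a vertical acceleration trace: integrate to velocity, form a mechanical power proxy, and score smoothness from jerk. The synthetic signal, the assumed 20 kg load mass, the 100 Hz sample rate, and the specific smoothness definition are all assumptions, not the paper's definitions.

```python
import numpy as np

fs = 100.0                                   # assumed sample rate of the load IMU (Hz)
t = np.arange(0, 2.0, 1 / fs)
accel_z = 2.0 * np.sin(2 * np.pi * 0.5 * t)  # stand-in vertical load acceleration (m/s^2)

velocity = np.cumsum(accel_z) / fs           # integrate acceleration -> vertical velocity
lift_time = t[-1] - t[0]                     # duration of the lift segment
mass = 20.0                                  # assumed load mass (kg)
power = np.mean(mass * np.abs(accel_z + 9.81) * np.abs(velocity))  # mean mechanical power proxy

jerk = np.gradient(accel_z, 1 / fs)          # jerk = time derivative of acceleration
smoothness = -np.log(np.mean(jerk ** 2))     # higher value = smoother motion (proxy score)

print(f"lift time {lift_time:.2f} s, power proxy {power:.1f} W, smoothness {smoothness:.2f}")
```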
Making the Case for Objective Performance Metrics in Newborn Screening by Tandem Mass Spectrometry
ERIC Educational Resources Information Center
Rinaldo, Piero; Zafari, Saba; Tortorelli, Silvia; Matern, Dietrich
2006-01-01
The expansion of newborn screening programs to include multiplex testing by tandem mass spectrometry requires understanding and close monitoring of performance metrics. This is not done consistently because of lack of defined targets, and interlaboratory comparison is almost nonexistent. Between July 2004 and April 2006 (N = 176,185 cases), the…
Krieger, Jonathan D
2014-08-01
I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.
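For reference, the standard baseline the author compares against, the circularity shape factor, is 4*pi*A/P^2, which equals 1 for a perfect circle and decreases for elongated or lobed outlines. Below is a minimal sketch of that baseline (not the new geometric circularity metric) applied to synthetic outlines.

```python
import numpy as np

def circularity_shape_factor(x, y):
    """Circularity shape factor 4*pi*A / P^2 for a closed outline given as
    ordered x, y coordinates (1.0 for a perfect circle)."""
    x, y = np.asarray(x), np.asarray(y)
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace formula
    perim = np.sum(np.hypot(np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])))  # closed perimeter
    return 4 * np.pi * area / perim**2

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(f"circle : {circularity_shape_factor(np.cos(theta), np.sin(theta)):.3f}")
print(f"ellipse: {circularity_shape_factor(2 * np.cos(theta), np.sin(theta)):.3f}")
```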
First results from a combined analysis of CERN computing infrastructure metrics
NASA Astrophysics Data System (ADS)
Duellmann, Dirk; Nieke, Christian
2017-10-01
The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year) correlating box level metrics, job level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS) and networking and application level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
NASA Astrophysics Data System (ADS)
Jonsson, Rickard M.
2005-03-01
I present a way to visualize the concept of curved spacetime. The result is a curved surface with local coordinate systems (Minkowski systems) living on it, giving the local directions of space and time. Relative to these systems, special relativity holds. The method can be used to visualize gravitational time dilation, the horizon of black holes, and cosmological models. The idea underlying the illustrations is first to specify a field of timelike four-velocities uμ. Then, at every point, one performs a coordinate transformation to a local Minkowski system comoving with the given four-velocity. In the local system, the sign of the spatial part of the metric is flipped to create a new metric of Euclidean signature. The new positive definite metric, called the absolute metric, can be covariantly related to the original Lorentzian metric. For the special case of a two-dimensional original metric, the absolute metric may be embedded in three-dimensional Euclidean space as a curved surface.
NASA Astrophysics Data System (ADS)
Zaherpour, Jamal; Gosling, Simon N.; Mount, Nick; Müller Schmied, Hannes; Veldkamp, Ted I. E.; Dankers, Rutger; Eisner, Stephanie; Gerten, Dieter; Gudmundsson, Lukas; Haddeland, Ingjerd; Hanasaki, Naota; Kim, Hyungjun; Leng, Guoyong; Liu, Junguo; Masaki, Yoshimitsu; Oki, Taikan; Pokhrel, Yadu; Satoh, Yusuke; Schewe, Jacob; Wada, Yoshihide
2018-06-01
Global-scale hydrological models are routinely used to assess water scarcity, flood hazards and droughts worldwide. Recent efforts to incorporate anthropogenic activities in these models have enabled more realistic comparisons with observations. Here we evaluate simulations from an ensemble of six models participating in the second phase of the Inter-Sectoral Impact Model Inter-comparison Project (ISIMIP2a). We simulate monthly runoff in 40 catchments, spatially distributed across eight global hydrobelts. The performance of each model and the ensemble mean is examined with respect to their ability to replicate observed mean and extreme runoff under human-influenced conditions. Application of a novel integrated evaluation metric to quantify the models' ability to simulate time series of monthly runoff suggests that the models generally perform better in the wetter equatorial and northern hydrobelts than in drier southern hydrobelts. When model outputs are temporally aggregated to assess mean annual and extreme runoff, the models perform better. Nevertheless, we find a general trend in the majority of models towards the overestimation of mean annual runoff and all indicators of upper and lower extreme runoff. The models struggle to capture the timing of the seasonal cycle, particularly in northern hydrobelts, while in southern hydrobelts the models struggle to reproduce the magnitude of the seasonal cycle. It is noteworthy that over all hydrological indicators, the ensemble mean fails to perform better than any individual model, a finding that challenges the commonly held perception that model ensemble estimates deliver superior performance over individual models. The study highlights the need for continued model development and improvement. It also suggests that caution should be taken when summarising the simulations from a model ensemble based upon its mean output.
On Information Metrics for Spatial Coding.
Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L
2018-04-01
The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
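The bits-per-spike measure referenced above (Skaggs et al.) is I = sum_i p_i (lambda_i / lambda_bar) log2(lambda_i / lambda_bar), where p_i is the occupancy probability of spatial bin i, lambda_i its firing rate, and lambda_bar the overall mean rate. Below is a minimal sketch on a hypothetical place cell; the binning, occupancy, and spike counts are invented for illustration.

```python
import numpy as np

def skaggs_information(occupancy_s, spike_counts):
    """Spatial information in bits/spike:
    I = sum_i p_i * (lambda_i / lambda_bar) * log2(lambda_i / lambda_bar)."""
    occupancy_s = np.asarray(occupancy_s, dtype=float)
    rates = np.asarray(spike_counts, dtype=float) / occupancy_s   # firing rate per spatial bin
    p = occupancy_s / occupancy_s.sum()                           # occupancy probability per bin
    mean_rate = np.sum(p * rates)
    ratio = rates / mean_rate
    nz = ratio > 0                                                # treat 0*log(0) as 0
    return np.sum(p[nz] * ratio[nz] * np.log2(ratio[nz]))

# Hypothetical place cell firing almost exclusively in 2 of 20 spatial bins.
occupancy = np.full(20, 1.0)
spikes = np.zeros(20); spikes[[8, 9]] = 30
print(f"{skaggs_information(occupancy, spikes):.2f} bits/spike")
```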
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
Merhi, Zaher O; Keltz, Julia; Zapantis, Athena; Younger, Joshua; Berger, Dara; Lieman, Harry J; Jindal, Sangita K; Polotsky, Alex J
2013-08-01
Male adiposity is detrimental for achieving clinical pregnancy rate (CPR) following assisted reproductive technologies (ART). The hypothesis that the association of male adiposity with decreased success following ART is mediated by worse embryo quality was tested. A retrospective study including 344 infertile couples undergoing in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) cycles was performed. Cycle determinants included number of oocytes retrieved, zygote PN-score, total number of embryos available on day 3, number of embryos transferred, composite day 3 grade for transferred embryos, composite day 3 grade per cycle, and CPR. Couples with male body mass index (BMI) over 25 kg/m2 (overweight and obese) exhibited significantly lower CPR compared to their normal weight counterparts (46.7% vs. 32.0%, respectively, P = 0.02). No significant difference was observed for any embryo quality metrics when analyzed by male BMI: mean zygote PN-scores, mean composite day 3 grades for transferred embryos or composite day 3 grades per cycle. In a multivariable logistic regression analysis adjusting for female age, female BMI, number of embryos transferred and sperm concentration, male BMI over 25 kg/m2 was associated with a lower chance for CPR after IVF (OR = 0.17 [95% CI: 0.04-0.65]; P = 0.01) but not after ICSI cycles (OR = 0.88 [95% CI: 0.41-1.88]; P = 0.75). In this cohort, male adiposity was associated with decreased CPR following IVF but embryo quality was not affected. Embryo grading based on conventional morphologic criteria does not explain the poorer clinical pregnancy outcomes seen in couples with an overweight or obese male partner. Copyright © 2013 The Obesity Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomizawa, Shinya; Nozawa, Masato
2006-06-15
We study vacuum solutions of five-dimensional Einstein equations generated by the inverse scattering method. We reproduce the black ring solution which was found by Emparan and Reall by taking the Euclidean Levi-Civita metric plus one-dimensional flat space as a seed. This transformation consists of two successive processes; the first step is to perform the three-solitonic transformation of the Euclidean Levi-Civita metric with one-dimensional flat space as a seed. The resulting metric is the Euclidean C-metric with extra one-dimensional flat space. The second is to perform the two-solitonic transformation by taking it as a new seed. Our result may serve as a stepping stone to find new exact solutions in higher dimensions.
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
Chrol-Cannon, Joseph; Jin, Yaochu
2014-01-01
Reservoir computing provides a simpler paradigm of training recurrent networks by initialising and adapting the recurrent connections separately to a supervised linear readout. This creates a problem, though. As the recurrent weights and topology are now separated from adapting to the task, there is a burden on the reservoir designer to construct an effective network that happens to produce state vectors that can be mapped linearly into the desired outputs. Guidance in forming a reservoir can be through the use of some established metrics which link a number of theoretical properties of the reservoir computing paradigm to quantitative measures that can be used to evaluate the effectiveness of a given design. We provide a comprehensive empirical study of four metrics; class separation, kernel quality, Lyapunov's exponent and spectral radius. These metrics are each compared over a number of repeated runs, for different reservoir computing set-ups that include three types of network topology and three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with the classification performance are Lyapunov's exponent and kernel quality. It is also evident in the comparisons that these two metrics both measure a similar property of the reservoir dynamics. We also find that class separation and spectral radius are both less reliable and less effective in predicting performance.
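Of the four reservoir metrics compared above, the spectral radius is the simplest to state: it is the largest absolute eigenvalue of the recurrent weight matrix, and reservoirs are routinely rescaled so that it sits near a chosen target. The sketch below shows that computation and rescaling on a random sparse reservoir; the reservoir size, sparsity, and target value of 0.95 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_radius(w):
    """Largest absolute eigenvalue of the recurrent weight matrix."""
    return np.max(np.abs(np.linalg.eigvals(w)))

# Initialise a random reservoir and rescale it to a target spectral radius,
# a common heuristic for placing the dynamics near the "edge of chaos".
n = 200
w = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)   # ~10% sparse recurrent weights
target = 0.95
w *= target / spectral_radius(w)
print(f"spectral radius after rescaling: {spectral_radius(w):.3f}")
```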
Andrew Taylor, R; Venkatesh, Arjun; Parwani, Vivek; Chekijian, Sharon; Shapiro, Marc; Oh, Andrew; Harriman, David; Tarabar, Asim; Ulrich, Andrew
2018-01-04
Emergency Department (ED) leaders are increasingly confronted with large amounts of data with the potential to inform and guide operational decisions. Routine use of advanced analytic methods may provide additional insights. To examine the practical application of available advanced analytic methods to guide operational decision making around patient boarding. Retrospective analysis of the effect of boarding on ED operational metrics from a single site between 1/2015 and 1/2017. Time series were visualized through decompositional techniques accounting for seasonal trends, to determine the effect of boarding on ED performance metrics and to determine the impact of boarding "shocks" to the system on operational metrics over several days. There were 226,461 visits, and the mean (IQR) number of visits per day was 273 (258-291). Decomposition of the boarding count time series illustrated an upward trend in the last 2-3 quarters as well as clear seasonal components. All performance metrics were significantly impacted (p<0.05) by boarding count, except for overall Press Ganey scores (p<0.65). For every additional increase in boarder count, overall length-of-stay (LOS) increased by 1.55 min (0.68, 1.50). Smaller effects were seen for waiting room LOS and treat and release LOS. The impulse responses indicate that the boarding shocks are characterized by changes in the performance metrics within the first day that fade out after 4-5 days. In this study regarding the use of advanced analytics in daily ED operations, time series analysis provided multiple useful insights into boarding and its impact on performance metrics. Copyright © 2018. Published by Elsevier Inc.
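The decomposition-based visualisation described above can be reproduced with a standard seasonal decomposition. The sketch below applies statsmodels' seasonal_decompose to a synthetic daily boarder-count series with a weekly cycle and an upward trend; the series, the additive model, and the 7-day period are assumptions standing in for the site's actual data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical daily boarder counts with a weekly cycle and a slow upward trend.
rng = np.random.default_rng(0)
days = pd.date_range("2015-01-01", periods=730, freq="D")
boarders = (10 + 0.01 * np.arange(730)                       # slow upward trend
            + 3 * np.sin(2 * np.pi * np.arange(730) / 7)     # weekly seasonality
            + rng.normal(0, 1.5, 730))
series = pd.Series(boarders, index=days)

result = seasonal_decompose(series, model="additive", period=7)
print(result.trend.dropna().tail())      # trend component
print(result.seasonal.head(7))           # one weekly seasonal cycle
```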
Life Cycle Assessment and Cost Analysis of Water and ...
changes in drinking and wastewater infrastructure need to incorporate a holistic view of the water service sustainability tradeoffs and potential benefits when considering shifts towards new treatment technology, decentralized systems, energy recovery and reuse of treated wastewater. The main goal of this study is to determine the influence of scale on the energy and cost performance of different transitional membrane bioreactors (MBR) in decentralized wastewater treatment (WWT) systems by performing a life cycle assessment (LCA) and cost analysis. LCA is a tool used to quantify sustainability-related metrics from a systems perspective. The study calculates the environmental and cost profiles of both aerobic MBRs (AeMBR) and anaerobic MBRs (AnMBR), which not only recover energy from waste, but also produce recycled water that can displace potable water for uses such as irrigation and toilet flushing. MBRs represent an intriguing technology to provide decentralized WWT services while maximizing resource recovery. A number of scenarios for these WWT technologies are investigated for different scale systems serving various population density and land area combinations to explore the ideal application potentials. MBR systems are examined from 0.05 million gallons per day (MGD) to 10 MGD and serve land use types from high density urban (100,000 people per square mile) to semi-rural single family (2,000 people per square mile). The LCA and cost model was built with ex
A decision-making framework for total ownership cost management of complex systems: A Delphi study
NASA Astrophysics Data System (ADS)
King, Russel J.
This qualitative study, using a modified Delphi method, was conducted to develop a decision-making framework for the total ownership cost management of complex systems in the aerospace industry. The primary focus of total ownership cost is to look beyond the purchase price when evaluating complex system life cycle alternatives. A thorough literature review and the opinions of a group of qualified experts resulted in a compilation of total ownership cost best practices, cost drivers, key performance factors, applicable assessment methods, practitioner credentials and potential barriers to effective implementation. The expert panel provided responses to the study questions using a 5-point Likert-type scale. Data were analyzed and provided to the panel members for review and discussion with the intent to achieve group consensus. As a result of the study, the experts agreed that a total ownership cost analysis should (a) be as simple as possible using historical data; (b) establish cost targets, metrics, and penalties early in the program; (c) monitor the targets throughout the product lifecycle and revise them as applicable historical data becomes available; and (d) directly link total ownership cost elements with other success factors during program development. The resultant study framework provides the business leader with incentives and methods to develop and implement strategies for controlling and reducing total ownership cost over the entire product life cycle when balancing cost, schedule, and performance decisions.
Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars
2017-06-01
The societies of thoracic surgery are working to incorporate simulation and competency-based assessment into specialty training. One challenge is the development of a simulation-based test, which can be used as an assessment tool. The study objective was to establish validity evidence for a virtual reality simulator test of a video-assisted thoracoscopic surgery (VATS) lobectomy of a right upper lobe. Participants with varying experience in VATS lobectomy were included. They were familiarized with a virtual reality simulator (LapSim ® ) and introduced to the steps of the procedure for a VATS right upper lobe lobectomy. The participants performed two VATS lobectomies on the simulator with a 5-min break between attempts. Nineteen pre-defined simulator metrics were recorded. Fifty-three participants from nine different countries were included. High internal consistency was found for the metrics with Cronbach's alpha coefficient for standardized items of 0.91. Significant test-retest reliability was found for 15 of the metrics (p-values <0.05). Significant correlations between the metrics and the participants VATS lobectomy experience were identified for seven metrics (p-values <0.001), and 10 metrics showed significant differences between novices (0 VATS lobectomies performed) and experienced surgeons (>50 VATS lobectomies performed). A pass/fail level defined as approximately one standard deviation from the mean metric scores for experienced surgeons passed none of the novices (0 % false positives) and failed four of the experienced surgeons (29 % false negatives). This study is the first to establish validity evidence for a VATS right upper lobe lobectomy virtual reality simulator test. Several simulator metrics demonstrated significant differences between novices and experienced surgeons and pass/fail criteria for the test were set with acceptable consequences. This test can be used as a first step in assessing thoracic surgery trainees' VATS lobectomy competency.
Kumar, B. Vinodh; Mohan, Thuthi
2018-01-01
OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes <6 sigma level, the quality goal index (QGI) was <0.8, indicating the area requiring improvement to be imprecision, except cholesterol, whose QGI >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
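The two quantities driving the conclusions above are commonly computed as sigma = (TEa - |bias|) / CV and the quality goal index QGI = |bias| / (1.5 * CV), with QGI below 0.8 read as imprecision and above 1.2 as inaccuracy. A minimal sketch with hypothetical numbers (the 10% allowable total error, 2% bias, and 1.5% CV are invented for illustration):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = |bias| / (1.5 * CV); <0.8 suggests imprecision, >1.2 inaccuracy."""
    return abs(bias_pct) / (1.5 * cv_pct)

# Hypothetical analyte: total error allowed 10%, bias 2%, CV 1.5%.
print(f"sigma = {sigma_metric(10, 2, 1.5):.1f}")
print(f"QGI   = {quality_goal_index(2, 1.5):.2f}")
```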
Raza, Ali S.; Zhang, Xian; De Moraes, Carlos G. V.; Reisman, Charles A.; Liebmann, Jeffrey M.; Ritch, Robert; Hood, Donald C.
2014-01-01
Purpose. To improve the detection of glaucoma, techniques for assessing local patterns of damage and for combining structure and function were developed. Methods. Standard automated perimetry (SAP) and frequency-domain optical coherence tomography (fdOCT) data, consisting of macular retinal ganglion cell plus inner plexiform layer (mRGCPL) as well as macular and optic disc retinal nerve fiber layer (mRNFL and dRNFL) thicknesses, were collected from 52 eyes of 52 healthy controls and 156 eyes of 96 glaucoma suspects and patients. In addition to generating simple global metrics, SAP and fdOCT data were searched for contiguous clusters of abnormal points and converted to a continuous metric (pcc). The pcc metric, along with simpler methods, was used to combine the information from the SAP and fdOCT. The performance of different methods was assessed using the area under receiver operator characteristic curves (AROC scores). Results. The pcc metric performed better than simple global measures for both the fdOCT and SAP. The best combined structure-function metric (mRGCPL&SAP pcc, AROC = 0.868 ± 0.032) was better (statistically significant) than the best metrics for independent measures of structure and function. When SAP was used as part of the inclusion and exclusion criteria, AROC scores increased for all metrics, including the best combined structure-function metric (AROC = 0.975 ± 0.014). Conclusions. A combined structure-function metric improved the detection of glaucomatous eyes. Overall, the primary sources of value-added for glaucoma detection stem from the continuous cluster search (the pcc), the mRGCPL data, and the combination of structure and function. PMID:24408977
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Software Quality Metrics Enhancements. Volume 1
1980-04-01
Normalization functions (the mathematical relationships which relate metrics to ratings of the various quality factors) for factors which were not validated previously were ... function, provides a mathematical relationship between the metrics and the quality factors. (3) Validation of these normalization functions was performed by ... samples; further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date (3.3.3).
40 CFR 63.606 - Performance tests and compliance provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...
40 CFR 63.606 - Performance tests and compliance provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...
40 CFR 63.626 - Performance tests and compliance provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...
40 CFR 63.606 - Performance tests and compliance provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...
40 CFR 63.626 - Performance tests and compliance provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...
40 CFR 63.626 - Performance tests and compliance provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...
Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc
2014-09-25
There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.
Madison, Guy
2014-03-01
Timing performance becomes less precise for longer intervals, which makes it difficult to achieve simultaneity in synchronisation with a rhythm. The metrical structure of music, characterised by hierarchical levels of binary or ternary subdivisions of time, may function to increase precision by providing additional timing information when the subdivisions are explicit. This hypothesis was tested by comparing synchronisation performance across different numbers of metrical levels conveyed by loudness of sounds, such that the slowest level was loudest and the fastest was softest. Fifteen participants moved their hand with one of 9 inter-beat intervals (IBIs) ranging from 524 to 3,125 ms in 4 metrical level (ML) conditions ranging from 1 (one movement for each sound) to 4 (one movement for every 8th sound). The lowest relative variability (SD/IBI < 1.5%) was obtained for the 3 longest IBIs (1,600-3,125 ms) and MLs 3-4, significantly less than the smallest value (4-5% at 524-1,024 ms) for any ML 1 condition in which all sounds are identical. Asynchronies were also more negative with higher ML. In conclusion, metrical subdivision provides information that facilitates temporal performance, which suggests an underlying neural multi-level mechanism capable of integrating information across levels. © 2013.
NASA Astrophysics Data System (ADS)
Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik
2015-06-01
As technology and internet use grows at an exponential rate, video and imagery data is becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as content-based image retrieval (CBIR). Imagery data is segmented and automatically analyzed and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground regions statistics, as well as split and merge of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
Defining and quantifying users' mental Imagery-based BCI skills: a first step.
Lotte, Fabien; Jeunet, Camille
2018-05-17
While promising for many applications, Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are still scarcely used outside laboratories due to poor reliability. It is thus necessary to study and fix this reliability issue. Doing so requires appropriate reliability metrics to quantify the performance of both the classification algorithm and the BCI user. So far, Classification Accuracy (CA) is the typical metric used for both aspects. However, we argue in this paper that CA is a poor metric for studying BCI users' skills. Here, we propose a definition and new metrics to quantify such BCI skills for Mental Imagery (MI) BCIs, independently of any classification algorithm. Approach: We first show in this paper that CA is notably unspecific, discrete, training-data and classifier dependent, and as such may not always reflect successful self-modulation of EEG patterns by the user. We then propose a definition of MI-BCI skills that reflects how well the user can self-modulate EEG patterns, and thus how well they could control an MI-BCI. Finally, we propose new performance metrics, classDis, restDist and classStab, which specifically measure how distinct and stable the EEG patterns produced by the user are, independently of any classifier. Main results: By re-analyzing EEG data sets with these new metrics, we confirmed that CA may hide increases in MI-BCI skills or hide the user's inability to self-modulate a given EEG pattern. On the other hand, our new metrics could reveal such skill improvements as well as identify when a mental task performed by a user was no different from rest EEG. Significance: Our results showed that when studying MI-BCI users' skills, CA should be used with care and complemented with metrics such as the new ones proposed. Our results also stressed the need to redefine BCI user training by considering the different BCI subskills and their measures. To promote the complementary use of our new metrics, we provide free, open-source Matlab code to compute them. © 2018 IOP Publishing Ltd.
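An illustrative sketch of the general idea of a classifier-independent class-distinctiveness measure: the distance between the mean feature vectors of two mental states, normalized by their pooled within-class spread. This is an assumption-level stand-in, not the authors' exact classDis, restDist, or classStab formulas; all names and data below are invented.

```python
import numpy as np

def class_distinctiveness(feats_a, feats_b):
    """Distance between class means in units of pooled within-class spread."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    pooled_std = np.sqrt(0.5 * (feats_a.var(axis=0) + feats_b.var(axis=0)))
    d = (mu_a - mu_b) / (pooled_std + 1e-12)
    return np.linalg.norm(d) / np.sqrt(len(d))

rng = np.random.default_rng(1)
imagery = rng.normal(0.0, 1.0, (40, 8))   # trials x band-power features for a mental task
rest    = rng.normal(0.4, 1.0, (40, 8))   # trials x band-power features for rest
print(f"distinctiveness(imagery, rest) = {class_distinctiveness(imagery, rest):.2f}")
```

A value near zero would suggest the task-related EEG features are indistinguishable from rest, regardless of what a classifier might report.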
Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.
2015-01-01
Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures, and reduces the overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. Resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement on signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
Chemical exchange rotation transfer (CERT) on human brain at 3 Tesla.
Lin, Eugene C; Li, Hua; Zu, Zhongliang; Louie, Elizabeth A; Lankford, Christopher L; Dortch, Richard D; Does, Mark D; Gore, John C; Gochberg, Daniel F
2018-05-25
To test the ability of a novel pulse sequence applied in vivo at 3 Tesla to separate the contributions to the water signal from amide proton transfer (APT) and relayed nuclear Overhauser enhancement (rNOE) from background direct water saturation and semisolid magnetization transfer (MT). The lack of such signal source isolation has confounded conventional chemical exchange saturation transfer (CEST) imaging. We quantified APT and rNOE signals using a chemical exchange rotation transfer (CERT) metric, MTR_double. A range of duty cycles and average irradiation powers were applied, and results were compared with conventional CEST analyses using asymmetry (MTR_asym) and extrapolated magnetization transfer (EMR). Our results indicate that MTR_double is more specific than MTR_asym and, because it requires as few as 3 data points, is more rapid than methods requiring a complete Z-spectrum, such as EMR. In white matter, APT (1.5 ± 0.5%) and rNOE (2.1 ± 0.7%) were quantified by using MTR_double with a 30% duty cycle and a 0.5-µT average power. In addition, our results suggest that MTR_double is insensitive to B0 inhomogeneity, further magnifying its speed advantage over CEST metrics that require a separate B0 measurement. However, MTR_double still has nontrivial sensitivity to B1 inhomogeneities. We demonstrated that MTR_double is an alternative metric to evaluate APT and rNOE, which is fast, robust to B0 inhomogeneity, and easy to process. © 2018 International Society for Magnetic Resonance in Medicine.
Characterizing storm response and recovery using the beach change envelope: Fire Island, New York
Brenner, Owen T.; Lentz, Erika; Hapke, Cheryl J.; Henderson, Rachel; Wilson, Kathleen; Nelson, Timothy
2018-01-01
Hurricane Sandy at Fire Island, New York presented unique challenges in the quantification of storm impacts using traditional metrics of coastal change, wherein measured changes (shoreline, dune crest, and volume change) did not fully reflect the substantial changes in sediment redistribution following the storm. We used a time series of beach profile data at Fire Island, New York to define a new contour-based morphologic change metric, the Beach Change Envelope (BCE). The BCE quantifies changes to the upper portion of the beach likely to sustain measurable impacts from storm waves and capture a variety of storm and post-storm beach states. We evaluated the ability of the BCE to characterize cycles of beach change by relating it to a conceptual beach recovery regime, and demonstrated that BCE width and BCE height from the profile time series correlate well with established stages of recovery. We also investigated additional applications of this metric to capture impacts from storms and human modification by applying it to several post-storm historical datasets in which impacts varied considerably; Nor'Ida (2009), Hurricane Irene (2011), Hurricane Sandy (2012), and a 2009 community replenishment. In each case, the BCE captured distinctive upper beach morphologic change characteristic of these different beach building and erosional events. Analysis of the beach state at multiple profile locations showed spatial trends in recovery consistent with recent morphologic island evolution, which other studies have linked with sediment availability and the geologic framework. Ultimately we demonstrate a new way of more effectively characterizing beach response and recovery cycles to evaluate change along sandy coasts.
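A hedged sketch of a Beach Change Envelope-style calculation: given repeated cross-shore elevation profiles, take the envelope (maximum minus minimum elevation) over the upper beach and summarize its cross-shore width and mean height. The contour choice, profiles, and variable names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

x = np.linspace(0, 100, 201)                       # cross-shore distance (m)
surveys = np.array([2.5 - 0.020 * x,               # stand-in profile time series (m NAVD88)
                    2.3 - 0.021 * x + 0.15 * np.sin(x / 8),
                    2.6 - 0.019 * x - 0.10 * np.sin(x / 10)])

upper = surveys.min(axis=0) > 0.5                  # restrict to beach above an assumed 0.5 m contour
envelope = surveys.max(axis=0) - surveys.min(axis=0)

bce_width = np.trapz(np.where(upper, 1.0, 0.0), x)   # cross-shore extent of the envelope (m)
bce_height = envelope[upper].mean()                   # mean vertical change within it (m)
print(f"BCE width ~ {bce_width:.1f} m, BCE height ~ {bce_height:.2f} m")
```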
Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W
2015-04-01
Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
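A minimal sketch of the per-probe ROC idea described above: treat calls from a higher-resolution array as truth, score each lower-resolution probe by its log2 ratio, and sweep the threshold to trace an ROC curve and pick a calibrated cutoff. The data and the Youden-style threshold choice are synthetic assumptions, not CNV-ROC itself.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random(5000) < 0.05                         # per-probe CNV state from the high-res array
log2_ratio = np.where(truth, rng.normal(0.6, 0.3, 5000),
                             rng.normal(0.0, 0.2, 5000))

thresholds = np.linspace(-0.5, 1.5, 101)
tpr = [(log2_ratio[truth] >= t).mean() for t in thresholds]
fpr = [(log2_ratio[~truth] >= t).mean() for t in thresholds]

auc = np.trapz(tpr[::-1], fpr[::-1])                    # area under the ROC curve
best = thresholds[np.argmax(np.array(tpr) - np.array(fpr))]
print(f"AUC = {auc:.3f}, calibrated log2-ratio threshold ~ {best:.2f}")
```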
Substantial Progress Yet Significant Opportunity for Improvement in Stroke Care in China.
Li, Zixiao; Wang, Chunjuan; Zhao, Xingquan; Liu, Liping; Wang, Chunxue; Li, Hao; Shen, Haipeng; Liang, Li; Bettger, Janet; Yang, Qing; Wang, David; Wang, Anxin; Pan, Yuesong; Jiang, Yong; Yang, Xiaomeng; Zhang, Changqing; Fonarow, Gregg C; Schwamm, Lee H; Hu, Bo; Peterson, Eric D; Xian, Ying; Wang, Yilong; Wang, Yongjun
2016-11-01
Stroke is a leading cause of death in China. Yet the adherence to guideline-recommended ischemic stroke performance metrics in the past decade has been previously shown to be suboptimal. Since then, several nationwide stroke quality management initiatives have been conducted in China. We sought to determine whether adherence had improved since then. Data were obtained from the 2 phases of China National Stroke Registries, which included 131 hospitals (12 173 patients with acute ischemic stroke) in China National Stroke Registries phase 1 from 2007 to 2008 versus 219 hospitals (19 604 patients) in China National Stroke Registries phase 2 from 2012 to 2013. Multiple regression models were developed to evaluate the difference in adherence to performance measure between the 2 study periods. The overall quality of care has improved over time, as reflected by the higher composite score of 0.76 in 2012 to 2013 versus 0.63 in 2007 to 2008. Nine of 13 individual performance metrics improved. However, there were no significant improvements in the rates of intravenous thrombolytic therapy and anticoagulation for atrial fibrillation. After multivariate analysis, there remained a significant 1.17-fold (95% confidence interval, 1.14-1.21) increase in the odds of delivering evidence-based performance metrics in the more recent time periods versus older data. The performance metrics with the most significantly increased odds included stroke education, dysphagia screening, smoking cessation, and antithrombotics at discharge. Adherence to stroke performance metrics has increased over time, but significant opportunities remain for further improvement. Continuous stroke quality improvement program should be developed as a national priority in China. © 2016 American Heart Association, Inc.
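A hedged sketch of a composite adherence score of the kind reported here: interventions actually delivered divided by interventions for which patients were eligible, alongside per-metric rates. The counts are invented for illustration only.

```python
delivered = {"antithrombotics_at_discharge": 880, "dysphagia_screening": 700,
             "smoking_cessation": 310, "iv_thrombolysis": 95}
eligible  = {"antithrombotics_at_discharge": 960, "dysphagia_screening": 900,
             "smoking_cessation": 400, "iv_thrombolysis": 300}

composite = sum(delivered.values()) / sum(eligible.values())
per_metric = {k: round(delivered[k] / eligible[k], 2) for k in eligible}
print(f"composite score = {composite:.2f}")
print(per_metric)
```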
Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, N. C.; Taylor, P. C.
2014-12-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to .7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods for ensemble averaging models and create better climate predictions.
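A minimal sketch of metric-based ensemble weighting, assuming each model has been assigned a skill score in [0, 1] from comparison against observations (for example, agreement with a CERES-derived relationship). The projections and scores are illustrative stand-ins.

```python
import numpy as np

projections = np.array([3.1, 2.4, 4.0, 2.9, 3.5])   # e.g., end-of-century warming per model (K)
skill       = np.array([0.8, 0.6, 0.3, 0.9, 0.5])   # process-based metric score per model

weights = skill / skill.sum()
print(f"equal-weighted mean = {projections.mean():.2f} K")
print(f"skill-weighted mean = {np.average(projections, weights=weights):.2f} K")
```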
Clinical Outcome Metrics for Optimization of Robust Training
NASA Technical Reports Server (NTRS)
Ebert, Doug; Byrne, Vicky; Cole, Richard; Dulchavsky, Scott; Foy, Millennia; Garcia, Kathleen; Gibson, Robert; Ham, David; Hurst, Victor; Kerstman, Eric;
2015-01-01
The objective of this research is to develop and use clinical outcome metrics and training tools to quantify the differences in performance of a physician vs non-physician crew medical officer (CMO) analogues during simulations.
Béjaoui-Omri, Amel; Béjaoui, Béchir; Harzallah, Ali; Aloui-Béjaoui, Nejla; El Bour, Monia; Aleya, Lotfi
2014-11-01
Mussel farming is the main economic activity in Bizerte Lagoon, with a production that fluctuates depending on environmental factors. In the present study, we apply a bioenergetic growth model to the mussel Mytilus galloprovincialis, based on dynamic energy budget (DEB) theory which describes energy flux variation through the different compartments of the mussel body. Thus, the present model simulates both mussel growth and sexual cycle steps according to food availability and water temperature and also the effect of climate change on mussel behavior and reproduction. The results point to good concordance between simulations and growth parameters (metric length and weight) for mussels in the lagoon. A heat wave scenario was also simulated using the DEB model, which highlighted mussel mortality periods during a period of high temperature.
Nicodème, F; Pipa-Muniz, M; Khanna, K; Kahrilas, P J; Pandolfino, J E
2014-03-01
Despite its obvious pathophysiological relevance, the clinical utility of measures of esophagogastric junction (EGJ) contractility is unsubstantiated. High-resolution manometry (HRM) may improve upon this with its inherent ability to integrate the magnitude of contractility over time and length of the EGJ. This study aimed to develop a novel HRM metric summarizing EGJ contractility and test its ability to distinguish among subgroups of proton pump inhibitor non-responders (PPI-NRs). 75 normal controls and 88 PPI-NRs were studied. All underwent HRM. PPI-NRs underwent pH-impedance monitoring on PPI therapy scored in terms of acid exposure, number of reflux events, and reflux-symptom correlation and grouped as meeting all criteria, some criteria, or no criteria of abnormality. Control HRM studies were used to establish normal values for candidate EGJ contractility metrics, which were then compared in their ability to differentiate among PPI-NR subgroups. The EGJ contractile integral (EGJ-CI), a metric integrating contractility across the EGJ for three respiratory cycles, best distinguished the All Criteria PPI-NR subgroup from controls and other PPI-NR subgroups. Normal values (median, [IQR]) for this measure were 39 mmHg-cm [25-55 mmHg-cm]. The correlation between the EGJ-CI and a previously proposed metric, the lower esophageal sphincter-pressure integral, which used a fixed 10-s time frame and an atmospheric rather than gastric pressure reference, was weak. Among HRM metrics tested, the EGJ-CI was best in distinguishing PPI-NRs meeting all criteria of abnormality on pH-impedance testing. Future prospective studies are required to explore its utility in management of broader groups of gastroesophageal reflux disease patients. © 2013 John Wiley & Sons Ltd.
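A hedged sketch of an EGJ contractile integral-style computation: integrate pressure above a gastric reference across the EGJ and over a window spanning three respiratory cycles, then divide by the window duration. The pressure field, grids, and values below are synthetic assumptions, not the clinical algorithm.

```python
import numpy as np

fs = 20.0                                     # manometry samples per second
t = np.arange(0, 15.0, 1.0 / fs)              # ~three 5-s respiratory cycles
z = np.linspace(0, 2.0, 11)                   # axial positions across the EGJ (cm)
gastric = 8.0                                 # gastric reference pressure (mmHg)

# synthetic EGJ pressure: baseline tone plus respiratory modulation, peaked mid-EGJ
pressure = 25 + 8 * np.sin(2 * np.pi * t / 5.0)[:, None] * np.exp(-(z - 1.0) ** 2)

above = np.clip(pressure - gastric, 0, None)           # mmHg above the gastric baseline
per_sample = np.trapz(above, z, axis=1)                # integrate along the EGJ (mmHg*cm)
egj_ci = np.trapz(per_sample, t) / (t[-1] - t[0])      # time-average over the three-cycle window
print(f"EGJ-CI ~ {egj_ci:.0f} mmHg*cm")
```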
Towards New Metrics for High-Performance Computing Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian
Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
Hypoxic Hypoxia at Moderate Altitudes: State of the Science
2011-05-01
Excerpted abstract fragments: (1) correlate neuropsychological metrics (surrogate investigational end points) with actual flight task metrics (desired end points of interest) under moderate hypoxic conditions; (2) determine the efficacy of potential neuropsychological performance-enhancing agents (e.g., tyrosine supplementation) for both acute and chronic ... ; "... to air hunger; may impact training fidelity." A cited finding (Banderet et al., 1985; 4200 and 4700 m) reports that tyrosine enhanced performance and reduced subjective ...
Wave equations on anti self dual (ASD) manifolds
NASA Astrophysics Data System (ADS)
Bashingwa, Jean-Juste; Kara, A. H.
2017-11-01
In this paper, we analyse the wave equation on some manifolds with non-diagonal metrics g_{ij} of neutral signature. The analyses include invariance properties, variational symmetries, and conservation laws. Similar analyses have previously been performed for wave equations on standard (space-time) Lorentzian manifolds, but not on manifolds arising from metrics of neutral signature.
ERIC Educational Resources Information Center
Calucag, Lina S.; Talisic, Geraldo C.; Caday, Aileen B.
2016-01-01
This correlational study aimed to determine the relationship between admission metrics and eventual success in mathematics academic performance for the 177 admitted first-year students of the Bachelor of Science in Business Informatics and 59 first-year students of the Bachelor of Science in International Studies. Using Pearson's…
Estimating seasonal evapotranspiration from temporal satellite images
Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.
2012-01-01
Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. Availability of satellite remote sensing images is limited by the satellite repeat cycle and by cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in Midwest USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
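A hedged sketch of the cubic-spline option for seasonal ET: interpolate the overpass-day ET estimates to a daily series and sum over the season. The overpass dates and ET values are invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

overpass_doy = np.array([182, 198, 214, 230, 246, 262, 294, 326])   # Landsat overpass days of year
overpass_et  = np.array([6.1, 6.4, 5.8, 5.2, 4.0, 2.9, 1.5, 0.8])   # daily ET on those days (mm)

spline = CubicSpline(overpass_doy, overpass_et)
days = np.arange(182, 366)                       # July through December
daily_et = np.clip(spline(days), 0, None)        # keep interpolated ET non-negative
print(f"seasonal ET ~ {daily_et.sum():.0f} mm")
```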
NASA Astrophysics Data System (ADS)
Witherell, B. B.; Bain, D. J.; Salant, N.; Aloysius, N. R.
2009-12-01
Humans impact the hydrologic cycle at local, regional and global scales. Understanding how spatial patterns of human water use and hydrologic impact have changed over time is important to future water management in an era of increasing water constraints and globalization of high water-use resources. This study investigates spatial dependence and spatial patterns of hydro-social metrics for the Northeastern United States from 1600 to 1920 through the use of spatial statistical techniques. Several relevant hydro-social metrics, including water residence time, surface water storage (natural and human engineered) and per capita water availability, are analyzed. This study covers a region and period of time that saw significant population growth, landscape change, and industrial growth. These changes had important impacts on water availability. Although some changes such as the elimination of beavers, and the resulting loss of beaver ponds on low-order streams, are felt at a regional scale, preliminary analysis indicates that humans responded to water constraints by acting locally (e.g., mill ponds for water power and water supply reservoirs for public health). This 320-year historical analysis of spatial patterns of hydro-social metrics provides unique insight into long-term changes in coupled human-water systems.
Evaluation of eye metrics as a detector of fatigue.
McKinley, R Andy; McIntire, Lindsey K; Schmidt, Regina; Repperger, Daniel W; Caldwell, John A
2011-08-01
This study evaluated oculometrics as a detector of fatigue in Air Force-relevant tasks after sleep deprivation. Using the metrics of total eye closure duration (PERCLOS) and approximate entropy (ApEn), the relation between these eye metrics and fatigue-induced performance decrements was investigated. Sleep deprivation-induced fatigue is one of the factors that can compromise the successful outcome of operational military missions. Consequently, there is interest in the development of reliable monitoring devices that can assess when an operator is overly fatigued. Ten civilian participants volunteered to serve in this study. Each was trained on three performance tasks: target identification, unmanned aerial vehicle landing, and the psychomotor vigilance task (PVT). Experimental testing began after 14 hr awake and continued every 2 hr until 28 hr of sleep deprivation was reached. Performance on the PVT and target identification tasks declined significantly as the level of sleep deprivation increased. These performance declines were paralleled more closely by changes in ApEn than by the PERCLOS measure. The results provide evidence that the ApEn eye metric can be used to detect fatigue in relevant military aviation tasks. Military and commercial operators could benefit from an alertness monitoring device.
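A hedged sketch of the two eye metrics named above: PERCLOS as the fraction of time the eyelid is below a closure threshold over a window, and ApEn following the standard approximate-entropy definition applied to an eyelid-opening signal. The threshold, window, and simulated signal are assumptions, not the study's recordings.

```python
import numpy as np

def perclos(eye_opening, closed_thresh=0.2):
    """Fraction of samples with eyelid opening below a closure threshold."""
    return float(np.mean(eye_opening < closed_thresh))

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D signal (standard Pincus definition)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)          # includes self-match, so c > 0
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
opening = np.clip(0.8 + 0.3 * rng.standard_normal(600), 0, 1)   # simulated eyelid aperture
opening[rng.random(600) < 0.05] = 0.05                          # occasional eye closures
print(f"PERCLOS = {perclos(opening):.2f}, ApEn = {apen(opening):.2f}")
```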
Partitioning the Fitness Components of RNA Populations Evolving In Vitro
Díaz Arenas, Carolina; Lehman, Niles
2013-01-01
All individuals in an evolving population compete for resources, and their performance is measured by a fitness metric. The performance of the individuals is relative to their abilities and to the biotic surroundings – the conditions under which they are competing – and involves many components. Molecules evolving in a test tube can also face complex environments and dynamics, and their fitness measurements should reflect the complexity of various contributing factors as well. Here, the fitnesses of a set of ligase ribozymes evolved by the continuous in vitro evolution system were measured. During these evolution cycles there are three different catalytic steps, ligation, reverse transcription, and forward transcription, each with a potential differential influence on the total fitness of each ligase. For six distinct ligase ribozyme genotypes that resulted from continuous evolution experiments, the rates of reaction were measured for each catalytic step by tracking the kinetics of enzymes reacting with their substrates. The reaction products were analyzed for the amount of product formed per time. Each catalytic step of the evolution cycle was found to have a differential incidence in the total fitness of the ligases, and therefore the total fitness of any ligase cannot be inferred from only one catalytic step of the evolution cycle. Generally, the ribozyme-directed ligation step tends to impart the largest effect on overall fitness. Yet it was found that the ligase genotypes have different absolute fitness values, and that they exploit different stages of the overall cycle to gain a net advantage. This is a new example of molecular niche partitioning that may allow for coexistence of more than one species in a population. The dissection of molecular events into multiple components of fitness provides new insights into molecular evolutionary studies in the laboratory, and has the potential to explain heretofore counterintuitive findings. PMID:24391957
Developing a Security Metrics Scorecard for Healthcare Organizations.
Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea
2015-01-01
In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George
2010-01-01
The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
Adiansyah, Joni Safaat; Haque, Nawshad; Rosano, Michele; Biswas, Wahidul
2017-09-01
This study compares coal mine tailings management strategies using life cycle assessment (LCA) and land-use area metrics methods. Hybrid methods (the Australian indicator set and the ReCiPe method) were used to assess the environmental impacts of tailings management strategies. Several strategies were considered: belt filter press (OPT 1), tailings paste (OPT 2), thickened tailings (OPT 3), and variations of OPT 1 using combinations of technology improvement and renewable energy sources (OPT 1A-D). Electrical energy was found to contribute more than 90% of the environmental impacts. The magnitude of land-use impacts associated with OPT 3 (thickened tailings) were 2.3 and 1.55 times higher than OPT 1 (tailings cake) and OPT 2 (tailings paste) respectively, while OPT 1B (tailings belt filter press with technology improvement and solar energy) and 1D (tailings belt press filter with technology improvement and wind energy) had the lowest ratio of environmental impact to land-use. Further analysis of an economic cost model and reuse opportunities is required to aid decision making on sustainable tailings management and industrial symbiosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
A theory-based curriculum design for remediation of residents' communication skills.
Leung, Fok-Han; Martin, Dawn; Batty, Helen
2009-12-01
Residents requiring remediation are often deficient in communication skills, namely clinical interviewing skills. Residents have to digest large amounts of knowledge, and then apply it in a clinical interview. The patient-centered approach, as demonstrated in the Calgary-Cambridge model and Martin's Map, can be difficult to teach. Before implementing a remediation curriculum, the theoretical educational underpinnings must be sound; curriculum evaluation is often expensive. Before establishing metrics for curriculum evaluation, a starting point is to perform a mental experiment to test theoretical adherence. This article describes an experiential remedial curriculum for communication skills. Educational theories of Kolb, Knowles, Bandura, and Bloom are used to design the curriculum into theory-based design components. Kolb's experiential cycle models the natural sequence of experiencing, teaching, and learning interviewing skills. A curriculum structured around this cycle has multiple intercalations with the above educational theories. The design is strengthened by appropriately timed use of education strategies such as learning contracts, taped interviews, simulations, structured reflection, and teacher role modeling. Importantly, it also models the form of the clinical interview format desired. Through understanding and application of contemporary educational theories, a program to remediate interviewing skills can increase its potential for success.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
The following article presents the design, creation, and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based on stereoscopic video content analysis and is intended to serve as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be used as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.
Correlation between Thermodynamic Efficiency and Ecological Cyclicity for Thermodynamic Power Cycles
Layton, Astrid; Reap, John; Bras, Bert; Weissburg, Marc
2012-01-01
A sustainable global community requires the successful integration of environment and engineering. In the public and private sectors, designing cyclical (“closed loop”) resource networks increasingly appears as a strategy employed to improve resource efficiency and reduce environmental impacts. Patterning industrial networks on ecological ones has been shown to provide significant improvements at multiple levels. Here, we apply the biological metric cyclicity to 28 familiar thermodynamic power cycles of increasing complexity. These cycles, composed of turbines and the like, are scientifically very different from natural ecosystems. Despite this difference, the application results in a positive correlation between the maximum thermal efficiency and the cyclic structure of the cycles. The immediate impact of these findings results in a simple method for comparing cycles to one another, higher cyclicity values pointing to those cycles which have the potential for a higher maximum thermal efficiency. Such a strong correlation has the promise of impacting both natural ecology and engineering thermodynamics and provides a clear motivation to look for more fundamental scientific connections between natural and engineered systems. PMID:23251638
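A minimal sketch, assuming the ecological-network definition of cyclicity as the largest real eigenvalue of a network's structural (binary, directed) adjacency matrix. The toy graph below is an invented power cycle with a recuperation link, not one of the 28 cycles analysed in the paper.

```python
import numpy as np

nodes = ["boiler", "turbine", "condenser", "pump", "recuperator"]
edges = [("boiler", "turbine"), ("turbine", "condenser"),
         ("condenser", "pump"), ("pump", "boiler"),
         ("turbine", "recuperator"), ("recuperator", "boiler")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for src, dst in edges:
    A[idx[src], idx[dst]] = 1.0          # structural (presence/absence) link

cyclicity = max(np.linalg.eigvals(A).real)   # values above 1 indicate multiple interwoven cycles
print(f"cyclicity = {cyclicity:.2f}")
```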
Effects of ocean thermocline variability on noncoherent underwater acoustic communications.
Siderius, Martin; Porter, Michael B; Hursky, Paul; McDonald, Vincent
2007-04-01
The performance of acoustic modems in the ocean is strongly affected by the ocean environment. A storm can drive up the ambient noise levels, eliminate a thermocline by wind mixing, and whip up violent waves and thereby break up the acoustic mirror formed by the ocean surface. The combined effects of these and other processes on modem performance are not well understood. The authors have been conducting experiments to study these environmental effects on various modulation schemes. Here the focus is on the role of the thermocline on a widely used modulation scheme (frequency-shift keying). Using data from a recent experiment conducted in 100-m-deep water off the coast of Kauai, HI, frequency-shift-key modulation performance is shown to be strongly affected by diurnal cycles in the thermocline. There is dramatic variation in performance (measured by bit error rates) between receivers in the surface duct and receivers in the thermocline. To interpret the performance variations in a quantitative way, a precise metric is introduced based on a signal-to-interference-noise ratio that encompasses both the ambient noise and intersymbol interference. Further, it will be shown that differences in the fading statistics for receivers in and out of the thermocline explain the differences in modem performance.
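A hedged sketch of a signal-to-interference-plus-noise ratio of the kind described: received energy attributable to the current symbol divided by ambient noise plus the intersymbol interference carried by late multipath arrivals. The power values are invented for illustration.

```python
import numpy as np

signal_power = 1.0     # direct-arrival symbol energy (linear units)
isi_power = 0.35       # energy leaking from previous symbols via multipath
noise_power = 0.10     # ambient noise within the receiver band

sinr = signal_power / (noise_power + isi_power)
print(f"SINR = {10 * np.log10(sinr):.1f} dB")
```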
Developing a Common Metric for Evaluating Police Performance in Deadly Force Situations
2012-08-27
2005).“Police Inservice Deadly Force Training and Requalification in Washington State.” Law Enforcement Executive Forum, 5(2):67-86. NIJ Metric...OF: EXECUTIVE SUMMARY Background There is a critical lack of scientific evidence about whether deadly force management, accountability and training ...Army Research Office P.O. Box 12211 Research Triangle Park, NC 27709-2211 15. SUBJECT TERMS training metrics develoment, deadly encounters
Evaluation of image quality metrics for the prediction of subjective best focus.
Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S
2010-03-01
Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles and Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths rather than calculation method may be the limiting factor in determining objective best focus from near infrared WA measurements.
Ellerbe, Laura S; Manfredi, Luisa; Gupta, Shalini; Phelps, Tyler E; Bowe, Thomas R; Rubinsky, Anna D; Burden, Jennifer L; Harris, Alex H S
2017-04-04
In the U.S. Department of Veterans Affairs (VA), residential treatment programs are an important part of the continuum of care for patients with a substance use disorder (SUD). However, a limited number of program-specific measures to identify quality gaps in SUD residential programs exist. This study aimed to: (1) Develop metrics for two pre-admission processes: Wait Time and Engagement While Waiting, and (2) Interview program management and staff about program structures and processes that may contribute to performance on these metrics. The first aim sought to supplement the VA's existing facility-level performance metrics with SUD program-level metrics in order to identify high-value targets for quality improvement. The second aim recognized that not all key processes are reflected in the administrative data, and even when they are, new insight may be gained from viewing these data in the context of day-to-day clinical practice. VA administrative data from fiscal year 2012 were used to calculate pre-admission metrics for 97 programs (63 SUD Residential Rehabilitation Treatment Programs (SUD RRTPs); 34 Mental Health Residential Rehabilitation Treatment Programs (MH RRTPs) with a SUD track). Interviews were then conducted with management and front-line staff to learn what factors may have contributed to high or low performance, relative to the national average for their program type. We hypothesized that speaking directly to residential program staff may reveal innovative practices, areas for improvement, and factors that may explain system-wide variability in performance. Average wait time for admission was 16 days (SUD RRTPs: 17 days; MH RRTPs with a SUD track: 11 days), with 60% of Veterans waiting longer than 7 days. For these Veterans, engagement while waiting occurred in an average of 54% of the waiting weeks (range 3-100% across programs). Fifty-nine interviews representing 44 programs revealed factors perceived to potentially impact performance in these domains. Efficient screening processes, effective patient flow, and available beds were perceived to facilitate shorter wait times, while lack of beds, poor staffing levels, and lengths of stay of existing patients were thought to lengthen wait times. Accessible outpatient services, strong patient outreach, and strong encouragement of pre-admission outpatient treatment emerged as facilitators of engagement while waiting; poor staffing levels, socioeconomic barriers, and low patient motivation were viewed as barriers. Metrics for pre-admission processes can be helpful for monitoring residential SUD treatment programs. Interviewing program management and staff about drivers of performance metrics can play a complementary role by identifying innovative and other strong practices, as well as high-value targets for quality improvement. Key facilitators of high-performing facilities may offer programs with lower performance useful strategies to improve specific pre-admission processes.
Liu, Chang; Dobson, Jacob; Cawley, Peter
2017-03-01
Permanently installed guided wave monitoring systems are attractive for monitoring large structures. By frequently interrogating the test structure over a long period of time, such systems have the potential to detect defects much earlier than with conventional one-off inspection, and reduce the time and labour cost involved. However, for the systems to be accepted under real operational conditions, their damage detection performance needs to be evaluated in these practical settings. The receiver operating characteristic (ROC) is an established performance metric for one-off inspections, but the generation of the ROC requires many test structures with realistic damage growth at different locations and different environmental conditions, and this is often impractical. In this paper, we propose an evaluation framework using experimental data collected over multiple environmental cycles on an undamaged structure with synthetic damage signatures added by superposition. Recent advances in computation power enable examples covering a wide range of practical scenarios to be generated, and for multiple cases of each scenario to be tested so that the statistics of the performance can be evaluated. The proposed methodology has been demonstrated using data collected from a laboratory pipe specimen over many temperature cycles, superposed with damage signatures predicted for a flat-bottom hole growing at different rates at various locations. Three damage detection schemes, conventional baseline subtraction, singular value decomposition (SVD) and independent component analysis (ICA), have been evaluated. It has been shown that in all cases, the component methods perform significantly better than the residual method, with ICA generally the better of the two. The results have been validated using experimental data monitoring a pipe in which a flat-bottom hole was drilled and enlarged over successive temperature cycles. The methodology can be used to evaluate the performance of an installed monitoring system and to show whether it is capable of detecting particular damage growth at any given location. It will enable monitoring results to be evaluated rigorously and will be valuable in the development of safety cases.
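A hedged sketch of the evaluation idea described above: take residual amplitudes from an undamaged monitoring record, superpose a synthetic damage signature that grows over later cycles, apply simple baseline subtraction, and trace an ROC from the damaged versus undamaged detection scores. All signals below are synthetic stand-ins, not guided-wave data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cycles = 400
temperature_effect = 0.05 * np.sin(np.linspace(0, 20 * np.pi, n_cycles))   # benign variation
noise = 0.02 * rng.standard_normal(n_cycles)

undamaged = temperature_effect + noise
damage_growth = np.concatenate([np.zeros(n_cycles // 2),
                                np.linspace(0, 0.15, n_cycles // 2)])       # synthetic defect signature
damaged = undamaged + damage_growth

baseline = undamaged[:50].mean()                       # simple baseline subtraction
score_undamaged = np.abs(undamaged - baseline)
score_damaged = np.abs(damaged - baseline)[n_cycles // 2:]   # scores once damage is present

thresholds = np.linspace(0, 0.3, 200)
tpr = [(score_damaged >= t).mean() for t in thresholds]
fpr = [(score_undamaged >= t).mean() for t in thresholds]
print(f"ROC AUC ~ {np.trapz(tpr[::-1], fpr[::-1]):.2f}")
```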
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. The diversity of CI system designs available today or proposed for the near future poses significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we developed a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. Therefore, the detectivity metric is designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
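A minimal sketch, assuming the stated definition: the detectivity is the SNR of the optimal linear matched filter for a known target signature s in Gaussian clutter with covariance C, i.e. SNR² = sᵀC⁻¹s (a Hotelling-like form). The signature and covariance below are illustrative stand-ins for a computational measurement.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64                                                  # length of the computational measurement
s = np.exp(-0.5 * ((np.arange(n) - 30) / 3.0) ** 2)     # known target signature

A = rng.standard_normal((n, n))
C = 0.1 * np.eye(n) + 0.02 * A @ A.T                    # noise/clutter covariance (SPD)

snr = np.sqrt(s @ np.linalg.solve(C, s))                # optimal matched-filter SNR
print(f"detectivity (matched-filter SNR) = {snr:.1f}")
```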
Effects of Solar Geoengineering on Vegetation: Implications for Biodiversity and Conservation
NASA Astrophysics Data System (ADS)
Dagon, K.; Schrag, D. P.
2017-12-01
Climate change will have significant impacts on vegetation and biodiversity. Solar geoengineering has potential to reduce the climate effects of greenhouse gas emissions through albedo modification, yet more research is needed to better understand how these techniques might impact terrestrial ecosystems. Here we utilize the fully coupled version of the Community Earth System Model to run transient solar geoengineering simulations designed to stabilize radiative forcing starting mid-century, relative to the Representative Concentration Pathway 6 (RCP6) scenario. Using results from 100-year simulations, we analyze model output through the lens of ecosystem-relevant metrics. We find that solar geoengineering improves the conservation outlook under climate change, but there are still potential impacts on biodiversity. Two commonly used climate classification systems show shifts in vegetation under solar geoengineering relative to RCP6, though we acknowledge the associated uncertainties with these systems. We also show that rates of warming and the climate velocity are minimized globally under solar geoengineering by the end of the century, while trends persist over land in the Northern Hemisphere. Shifts in the amplitude of temperature and precipitation seasonal cycles are observed in the results, and have implications for vegetation phenology. Different metrics for vegetation productivity also show decreases under solar geoengineering relative to RCP6, but could be related to the model parameterization of nutrient cycling. Vegetation water cycling is found to be an important mechanism for understanding changes in ecosystems under solar geoengineering.
Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.
Parsinejad, Payam; Sipahi, Rifat
2017-12-01
In this article, we report results on an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable with the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pnn50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
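A hedged sketch of the physiological baseline metric pnn50 mentioned above: the percentage of successive inter-beat (RR) intervals that differ by more than 50 ms. The RR series below is simulated, not subject data.

```python
import numpy as np

rng = np.random.default_rng(6)
rr_ms = 800 + np.cumsum(rng.normal(0, 40, 300))    # simulated RR intervals (ms)

diffs = np.abs(np.diff(rr_ms))
pnn50 = 100.0 * np.mean(diffs > 50.0)
print(f"pnn50 = {pnn50:.1f}%")
```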
Designing a Robust Micromixer Based on Fluid Stretching
NASA Astrophysics Data System (ADS)
Mott, David; Gautam, Dipesh; Voth, Greg; Oran, Elaine
2010-11-01
A metric for measuring fluid stretching based on finite-time Lyapunov exponents is described, and the use of this metric for optimizing mixing in microfluidic components is explored. The metric is implemented within an automated design approach called the Computational Toolbox (CTB). The CTB designs components by adding geometric features, such as grooves of various shapes, to a microchannel. The transport produced by each of these features in isolation was pre-computed and stored as an "advection map" for that feature, and the flow through a composite geometry that combines these features is calculated rapidly by applying the corresponding maps in sequence. A genetic algorithm search then chooses the feature combination that optimizes a user-specified metric. Metrics based on the variance of concentration generally require the user to specify the fluid distributions at inflow, which leads to different mixer designs for different inflow arrangements. The stretching metric is independent of the fluid arrangement at inflow. Mixers designed using the stretching metric are compared to those designed using a variance of concentration metric and show excellent performance across a variety of inflow distributions and diffusivities.
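A hedged sketch of a finite-time Lyapunov exponent style stretching measure: advect seed points through a model velocity field, form the flow-map gradient by finite differences, and take the log of its largest singular value divided by the integration time. The velocity field is a toy assumption, not a groove-feature advection map from the Computational Toolbox.

```python
import numpy as np

def velocity(p):
    x, y = p
    return np.array([-np.sin(np.pi * x) * np.cos(np.pi * y),
                      np.cos(np.pi * x) * np.sin(np.pi * y)])

def flow_map(p0, T=1.0, dt=0.01):
    p = np.array(p0, dtype=float)
    for _ in range(int(T / dt)):
        p = p + dt * velocity(p)          # forward-Euler advection
    return p

def ftle(p0, T=1.0, h=1e-4):
    p0 = np.asarray(p0, dtype=float)
    # flow-map gradient by central differences in x and y
    dx = (flow_map(p0 + [h, 0.0], T) - flow_map(p0 - [h, 0.0], T)) / (2 * h)
    dy = (flow_map(p0 + [0.0, h], T) - flow_map(p0 - [0.0, h], T)) / (2 * h)
    F = np.column_stack([dx, dy])
    sigma_max = np.linalg.svd(F, compute_uv=False)[0]
    return np.log(sigma_max) / T

print(f"FTLE at (0.3, 0.2) = {ftle((0.3, 0.2)):.2f}")
```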
An objective method for a video quality evaluation in a 3DTV service
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2015-09-01
The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes makes it possible to design a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. The metric's mechanisms, as well as its performance under simulated environmental conditions, are discussed. As a result, the CAII metric might be used effectively in a variety of service quality assessment applications.
Overview of the U.S. DOE Accident Tolerant Fuel Development Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon Carmack; Frank Goldner; Shannon M. Bragg-Sitton
2013-09-01
The United States Fuel Cycle Research and Development Advanced Fuels Campaign has been given the responsibility to conduct research and development on enhanced accident tolerant fuels with the goal of performing a lead test assembly or lead test rod irradiation in a commercial reactor by 2022. The Advanced Fuels Campaign has defined fuels with enhanced accident tolerance as those that, in comparison with the standard UO2-Zircaloy system currently used by the nuclear industry, can tolerate loss of active cooling in the reactor core for a considerably longer time period (depending on the LWR system and accident scenario) while maintaining or improving the fuel performance during normal operations and operational transients, as well as design-basis and beyond design-basis events. This paper provides an overview of the FCRD Accident Tolerant Fuel program. The ATF attributes will be presented and discussed. Attributes identified as potentially important to enhance accident tolerance include reduced hydrogen generation (resulting from cladding oxidation), enhanced fission product retention under severe accident conditions, reduced cladding reaction with high-temperature steam, and improved fuel-cladding interaction for enhanced performance under extreme conditions. To demonstrate the enhanced accident tolerance of candidate fuel designs, metrics must be developed and evaluated using a combination of design features for a given LWR design, potential improvements to that design, and the design of an advanced fuel/cladding system. The aforementioned attributes provide qualitative guidance for parameters that will be considered for fuels with enhanced accident tolerance. It may be unnecessary to improve in all attributes and it is likely that some attributes or combination of attributes provide meaningful gains in accident tolerance, while others may provide only marginal benefits. Thus, an initial step in program implementation will be the development of quantitative metrics. A companion paper in these proceedings provides an update on the status of establishing these quantitative metrics for accident tolerant LWR fuel [1]. The United States FCRD Advanced Fuels Campaign has embarked on an aggressive schedule for development of enhanced accident tolerant LWR fuels. The goal of developing such a fuel system that can be deployed in the U.S. LWR fleet in the next 10 to 20 years supports the sustainability of clean nuclear power generation in the United States.
Important LiDAR metrics for discriminating forest tree species in Central Europe
NASA Astrophysics Data System (ADS)
Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco
2018-03-01
Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species, yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics, derived under leaf-on and leaf-off conditions, for discriminating six different tree species in a natural forest in Germany. We first assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of the LiDAR metrics were highly correlated with each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5%, as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity-related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
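For orientation, a minimal sketch of the two analysis steps described above (flagging highly correlated metric pairs and ranking metrics by Random Forest importance) is given below. The feature matrix X (plots by LiDAR metrics), species labels y, and all parameter values are assumptions of this example rather than the paper's exact configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def correlated_pairs(X, names, threshold=0.7):
    # Pairs of LiDAR metrics whose absolute Pearson correlation exceeds
    # the threshold (the paper reports roughly 60% of pairs above 0.7).
    r = np.corrcoef(X, rowvar=False)
    return [(names[i], names[j], float(r[i, j]))
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if abs(r[i, j]) > threshold]

def rank_metrics(X, y, names, n_trees=500):
    # Rank LiDAR metrics by Random Forest importance for species labels y.
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return [(names[i], float(rf.feature_importances_[i])) for i in order]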
ERIC Educational Resources Information Center
Colyvas, Jeannette A.
2012-01-01
Our current educational environment is subject to persistent calls for accountability, evidence-based practice, and data use for improvement, which largely take the form of performance metrics (PMs). This rapid proliferation of PMs has profoundly influenced the ways in which scholars and practitioners think about their own practices and the larger…
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
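The letter's exact construction is not reproduced here, but the scaling idea can be illustrated with a generic Nystrom-style low-rank factorization K ~ Z Z^T: a linear metric learned on the features Z then acts as a nonlinear (kernelized) metric on the original inputs, and new points map to Z by the same formula, giving out-of-sample generalization. The RBF kernel, landmark choice, and function names below are assumptions of this sketch.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel between the rows of A and the rows of B.
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0, rank=None):
    # Low-rank kernel feature map: K ~ C W^{-1} C^T with
    # C = k(X, landmarks) and W = k(landmarks, landmarks).
    # The returned Z satisfies Z Z^T ~ K, so a linear (Mahalanobis)
    # metric learned on Z is a nonlinear metric on the inputs, and
    # unseen points are embedded by the same formula (out-of-sample).
    C = rbf_kernel(X, landmarks, gamma)
    W = rbf_kernel(landmarks, landmarks, gamma)
    vals, vecs = np.linalg.eigh(W)          # ascending eigenvalues
    if rank is not None:
        vals, vecs = vals[-rank:], vecs[:, -rank:]
    vals = np.clip(vals, 1e-12, None)       # guard against tiny eigenvalues
    return (C @ vecs) / np.sqrt(vals)       # shape (n_samples, rank)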
Resilient Control Systems Practical Metrics Basis for Defining Mission Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig G. Rieger
“Resilience” describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive and cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability in mitigating these risks. A terminology and metrics basis is provided to integrate the cognitive and cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish “proper operation” and “impact.” A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and…
Variational and robust density fitting of four-center two-electron integrals in local metrics
NASA Astrophysics Data System (ADS)
Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł
2008-09-01
Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
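For orientation, the structure of such a scheme can be summarized with a standard robust-fitting identity consistent with the description above (a sketch, not the paper's derivation). With a fitted density \tilde\rho = \sum_\alpha c_\alpha \omega_\alpha, the coefficients are determined in a chosen metric m from

\sum_\beta (\omega_\alpha \mid \omega_\beta)_m \, c_\beta = (\omega_\alpha \mid \rho)_m ,

while Coulomb-metric quantities are evaluated robustly as

(\rho \mid \rho) \approx 2\,(\rho \mid \tilde\rho) - (\tilde\rho \mid \tilde\rho) ,

whose error equals (\rho - \tilde\rho \mid \rho - \tilde\rho) and is therefore quadratic in the fitting error regardless of which metric m (Coulomb, overlap, or attenuated Coulomb) is used to determine the coefficients. This is what allows local metrics, and hence linear scaling, with little loss of chemical accuracy.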
The Development of Vocational Vehicle Drive Cycles and Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Adam W.; Phillips, Caleb T.; Konan, Arnaud M.
Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize the on-road driving behavior, including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics, of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups or clusters using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical real-world drive cycle data captured from medium- and heavy-duty vehicles operating across the United States. It encompasses data from existing DOE activities as well as contributions from valued industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used for this analysis were collected from a total of 30 unique fleets/data providers operating across 22 geographic locations spread across the United States. This includes locations with topography ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. The range of fleets, geographic locations, and total number of vehicles analyzed ensures results that include the influence of these factors. While no analysis will be perfect without unlimited resources and data, it is the researchers' understanding that the Fleet DNA database is the largest and most thorough publicly accessible vocational vehicle usage database currently in operation. This report includes an introduction to the Fleet DNA database and the data contained within, a presentation of the results of the statistical analysis performed by NREL, a review of the logistic model developed to predict cluster membership, and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included in the report.
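As a rough illustration of the segmentation and membership-modeling workflow described (the report's actual metric set, clustering method, and model specification are not given in this abstract), the sketch below reduces a 1 Hz speed trace to a few drive metrics, clusters vehicle-days into three operating groups with k-means as a stand-in, and fits a logistic model that predicts cluster membership from the same metrics.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def drive_metrics(speed_mps, dt=1.0, idle_thresh=0.3):
    # Reduce a 1 Hz speed trace (m/s) to illustrative drive-cycle metrics;
    # the real Fleet DNA metric set is considerably larger.
    accel = np.diff(speed_mps) / dt
    pos, neg = accel[accel > 0], accel[accel < 0]
    return np.array([
        speed_mps.mean(),                      # average speed
        (speed_mps < idle_thresh).mean(),      # idle fraction
        pos.mean() if pos.size else 0.0,       # mean acceleration
        neg.mean() if neg.size else 0.0,       # mean deceleration
    ])

def segment_and_model(X, n_clusters=3):
    # Cluster vehicle-days (rows of X) into operating groups, then fit a
    # logistic model that predicts cluster membership from the metrics.
    Xs = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(Xs)
    model = LogisticRegression(max_iter=1000).fit(Xs, labels)
    return labels, model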
NASA Astrophysics Data System (ADS)
Trimborn, Barbara; Wolf, Ivo; Abu-Sammour, Denis; Henzler, Thomas; Schad, Lothar R.; Zöllner, Frank G.
2017-03-01
Image registration of preprocedural contrast-enhanced CTs to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG), which form the basis of a feature descriptor. The metric was implemented in a framework for rigid 3D-3D registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated from the mean target registration error and compared with the results obtained with a normalized cross-correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
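A minimal 2D sketch of the idea, using scikit-image's HOG implementation: descriptors are computed for corresponding fixed and moving slices of equal size and compared by normalized correlation, which a rigid registration would maximize over pose parameters. The paper's metric is patch-based and 3D, so the slice-based formulation, cell sizes, and function name here are simplifying assumptions.

import numpy as np
from skimage.feature import hog

def hog_similarity(fixed_slice, moving_slice, cell=(16, 16)):
    # HOG descriptors of two corresponding grayscale slices (same shape),
    # compared by normalized correlation; higher means more similar.
    f = hog(fixed_slice, orientations=9, pixels_per_cell=cell,
            cells_per_block=(2, 2), feature_vector=True)
    m = hog(moving_slice, orientations=9, pixels_per_cell=cell,
            cells_per_block=(2, 2), feature_vector=True)
    f = (f - f.mean()) / (f.std() + 1e-12)
    m = (m - m.mean()) / (m.std() + 1e-12)
    return float(np.dot(f, m) / f.size)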
Resilience Metrics for the Electric Power System: A Performance-Based Approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto
Grid resilience is a concept related to a power system's ability to continue operating and delivering power even when low-probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows both outputs from system models and historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. The metrics and methods are demonstrated through a set of illustrative use cases.
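As one concrete example of a performance-based consequence measure of the kind such an analysis would aggregate across scenarios (this specific formula is an illustration, not one prescribed by the document), the sketch below integrates load not served over a disruption to obtain energy not served.

import numpy as np

def energy_not_served(time_h, served_mw, demand_mw):
    # Total energy not served (MWh) over a disruption, integrated with the
    # trapezoidal rule; an example consequence-based resilience metric.
    shortfall = np.clip(np.asarray(demand_mw) - np.asarray(served_mw), 0.0, None)
    dt = np.diff(np.asarray(time_h, dtype=float))
    return float(np.sum(0.5 * (shortfall[1:] + shortfall[:-1]) * dt))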
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.