Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls
NASA Technical Reports Server (NTRS)
Anastasiadis, Stergios
1991-01-01
Expert system shells are lacking in many areas of software engineering. Large rule-based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem will adequately address the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently share common semantic information. The concepts involved are imported from the fields of A.I., Pattern Recognition, and Statistical Inference. The techniques focus on feature selection, classification, and a criterion, based on Bayesian Decision Theory, for how 'good' the classification technique is. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules, and various Nearest Neighbor classification algorithms are described based on these metrics.
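As an illustrative sketch only (not the report's own metrics), the 'closeness' of two CLIPS rules can be approximated by a Jaccard-style distance over the fact/relation tokens appearing in their bodies, and a nearest-neighbour rule can then assign an ungrouped rule to the group containing its closest neighbour. The tokenization, the distance, and the toy rules below are assumptions for illustration.

```python
# Illustrative sketch: a token-based distance between CLIPS rules and a
# 1-nearest-neighbour assignment of a rule to an existing group.
# The tokenization and the Jaccard distance are assumptions, not the
# metrics defined in the cited report.
import re

def rule_tokens(rule_text):
    """Collect the relation/fact names appearing in a CLIPS rule."""
    # Grab the first symbol inside each parenthesized pattern, e.g. (engine ...)
    return set(re.findall(r"\(\s*([A-Za-z_][\w-]*)", rule_text))

def rule_distance(rule_a, rule_b):
    """Jaccard distance between the token sets of two rules (0 = identical)."""
    a, b = rule_tokens(rule_a), rule_tokens(rule_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def nearest_group(new_rule, grouped_rules):
    """Assign new_rule to the group of its nearest neighbour."""
    best_group, best_dist = None, float("inf")
    for group, members in grouped_rules.items():
        d = min(rule_distance(new_rule, m) for m in members)
        if d < best_dist:
            best_group, best_dist = group, d
    return best_group, best_dist

# Hypothetical toy rules
groups = {
    "engine": ["(defrule check-rpm (engine (rpm ?r)) => (assert (rpm-ok)))"],
    "fuel":   ["(defrule low-fuel (fuel (level ?l)) => (assert (warn-fuel)))"],
}
print(nearest_group("(defrule over-rev (engine (rpm ?r)) => (assert (alarm)))", groups))
```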
Equations for Scoring Rules When Data Are Missing
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents equations for scoring rules in a diagnostic and/or prognostic artificial-intelligence software system of the rule-based inference-engine type. The equations define a set of metrics that characterize the evaluation of a rule when data required for the antecedent clause(s) of the rule are missing. The metrics include a primary measure denoted the rule completeness metric (RCM) plus a number of subsidiary measures that contribute to the RCM. The RCM is derived from an analysis of a rule with respect to its truth and a measure of the completeness of its input data. The derivation is such that the truth value of an antecedent is independent of the measure of its completeness. The RCM can be used to compare the degree of completeness of two or more rules with respect to a given set of data. Hence, the RCM can be used as a guide to choosing among rules during the rule-selection phase of operation of the artificial-intelligence system.
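The document's actual equations are not reproduced in this record; as a loosely hedged sketch, one plausible reading is that a rule's completeness is the fraction of antecedent clauses whose data are present, kept separate from the truth value computed over the clauses that can be evaluated. The function name and the specific averaging choice below are assumptions.

```python
# Hedged sketch of a rule-completeness-style calculation: completeness is the
# fraction of antecedent clauses whose inputs are present, evaluated
# independently of the truth of the clauses that can be evaluated.
# This is an illustration, not the equations from the cited document.

def rule_completeness(antecedents, data):
    """antecedents: list of (name, predicate); data: dict of available values."""
    present = [name for name, _ in antecedents if name in data]
    completeness = len(present) / len(antecedents) if antecedents else 1.0
    # Truth is judged only on clauses whose data exist, so it does not
    # depend on the completeness measure itself.
    truth = all(pred(data[name]) for name, pred in antecedents if name in data)
    return completeness, truth

rule = [("pressure", lambda v: v > 30.0), ("temperature", lambda v: v < 90.0)]
print(rule_completeness(rule, {"pressure": 42.0}))  # (0.5, True)
```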
A rule-based software test data generator
NASA Technical Reports Server (NTRS)
Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II
1991-01-01
Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests show that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
The Death of Socrates: Managerialism, Metrics and Bureaucratisation in Universities
ERIC Educational Resources Information Center
Orr, Yancey; Orr, Raymond
2016-01-01
Neoliberalism exalts the ability of unregulated markets to optimise human relations. Yet, as David Graeber has recently illustrated, it is paradoxically built on rigorous systems of rules, metrics and managers. The potential transition to a market-based tuition and research-funding model for higher education in Australia has, not surprisingly,…
Rule groupings: A software engineering approach towards verification of expert systems
NASA Technical Reports Server (NTRS)
Mehrotra, Mala
1991-01-01
Currently, most expert system shells do not address software engineering issues for developing or maintaining expert systems. As a result, large expert systems tend to be incomprehensible, difficult to debug or modify and almost impossible to verify or validate. Partitioning rule based systems into rule groups which reflect the underlying subdomains of the problem should enhance the comprehensibility, maintainability, and reliability of expert system software. Attempts were made to semiautomatically structure a CLIPS rule base into groups of related rules that carry the same type of information. Different distance metrics that capture relevant information from the rules for grouping are discussed. Two clustering algorithms that partition the rule base into groups of related rules are given. Two independent evaluation criteria are developed to measure the effectiveness of the grouping strategies. Results of the experiment with three sample rule bases are presented.
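As a sketch of the kind of grouping described above (the distance function, linkage method, and number of groups are assumptions rather than the paper's choices), a rule base can be clustered hierarchically from a precomputed pairwise distance matrix:

```python
# Illustrative sketch: agglomerative clustering of a rule base from a
# precomputed pairwise distance matrix. rule_distance is assumed to be any
# rule-to-rule metric, such as the token-based one sketched earlier.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_rules(rules, rule_distance, n_groups):
    n = len(rules)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = rule_distance(rules[i], rules[j])
    # squareform converts the symmetric matrix to the condensed form linkage expects
    tree = linkage(squareform(dist), method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

# Usage (hypothetical): labels = cluster_rules(rule_texts, rule_distance, n_groups=3)
```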
An Account of Old English Stress.
ERIC Educational Resources Information Center
McCully, C. B.; Hogg, R. M.
1990-01-01
An analysis of stress patterns in Old English, from the perspective of a framework based on lexicalist metrical phonology, indicates that there was a central Old English stress rule that operated from left to right, in contrast to the central rule for present-day English. (46 references) (Author/CB)
Development of the Expert System Domain Advisor and Analysis Tool
1991-09-01
[Abstract garbled in the source record. Recoverable content: typical of current methods is the "TAROT metric", which defines a decision rule whose output is whether to proceed with expert system development. An appendix on the TAROT metric notes that the ESEM system chart shows three risk-based decision points, beginning at project initiation, and tabulates evaluation factors for expert system development with possible value ratings (e.g., the TAROT metric for overall suitability rated Poor, Fair, ...).]
Metric learning for automatic sleep stage classification.
Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung
2013-01-01
We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that, by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset under various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, this superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains, without the need for artifact removal as a preprocessing step.
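The study's specific metric-learning algorithm is not given in this record; as a minimal sketch of the general pipeline (a learned linear metric feeding a k-NN rule), scikit-learn's Neighborhood Components Analysis can stand in, with the EEG feature matrix X and stage labels y as placeholders.

```python
# Minimal sketch of metric learning + k-NN classification, assuming X holds
# per-epoch EEG features and y holds sleep-stage labels. NCA is used as a
# stand-in metric learner; it is not necessarily the algorithm of the study.
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

def sleep_stage_knn(X, y, k=5):
    learned = Pipeline([
        ("metric", NeighborhoodComponentsAnalysis(random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=k)),
    ])
    # Compare against plain Euclidean k-NN to see the benefit of the learned metric
    baseline = KNeighborsClassifier(n_neighbors=k)
    return (cross_val_score(learned, X, y, cv=5).mean(),
            cross_val_score(baseline, X, y, cv=5).mean())
```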
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. Analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
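The Sigma metric referred to throughout these records is conventionally computed from the allowable total error, bias, and imprecision, all expressed in the same (usually percent) units; a small sketch with hypothetical numbers follows.

```python
# Sigma metric as conventionally defined in laboratory quality control:
# sigma = (TEa - |bias|) / CV, with TEa, bias and CV in percent.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: TEa from a biological-variation target
print(sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=1.7))  # ~5.0 sigma
```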
Toti, Giulia; Vilalta, Ricardo; Lindner, Peggy; Lefer, Barry; Macias, Charles; Price, Daniel
2016-11-01
Traditional studies on the effects of outdoor pollution on asthma have been criticized for questionable statistical validity and inefficacy in exploring the effects of multiple air pollutants, alone and in combination. Association rule mining (ARM), a method that is easily interpretable and suitable for analyzing the effects of multiple exposures, could be of use, but the traditional interest metrics of support and confidence need to be substituted with metrics that focus on risk variations caused by different exposures. We present an ARM-based methodology that produces rules associated with relevant odds ratios and limits the number of final rules even at very low support levels (0.5%), thanks to post-pruning criteria that limit rule redundancy and control for statistical significance. The methodology has been applied to a case-crossover study to explore the effects of multiple air pollutants on risk of asthma in pediatric subjects. We identified 27 rules with interesting odds ratios among more than 10,000 having the required support. The only rule involving a single pollutant is exposure to ozone on the day before the reported asthma attack (OR=1.14). The 26 combinatory rules highlight the limitations of air quality policies based on single-pollutant thresholds and suggest that exposure to mixtures of chemicals is more harmful, with odds ratios as high as 1.54 (associated with the combination day0 SO2, day0 NO, day0 NO2, day1 PM). The proposed method can be used to analyze risk variations caused by single and multiple exposures. The method is reliable and requires fewer assumptions on the data than parametric approaches. Rules including more than one pollutant highlight interactions that deserve further investigation, while helping to limit the search field. Copyright © 2016 Elsevier B.V. All rights reserved.
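A rough sketch of the screening idea (not the study's exact post-pruning criteria): enumerate combinations of lagged exposure indicators, keep those meeting a minimum support on case days, and score each with a 2x2-table odds ratio. The column names and data layout are hypothetical.

```python
# Sketch: screen combinations of lagged exposures by support and odds ratio.
# df is assumed to have one row per day with boolean exposure indicator
# columns (e.g. "day0_O3", "day1_PM") and a boolean "case" outcome column.
from itertools import combinations
import pandas as pd

def screen_rules(df, exposure_cols, max_len=3, min_support=0.005):
    rules = []
    for k in range(1, max_len + 1):
        for combo in combinations(exposure_cols, k):
            exposed = df[list(combo)].all(axis=1)
            support = (exposed & df["case"]).mean()
            if support < min_support:
                continue
            a = (exposed & df["case"]).sum()        # exposed cases
            b = (exposed & ~df["case"]).sum()       # exposed controls
            c = (~exposed & df["case"]).sum()       # unexposed cases
            d = (~exposed & ~df["case"]).sum()      # unexposed controls
            if min(a, b, c, d) == 0:
                continue
            rules.append({"rule": combo, "support": support, "OR": (a * d) / (b * c)})
    return pd.DataFrame(rules).sort_values("OR", ascending=False)
```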
ARROWSMITH-P: A prototype expert system for software engineering management
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Ramsey, Connie Loggia
1985-01-01
Although the field of software engineering is relatively new, it can benefit from the use of expert systems. Two prototype expert systems were developed to aid in software engineering management. Given the values for certain metrics, these systems will provide interpretations which explain any abnormal patterns of these values during the development of a software project. The two systems, which solve the same problem, were built using different methods, rule-based deduction and frame-based abduction. A comparison was done to see which method was better suited to the needs of this field. It was found that both systems performed moderately well, but the rule-based deduction system using simple rules provided more complete solutions than did the frame-based abduction system.
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystems. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, one satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using the Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations, and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data are available.
Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Yu, Kai; Shortreed, Susan M.; Pronk, Anjoeka; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Silverman, Debra T.; Friesen, Melissa C.
2014-01-01
Objectives: Evaluating occupational exposures in population-based case-control studies often requires exposure assessors to review each study participant's reported occupational information job-by-job to derive exposure estimates. Although such assessments likely have underlying decision rules, they usually lack transparency, are time-consuming, and have uncertain reliability and validity. We aimed to identify the underlying rules to enable documentation, review, and future use of these expert-based exposure decisions. Methods: Classification and regression trees (CART, predictions from a single tree) and random forests (predictions from many trees) were used to identify the underlying rules from the questionnaire responses and an expert's exposure assignments for occupational diesel exhaust exposure for several metrics: binary exposure probability and ordinal exposure probability, intensity, and frequency. Data were split into training (n=10,488 jobs), testing (n=2,247), and validation (n=2,248) data sets. Results: The CART and random forest models' predictions agreed with 92–94% of the expert's binary probability assignments. For the ordinal probability, intensity, and frequency metrics, the two models extracted decision rules more successfully for unexposed and highly exposed jobs (86–90% and 57–85%, respectively) than for low or medium exposed jobs (7–71%). Conclusions: CART and random forest models extracted decision rules and accurately predicted an expert's exposure decisions for the majority of jobs, and identified questionnaire response patterns that would require further expert review if the rules were applied to other jobs in the same or a different study. This approach makes the exposure assessment process in case-control studies more transparent and creates a mechanism to efficiently replicate exposure decisions in future studies. PMID:23155187
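A minimal sketch of the modeling step under assumed feature coding and hyperparameters (not the study's exact setup): fit a single tree and a random forest to questionnaire responses and the expert's binary assignments, then report agreement on held-out jobs.

```python
# Sketch: extract decision rules from expert exposure assignments with a
# single tree (CART) and a random forest, then measure agreement on a
# held-out set. Feature encoding and parameters are assumptions.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fit_exposure_models(X, y_expert):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_expert, test_size=0.3,
                                              random_state=0, stratify=y_expert)
    cart = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
    forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    return {"CART agreement": accuracy_score(y_te, cart.predict(X_te)),
            "forest agreement": accuracy_score(y_te, forest.predict(X_te))}
```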
A Rule Based Approach to ISS Interior Volume Control and Layout
NASA Technical Reports Server (NTRS)
Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan
2001-01-01
Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.
Recommended metric for tracking visibility progress in the Regional Haze Rule.
Gantt, Brett; Beaver, Melinda; Timin, Brian; Lorang, Phil
2018-05-01
For many national parks and wilderness areas with special air quality protections (Class I areas) in the western United States (U.S.), wildfire smoke and dust events can have a large impact on visibility. The U.S. Environmental Protection Agency's (EPA) 1999 Regional Haze Rule used the 20% haziest days to track visibility changes over time even if they are dominated by smoke or dust. Visibility on the 20% haziest days has remained constant or degraded over the last 16 yr at some Class I areas despite widespread emission reductions from anthropogenic sources. To better track visibility changes specifically associated with anthropogenic pollution sources rather than natural sources, the EPA has revised the Regional Haze Rule to track visibility on the 20% most anthropogenically impaired (hereafter, most impaired) days rather than the haziest days. To support the implementation of this revised requirement, the EPA has proposed (but not finalized) a recommended metric for characterizing the anthropogenic and natural portions of the daily extinction budget at each site. This metric selects the 20% most impaired days based on these portions using a "delta deciview" approach to quantify the deciview scale impact of anthropogenic light extinction. Using this metric, sulfate and nitrate make up the majority of the anthropogenic extinction in 2015 on these days, with natural extinction largely made up of organic carbon mass in the eastern U.S. and a combination of organic carbon mass, dust components, and sea salt in the western U.S. For sites in the western U.S., the seasonality of days selected as the 20% most impaired is different than the seasonality of the 20% haziest days, with many more winter and spring days selected. Applying this new metric to the 2000-2015 period across sites representing Class I areas results in substantial changes in the calculated visibility trend for the northern Rockies and southwest U.S., but little change for the eastern U.S. Changing the approach for tracking visibility in the Regional Haze Rule allows the EPA, states, and the public to track visibility on days when reductions in anthropogenic emissions have the greatest potential to improve the view. The calculations involved with the recommended metric can be incorporated into the routine IMPROVE (Interagency Monitoring of Protected Visual Environments) data processing, enabling rapid analysis of current and future visibility trends. Natural visibility conditions are important in the calculations for the recommended metric, necessitating additional analysis and potential refinement of their values.
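A loosely hedged sketch of the "delta deciview" selection idea, assuming the deciview scale dv = 10 * ln(b_ext / 10) with extinction in inverse megameters and daily impairment taken as the deciview difference between total and natural extinction; the EPA's finalized calculation has details not reproduced here.

```python
# Hedged sketch: rank days by the deciview-scale impact of anthropogenic
# extinction and select the 20% most impaired. Assumes dv = 10*ln(b_ext/10)
# (b_ext in Mm^-1); the EPA's recommended procedure may differ in detail.
import numpy as np
import pandas as pd

def most_impaired_days(daily, frac=0.20):
    """daily: DataFrame with 'natural_ext' and 'anthro_ext' columns in Mm^-1."""
    dv_total = 10 * np.log((daily["natural_ext"] + daily["anthro_ext"]) / 10.0)
    dv_natural = 10 * np.log(daily["natural_ext"] / 10.0)
    impairment = dv_total - dv_natural          # "delta deciview"
    n_select = int(np.ceil(frac * len(daily)))
    return daily.assign(delta_dv=impairment).nlargest(n_select, "delta_dv")
```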
Automated visualization of rule-based models
Tapia, Jose-Juan; Faeder, James R.
2017-01-01
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.
Parsinejad, Payam; Sipahi, Rifat
2017-12-01
In this article, we report results on an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable at the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pNN50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
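pNN50 has a standard definition, the percentage of successive normal-to-normal interbeat intervals differing by more than 50 ms; a short sketch of that calculation (independent of this study's recording setup) follows.

```python
# pNN50: percentage of successive NN (RR) interval differences exceeding 50 ms.
import numpy as np

def pnn50(rr_intervals_ms):
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.abs(np.diff(rr))
    return 100.0 * np.mean(diffs > 50.0)

print(pnn50([800, 790, 860, 855, 900]))  # hypothetical RR series in ms -> 25.0
```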
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics also were superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
Pronk, Anjoeka; Stewart, Patricia A; Coble, Joseph B; Katki, Hormuzd A; Wheeler, David C; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T; Friesen, Melissa C
2012-10-01
Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to questionnaire responses to assess diesel exhaust exposure in the population-based case-control New England Bladder Cancer Study. 2631 participants reported 14 983 jobs; 2749 jobs were administered questionnaires ('modules') with diesel-relevant questions. We applied decision rules to assign exposure metrics based either on the occupational history (OH) responses (OH estimates) or on the module responses (module estimates); we then combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed individually to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module and one-by-one review estimates. The proportion of exposed jobs was 20-25% for all jobs, depending on approach, and 54-60% for jobs with diesel-relevant modules. The OH/module and one-by-one review estimates had moderately high agreement for all jobs (κ(w)=0.68-0.81) and for jobs with diesel-relevant modules (κ(w)=0.62-0.78) for the probability, intensity and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
Montovan, Kathryn J; Karst, Nathaniel; Jones, Laura E; Seeley, Thomas D
2013-11-07
In the beeswax combs of honey bees, the cells of brood, pollen, and honey have a consistent spatial pattern that is sustained throughout the life of a colony. This spatial pattern is believed to emerge from simple behavioral rules that specify how the queen moves, where foragers deposit honey/pollen and how honey/pollen is consumed from cells. Prior work has shown that a set of such rules can explain the formation of the allocation pattern starting from an empty comb. We show that these rules cannot maintain the pattern once the brood start to vacate their cells, and we propose new, biologically realistic rules that better sustain the observed allocation pattern. We analyze the three resulting models by performing hundreds of simulation runs over many gestational periods and a wide range of parameter values. We develop new metrics for pattern assessment and employ them in analyzing pattern retention over each simulation run. Applied to our simulation results, these metrics show that alteration of an accepted model for honey/pollen consumption based on local information can stabilize the cell allocation pattern over time. We also show that adding global information, by biasing the queen's movements towards the center of the comb, expands the parameter regime over which pattern retention occurs. © 2013 Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bornmann, Lutz; Haunschild, Robin
2017-01-01
Bibliometrics is successful in measuring impact because the target is clearly defined: the publishing scientist who is still active and working. Thus, citations are a target-oriented metric which measures impact on science. In contrast, societal impact measurements based on altmetrics are as a rule intended to measure impact in a broad sense on…
Determination of a Screening Metric for High Diversity DNA Libraries.
Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A
2016-01-01
The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and therefore more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
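As a sketch of the "coupon collector"-style reasoning, idealized to equally likely and correct variants (the published metric additionally folds in fidelity and uneven representation, which this sketch ignores), the number of clones to screen so that an expected fraction f of n variants is observed can be computed as follows.

```python
# Idealized coupon-collector estimate: screens needed so the expected
# fraction of n equally likely variants observed reaches the target f.
import math

def screens_needed(n_variants, target_fraction=0.95):
    # Expected coverage after m draws: 1 - (1 - 1/n)^m >= f, solved for m
    return math.ceil(math.log(1.0 - target_fraction) /
                     math.log(1.0 - 1.0 / n_variants))

print(screens_needed(1_000_000, 0.95))  # roughly a 3x oversampling of the library
```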
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with the traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Allometry of sexual size dimorphism in turtles: a comparison of mass and length data.
Regis, Koy W; Meik, Jesse M
2017-01-01
The macroevolutionary pattern of Rensch's Rule (positive allometry of sexual size dimorphism) has had mixed support in turtles. Using the largest carapace length dataset and the only large-scale body mass dataset assembled for this group, we determine (a) whether turtles conform to Rensch's Rule at the order, suborder, and family levels, and (b) whether inferences regarding allometry of sexual size dimorphism differ based on the choice of body size metric used for analyses. We compiled databases of mean body mass and carapace length for males and females for as many populations and species of turtles as possible. We then determined scaling relationships between males and females for average body mass and straight carapace length using traditional and phylogenetic comparative methods. We also used regression analyses to evaluate sex-specific differences in the variance explained by carapace length on body mass. Using traditional (non-phylogenetic) analyses, body mass supports Rensch's Rule, whereas straight carapace length supports isometry. Using phylogenetic independent contrasts, both body mass and straight carapace length support Rensch's Rule with strong congruence between metrics. At the family level, support for Rensch's Rule is more frequent when mass is used and in phylogenetic comparative analyses. Turtles do not differ in slopes of sex-specific mass-to-length regressions and more variance in body size within each sex is explained by mass than by carapace length. Turtles display Rensch's Rule overall and within families of Cryptodires, but not within Pleurodire families. Mass and length are strongly congruent with respect to Rensch's Rule across turtles, and discrepancies are observed mostly at the family level (the level where Rensch's Rule is most often evaluated). At macroevolutionary scales, the purported advantages of length measurements over weight are not supported in turtles.
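A minimal sketch of the standard allometric test (without the paper's phylogenetic contrasts): regress log male size on log female size across species and check whether the slope exceeds 1, the signature of Rensch's Rule. The example values are hypothetical.

```python
# Sketch: ordinary least-squares test of Rensch's Rule. A slope > 1 of
# log(male size) on log(female size) indicates positive allometry of sexual
# size dimorphism. The published analysis also uses phylogenetic methods.
import numpy as np

def rensch_slope(male_size, female_size):
    x = np.log10(np.asarray(female_size, dtype=float))
    y = np.log10(np.asarray(male_size, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return slope

# Hypothetical species means (body mass in g)
print(rensch_slope([120, 340, 900, 2500], [130, 330, 800, 2100]))
```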
Standardized reporting of functioning information on ICF-based common metrics.
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold
2018-02-01
In clinical practice, research, and national health information systems, a variety of clinical data collection tools are used to collect information on people's functioning. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) having a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools, linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life), was entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians and researchers to continue using their tools while still being able to compare and aggregate the information within and across tools.
Pollinator Protection Plans Metrics PPDC Workgroup August 10, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup October 31, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup May 2, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup October 12, 2016
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup February 15, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup June 22, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup October 11, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup January 18, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup July 27, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup September 13, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup April 13, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pollinator Protection Plans Metrics PPDC Workgroup March 15, 2017
This document provides minutes of the PPDC Metrics Workgroup teleconference, including a review of the charge and goals for the workgroup, a review of background materials, establishment of ground rules and operating principles for the group, and related discussion.
Pronk, Anjoeka; Stewart, Patricia A.; Coble, Joseph B.; Katki, Hormuzd A.; Wheeler, David C.; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T.; Friesen, Melissa C.
2012-01-01
Objectives: Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to the questionnaire responses to assess diesel exhaust exposure in the New England Bladder Cancer Study, a population-based case-control study. Methods: 2,631 participants reported 14,983 jobs; 2,749 jobs were administered questionnaires (‘modules’) with diesel-relevant questions. We applied decision rules to assign exposure metrics based solely on the occupational history responses (OH estimates) and based on the module responses (module estimates); we combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed one at a time to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module, and one-by-one review estimates. Results: The proportion of exposed jobs was 20–25% for all jobs, depending on approach, and 54–60% for jobs with diesel-relevant modules. The OH/module and one-by-one review had moderately high agreement for all jobs (κw=0.68–0.81) and for jobs with diesel-relevant modules (κw=0.62–0.78) for the probability, intensity, and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. Conclusions: The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies. PMID:22843440
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
Zhang, Jie; Wang, Yuping; Feng, Junhong
2013-01-01
In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It only needs to scan the database once to create an attribute index for each attribute. All metric values used to evaluate an association rule then no longer require scanning the database, but are acquired solely by means of the attribute indices. The paper casts association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute-index and uniform-design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require a user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and time consumption. PMID:23766683
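A compact sketch of the attribute-index idea: build an inverted index from attribute values to transaction IDs once, then obtain rule support and confidence from set intersections instead of rescanning the database. The data layout is hypothetical, and the code is not the paper's IUARMMEA implementation.

```python
# Sketch: build an inverted attribute index once, then compute rule support
# and confidence by set intersection instead of rescanning the database.
from collections import defaultdict

def build_index(transactions):
    """transactions: list of dicts {attribute: value}."""
    index = defaultdict(set)
    for tid, record in enumerate(transactions):
        for attr, value in record.items():
            index[(attr, value)].add(tid)
    return index

def rule_metrics(index, antecedent, consequent, n_transactions):
    """antecedent/consequent: non-empty iterables of (attribute, value) pairs."""
    ante_ids = set.intersection(*(index[item] for item in antecedent))
    rule_ids = ante_ids.intersection(*(index[item] for item in consequent))
    support = len(rule_ids) / n_transactions
    confidence = len(rule_ids) / len(ante_ids) if ante_ids else 0.0
    return support, confidence

db = [{"weather": "sunny", "play": "yes"},
      {"weather": "rain", "play": "no"},
      {"weather": "sunny", "play": "yes"}]
idx = build_index(db)
print(rule_metrics(idx, [("weather", "sunny")], [("play", "yes")], len(db)))
```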
NASA Technical Reports Server (NTRS)
Neal, Ralph D.
1996-01-01
This paper looks closely at each of the software metrics generated by the McCabe Object-Oriented Tool(TM) and its ability to convey timely information to developers. The metrics are examined for meaningfulness in terms of the scale assignable to the metric by the rules of measurement theory and the software dimension being measured. Recommendations are made as to the proper use of each metric and its ability to influence development at an early stage. The metrics of the McCabe Object-Oriented Tool(TM) set were selected because of the tool's use in a couple of NASA IV&V projects.
NASA Astrophysics Data System (ADS)
Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao
2017-08-01
Coordination can be deemed the result of interindividual interaction among natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m, closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distance. Significantly, the density of the spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along individual velocity. Thus, it is found that in small-sized bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer time-varying neighbors, rather than interacting with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and decision-making principle during circular flights of pigeon flocks.
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Continuous theory of active matter systems with metric-free interactions.
Peshkov, Anton; Ngo, Sandrine; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-08-31
We derive a hydrodynamic description of metric-free active matter: starting from self-propelled particles aligning with neighbors defined by "topological" rules, not metric zones-a situation advocated recently to be relevant for bird flocks, fish schools, and crowds-we use a kinetic approach to obtain well-controlled nonlinear field equations. We show that the density-independent collision rate per particle characteristic of topological interactions suppresses the linear instability of the homogeneous ordered phase and the nonlinear density segregation generically present near threshold in metric models, in agreement with microscopic simulations.
Failure criterion for materials with spatially correlated mechanical properties
NASA Astrophysics Data System (ADS)
Faillettaz, J.; Or, D.
2015-03-01
The role of spatially correlated mechanical elements in the failure behavior of heterogeneous materials represented by fiber bundle models (FBMs) was evaluated systematically for different load redistribution rules. Increasing the range of spatial correlation for FBMs with local load sharing is marked by a transition from ductilelike failure characteristics into brittlelike failure. The study identified a global failure criterion based on macroscopic properties (external load and cumulative damage) that is independent of spatial correlation or load redistribution rules. This general metric could be applied to assess the mechanical stability of complex and heterogeneous systems and thus provide an important component for early warning of a class of geophysical ruptures.
Vowel Harmony in Palestinian Arabic: A Metrical Perspective.
ERIC Educational Resources Information Center
Abu-Salim, I. M.
1987-01-01
The autosegmental rule of vowel harmony (VH) in Palestinian Arabic is shown to be constrained simultaneously by metrical and segmental boundaries. The indicative prefix bi- is no longer an exception to VH if a structure is assumed that disallows the prefix from sharing a foot with the stem, consequently blocking VH. (Author/LMO)
2011-01-24
[Presentation text garbled in the source record. Recoverable content: slides from the 2011 Military Health System Conference ("Sharing Knowledge: Achieving Breakthrough Performance", 24 January 2011) on Army incentives for the Patient-Centered Medical Home, covering performance metrics for community-based medical homes, increasing primary care market share and enrollment, operating at an economic advantage to DoD, improving ER/UCC usage and utilization rates, and associated business rules.]
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
...: Temporary rule; correction. SUMMARY: NMFS is correcting a temporary rule to adjust the 2010 fishing year (FY... revised Trimester 3 quota. The correct value for the revised Trimester 3 quota of 23,743,619 lb is 10,770... equivalent as 13,770 mt. The corrected metric equivalent is 10,770 mt. Correction In rule FR Doc. 2010-15933...
De la Sen, Manuel; Abbas, Mujahid; Saleem, Naeem
2016-01-01
This paper discusses some convergence properties in fuzzy ordered proximal approaches defined by [Formula: see text]-sequences of pairs, where [Formula: see text] is a surjective self-mapping and [Formula: see text], where A and B are nonempty subsets of an abstract nonempty set X and [Formula: see text] is a partially ordered non-Archimedean fuzzy metric space which is endowed with a fuzzy metric M, a triangular norm *, and an ordering [Formula: see text]. The fuzzy set M takes values in a sequence or set [Formula: see text], where the elements of the so-called switching rule [Formula: see text] are defined from [Formula: see text] to a subset of [Formula: see text]. Such a switching rule selects a particular realization of M at the nth iteration and is parameterized by a growth evolution sequence [Formula: see text] and a sequence or set [Formula: see text] which belongs to the so-called [Formula: see text]-lower-bounding mappings, which are defined from [0, 1] to [0, 1]. Some application examples concerning discrete systems under switching rules and best approximation solvability of algebraic equations are discussed.
Iqbal, Sahar; Mustansar, Tazeen
2017-03-01
Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, improving quality by addressing errors after identification. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators. Results of the sigma metric analysis were used to identify gaps and the need for modification in the strategy of the laboratory quality control procedure. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered to be 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric was found to be <3. The lowest sigma value was found for chloride (1.1) at L2. The highest sigma value was found for creatinine (10.1) at L3. HDL had the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, the application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.
Optimal pattern distributions in Rete-based production systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1994-01-01
Since its introduction into the AI community in the early 1980s, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can conflict. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.
More Metric Measurement Concepts. Fundamentals of Occupational Mathematics. Module 10.
ERIC Educational Resources Information Center
Engelbrecht, Nancy; And Others
This module is the 10th in a series of 12 learning modules designed to teach occupational mathematics. Blocks of informative material and rules are followed by examples and practice problems. The solutions to the practice problems are found at the end of the module. Specific topics covered include the metric concepts of mass, weight, and volume…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
... (CPS) Fishery Management Plan (FMP). The 2012 maximum HG for Pacific sardine is 109,409 metric tons (mt... framework in the FMP. This framework includes a harvest control rule that determines the maximum HG, the... 109,409 metric tons (mt) for the 2012 Pacific sardine fishing year. These catch specifications are...
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, which were designed to automate functions and decisions associated with a combat aircraft's subsystems, are discussed. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Simulation and comparative workload results for two mission scenarios are given. The scenarios are an inbound surface-to-air-missile attack on the aircraft and pilot incapacitation. The methodology used to develop the AUTOCREW knowledge bases is summarized. Issues involved in designing the navigation sensor selection expert in AUTOCREW's NAVIGATOR knowledge base are discussed in detail. The performance of seven navigation systems aiding a medium-accuracy INS was investigated using Kalman filter covariance analyses. A navigation sensor management (NSM) expert system was formulated from covariance simulation data using the analysis of variance (ANOVA) method and the ID3 algorithm. ANOVA results show that statistically different position accuracies are obtained when different navaids are used and when the number of navaids aiding the INS, the aircraft's trajectory, and the performance history are varied. The ID3 algorithm determines the NSM expert's classification rules in the form of decision trees. The performance of these decision trees was assessed on two arbitrary trajectories, and the results demonstrate that the NSM expert adapts to new situations and provides reasonable estimates of the expected hybrid performance.
NASA Astrophysics Data System (ADS)
Shelef, Eitan; Hilley, George E.
2013-12-01
Flow routing across real or modeled topography determines the modeled discharge and wetness index and thus plays a central role in predicting surface lowering rate, runoff generation, likelihood of slope failure, and the transition from hillslope to channel forming processes. In this contribution, we compare commonly used flow-routing rules, as well as a new routing rule, against commonly used benchmarks. We also compare results for different routing rules using Airborne Laser Swath Mapping (ALSM) topography to explore the impact of different flow-routing schemes on inferring the generation and location of saturation overland flow and the transition from hillslope to channel forming processes. Finally, we examine the impact of flow-routing and slope-calculation rules on modeled topography produced by Geomorphic Transport Law (GTL)-based simulations. We found that different rules produce substantive differences in the structure of the modeled topography and flow patterns over ALSM data. Our results highlight the impact of flow-routing and slope-calculation rules on modeled topography, as well as on calculated geomorphic metrics across real landscapes. As such, studies that use a variety of routing rules to analyze and simulate topography are necessary to determine those aspects that most strongly depend on a chosen routing rule.
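As a concrete illustration of one commonly used routing rule (not the new rule proposed here), the sketch below runs a single-direction, D8-style steepest-descent routing over a tiny synthetic grid and accumulates contributing area; the elevations are made up.

```python
import numpy as np

# Minimal D8-style single-flow-direction routing on a tiny synthetic DEM.
# Each cell sends all of its area to the steepest downslope neighbor
# (8-connectivity); contributing area is accumulated by visiting cells
# from highest to lowest elevation. Illustrative only.

dem = np.array([[5.0, 4.0, 3.0],
                [4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0]])

rows, cols = dem.shape
area = np.ones_like(dem)                  # each cell contributes one unit of area
order = np.argsort(dem, axis=None)[::-1]  # process from highest to lowest

neighbors = [(-1, -1), (-1, 0), (-1, 1),
             (0, -1),           (0, 1),
             (1, -1),  (1, 0),  (1, 1)]

for flat in order:
    r, c = divmod(flat, cols)
    best, target = 0.0, None
    for dr, dc in neighbors:
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
            if drop > best:
                best, target = drop, (rr, cc)
    if target is not None:                # pits and the outlet keep their area
        area[target] += area[r, c]

print(area)   # accumulated contributing area per cell
```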
NASA Astrophysics Data System (ADS)
Adams, L. E.; Lund, J. R.; Moyle, P. B.; Quiñones, R. M.; Herman, J. D.; O'Rear, T. A.
2017-09-01
Building reservoir release schedules to manage engineered river systems can involve costly trade-offs between storing and releasing water. As a result, the design of release schedules requires metrics that quantify the benefit and damages created by releases to the downstream ecosystem. Such metrics should support making operational decisions under uncertain hydrologic conditions, including drought and flood seasons. This study addresses this need and develops a reservoir operation rule structure and method to maximize downstream environmental benefit while meeting human water demands. The result is a general approach for hedging downstream environmental objectives. A multistage stochastic mixed-integer nonlinear program with Markov chains identifies optimal "environmental hedging" releases that maximize environmental benefits subject to probabilistic seasonal hydrologic conditions, current, past, and future environmental demand, human water supply needs, infrastructure limitations, population dynamics, drought storage protection, and the river's carrying capacity. Environmental hedging "hedges bets" for drought by reducing releases for fish, sometimes intentionally killing some fish early to reduce the likelihood of large fish kills and storage crises later. This approach is applied to Folsom reservoir in California to support survival of fall-run Chinook salmon in the lower American River for a range of carryover and initial storage cases. Benefit is measured in terms of fish survival; maintaining self-sustaining native fish populations is a significant indicator of ecosystem function. Environmental hedging meets human demand and outperforms other operating rules, including the current Folsom operating strategy, based on metrics of fish extirpation and water supply reliability.
Evaluating the Rule of 10s in Cleft Lip Repair: Do Data Support Dogma?
Chow, Ian; Purnell, Chad A; Hanwright, Philip J; Gosain, Arun K
2016-09-01
Cleft lip represents one of the most common birth defects in the world. Although the timing of cleft lip repair is contingent on a number of factors, the "rule of 10s" remains a frequently quoted safety benchmark. Initially reported by Wilhelmsen and Musgrave in 1966 and modified by Millard in 1976, this rule referred to performing surgery once patients had reached cutoffs in weight, hemoglobin, and age/leukocyte count. Despite significant advances in both surgical and anesthetic technique, the oft-quoted "rule of 10s" has not been systematically investigated since its inception. Patients who underwent primary cleft lip repair were identified from the National Surgical Quality Improvement Program Pediatric database. Multivariate logistic regression models were used to determine the independent effect of each rule of 10 metric or violation of the rule of 10s as a whole on postoperative complications, and to determine independent risk factors for complications in cleft lip surgery. One thousand three hundred thirteen patients met inclusion criteria, with a 3.6 percent complication rate. Of the included patients, 151 (11.5 percent) violated at least one facet of the rule of 10s. Other than patient weight, neither the rule of 10s nor any individual metric was significantly predictive of postoperative complications. Since its introduction nearly a half century ago, the risks associated with performing surgery in patients who violate the rule of 10s have undergone dramatic reductions. This analysis highlights the need to continually validate and evaluate dogma as the field continues to advance. Risk, III.
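For clarity, the cutoffs usually quoted for the rule (about 10 weeks of age, 10 lb of body weight, and 10 g/dL of hemoglobin, with the leukocyte count sometimes substituted for age) can be written as a simple screening check; the thresholds below follow that common formulation and are illustrative only, not clinical guidance.

```python
# Minimal sketch of a "rule of 10s" screening check as commonly quoted:
# age >= 10 weeks, weight >= 10 lb, hemoglobin >= 10 g/dL.
# Thresholds are illustrative only and not clinical guidance.

def violates_rule_of_10s(age_weeks: float, weight_lb: float, hemoglobin_g_dl: float) -> bool:
    """Return True if the patient fails any facet of the rule of 10s."""
    return age_weeks < 10 or weight_lb < 10 or hemoglobin_g_dl < 10

print(violates_rule_of_10s(age_weeks=12, weight_lb=11.5, hemoglobin_g_dl=10.4))  # False
print(violates_rule_of_10s(age_weeks=8,  weight_lb=10.2, hemoglobin_g_dl=11.0))  # True
```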
USDA-ARS?s Scientific Manuscript database
Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four ye...
Is the Lorentz signature of the metric of spacetime electromagnetic in origin?
NASA Astrophysics Data System (ADS)
Itin, Yakov; Hehl, Friedrich W.
2004-07-01
We formulate a premetric version of classical electrodynamics in terms of the excitation H = (H, D) and the field strength F = (E, B). A local, linear, and symmetric spacetime relation between H and F is assumed. It yields, if electric/magnetic reciprocity is postulated, a Lorentzian metric of spacetime thereby excluding Euclidean signature (which is, nevertheless, discussed in some detail). Moreover, we determine the Dufay law (repulsion of like charges and attraction of opposite ones), the Lenz rule (the relative sign in Faraday's law), and the sign of the electromagnetic energy. In this way, we get a systematic understanding of the sign rules and the sign conventions in electrodynamics. The question in the title of the paper is answered affirmatively.
Parent-based diagnosis of ADHD is as accurate as a teacher-based diagnosis of ADHD.
Bied, Adam; Biederman, Joseph; Faraone, Stephen
2017-04-01
To review the literature evaluating the psychometric properties of parent and teacher informants relative to a gold-standard ADHD diagnosis in pediatric populations. We included studies with both a parent and a teacher informant, a gold-standard diagnosis, and diagnostic accuracy metrics. Potential confounds were evaluated. We also assessed the 'OR' and the 'AND' rules for combining informant reports. Eight articles met inclusion criteria. The diagnostic accuracy for predicting gold-standard ADHD diagnoses did not differ between parents and teachers. Sample size, sample type, participant drop-out, participant age, participant gender, geographic area of the study, and date of study publication were assessed as potential confounds. Parents and teachers both yielded moderate to good diagnostic accuracy for ADHD diagnoses. Parent reports were statistically indistinguishable from those of teachers. The predictive features of the 'OR' and 'AND' rules are useful in evaluating approaches to better integrating information from these informants.
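A small sketch of the 'OR' and 'AND' combination rules assessed above, applied to binary parent and teacher reports; the toy labels are invented.

```python
# Combining two informant reports with the 'OR' and 'AND' rules.
# OR: positive if either informant reports ADHD (raises sensitivity).
# AND: positive only if both informants agree (raises specificity).
# The toy labels below are invented for illustration.

parent  = [1, 1, 0, 0, 1, 0]
teacher = [1, 0, 0, 1, 1, 0]
gold    = [1, 1, 0, 0, 1, 1]   # gold-standard diagnosis

or_rule  = [int(p or t)  for p, t in zip(parent, teacher)]
and_rule = [int(p and t) for p, t in zip(parent, teacher)]

def sensitivity_specificity(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp / (tp + fn), tn / (tn + fp)

print("OR rule :", sensitivity_specificity(or_rule, gold))
print("AND rule:", sensitivity_specificity(and_rule, gold))
```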
Alternative Energy for Defense Conference
2011-10-26
Actuated cooling and cogeneration systems: beginning TRL 3/4, end goal TRL 5; metrics: COP 0.7, 45 kg/ton. US Army CERDEC applications: portable power NOW... Provided power: not government owned/operated, commercial grade, capacities vary. Rules of thumb: 3 kW/person/day (bases with 5 to 3,500 population); 4 kW... provide continuous rated power at these conditions: 0.8 power factor (pf), lagging; ambient temperatures up to 52°C (125°F) [-3% for each
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... expire December 9, 2013, unless the sunset clause is removed. NMFS seeks public comment on the Proposed Rule to eliminate the sunset clause and on metrics for assessing the long term costs and benefits of... additional cost to the affected public. Notwithstanding any other provision of the law, no person is required...
What are the Ingredients of a Scientifically and Policy-Relevant Hydrologic Connectivity Metric?
NASA Astrophysics Data System (ADS)
Ali, G.; English, C.; McCullough, G.; Stainton, M.
2014-12-01
While the concept of hydrologic connectivity is of significant importance to both researchers and policy makers, there is no consensus on how to express it in quantitative terms. This lack of consensus was further exacerbated by recent rulings of the U.S. Supreme Court that rely on the idea of "significant nexuses": critical degrees of landscape connectivity now have to be demonstrated to warrant environmental protection under the Clean Water Act. Several indicators of connectivity have been suggested in the literature, but they are often computationally intensive and require soil water content information, a requirement that makes them inapplicable over large, data-poor areas for which management decisions are needed. Here our objective was to assess the extent to which the concept of connectivity could become more operational by: 1) drafting a list of potential, watershed-scale connectivity metrics; 2) establishing a list of criteria for ranking the performance of those metrics; 3) testing them in various landscapes. Our focus was on a dozen agricultural Prairie watersheds where the interaction between near-level topography, perennial and intermittent streams, pothole wetlands and man-made drains renders the estimation of connectivity difficult. A simple procedure was used to convert RADARSAT images, collected between 1997 and 2011, into binary maps of saturated versus non-saturated areas. Several pattern-based and graph-theoretic metrics were then computed for a dynamic assessment of connectivity. The metrics' performance was compared with regard to their sensitivity to antecedent precipitation, their correlation with watershed discharge, and their ability to portray aggregation effects. Results show that no single connectivity metric could satisfy all our performance criteria. Graph-theoretic metrics, however, seemed to perform better in pothole-dominated watersheds, thus highlighting the need for region-specific connectivity assessment frameworks.
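As a minimal example of the kind of pattern-based connectivity metric discussed above, the sketch below computes the fraction of saturated cells that fall in the largest connected saturated patch of a binary map; it is a generic illustration rather than one of the specific metrics evaluated in the study.

```python
import numpy as np
from scipy import ndimage

# A simple pattern-based connectivity metric on a binary saturation map:
# the fraction of saturated cells that sit in the largest connected
# saturated patch (8-connectivity). The map below is synthetic.

saturated = np.array([[1, 1, 0, 0, 0],
                      [1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1],
                      [0, 1, 0, 0, 0]], dtype=bool)

structure = np.ones((3, 3), dtype=int)          # 8-connectivity
labels, n_patches = ndimage.label(saturated, structure=structure)
patch_sizes = np.bincount(labels.ravel())[1:]   # drop the background label 0

connectivity_index = patch_sizes.max() / saturated.sum()
print(f"{n_patches} patches, largest-patch fraction = {connectivity_index:.2f}")
```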
Evaluation of Potential Continuation Rules for Mepolizumab Treatment of Severe Eosinophilic Asthma.
Gunsoy, Necdet B; Cockle, Sarah M; Yancey, Steven W; Keene, Oliver N; Bradford, Eric S; Albers, Frank C; Pavord, Ian D
Mepolizumab significantly reduces exacerbations in patients with severe eosinophilic asthma. The early identification of patients likely to receive long-term benefit from treatment could ensure effective resource allocation. To assess potential continuation rules for mepolizumab in addition to initiation criteria defined as 2 or more exacerbations in the previous year and blood eosinophil counts of 150 cells/μL or more at initiation or 300 cells/μL or more in the previous year. This post hoc analysis included data from 2 randomized, double-blind, placebo-controlled studies (NCT01000506 and NCT01691521) of mepolizumab in patients with severe eosinophilic asthma (N = 1,192). Rules based on blood eosinophils, physician-rated response to treatment, FEV1, Asthma Control Questionnaire (ACQ-5) score, and exacerbation reduction were assessed at week 16. To assess these rules, 2 key metrics accounting for the effects observed in the placebo arm were developed. Patients not meeting continuation rules based on physician-rated response, FEV1, and the ACQ-5 score still derived long-term benefit from mepolizumab. Nearly all patients failing to reduce blood eosinophils had counts of 150 cells/μL or less at baseline. For exacerbations, assessment after 16 weeks was potentially premature for predicting future exacerbations. There was no evidence of a reliable physician-rated response, ACQ-5 score, or lung function-based continuation rule. The added value of changes in blood eosinophils at week 16 over baseline was marginal. Initiation criteria for mepolizumab treatment provide the best method for assessing patient benefit from mepolizumab treatment, and treatment continuation should be reviewed on the basis of a predefined reduction in long-term exacerbation frequency and/or oral corticosteroid dose. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Program Monitoring with LTL in EAGLE
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2004-01-01
We briefly present a rule-based framework called EAGLE, shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialization of EAGLE. For an initial formula of size m, we establish upper bounds of O(m^2 2^m log m) and O(m^4 2^(2m) log^2 m) for the space and time complexity, respectively, of single-step evaluation over an input trace. These bounds are close to the known lower bound of O(2^√m) for future-time LTL. EAGLE has been successfully used, in both LTL and metric LTL forms, to test a real-time controller of an experimental NASA planetary rover.
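To illustrate the general style of state-by-state monitoring by formula rewriting that such logics support, the sketch below progresses a tiny future-time LTL fragment over a finite trace; it is a simplified toy, not the EAGLE calculus or its implementation.

```python
# A minimal finite-trace LTL monitor based on formula progression (rewriting):
# after each state the formula is rewritten into what must hold of the rest of
# the trace. This is only a toy illustration of the rewriting style of
# monitoring; it is not the EAGLE calculus or its implementation.

TRUE, FALSE = ("true",), ("false",)

def simplify(op, a, b):
    if op == "and":
        if a == FALSE or b == FALSE: return FALSE
        if a == TRUE: return b
        if b == TRUE: return a
    else:  # "or"
        if a == TRUE or b == TRUE: return TRUE
        if a == FALSE: return b
        if b == FALSE: return a
    return (op, a, b)

def progress(f, state):
    """Rewrite formula f against one state (a set of true atomic propositions)."""
    tag = f[0]
    if tag in ("true", "false"):
        return f
    if tag == "atom":
        return TRUE if f[1] in state else FALSE
    if tag == "next":
        return f[1]
    if tag in ("and", "or"):
        return simplify(tag, progress(f[1], state), progress(f[2], state))
    if tag == "always":       # G phi  ==  phi and X G phi
        return simplify("and", progress(f[1], state), f)
    if tag == "eventually":   # F phi  ==  phi or X F phi
        return simplify("or", progress(f[1], state), f)
    raise ValueError(f"unknown operator {tag}")

def at_end(f):
    """Finite-trace verdict for the residual formula when the trace ends."""
    tag = f[0]
    if tag == "true": return True
    if tag == "false": return False
    if tag in ("atom", "next", "eventually"): return False
    if tag == "always": return True
    if tag == "and": return at_end(f[1]) and at_end(f[2])
    return at_end(f[1]) or at_end(f[2])   # "or"

def monitor(formula, trace):
    for state in trace:
        formula = progress(formula, state)
    return at_end(formula)

# G(request -> F grant), written without implication by using an explicit
# "no_request" atom in place of the negation, purely for brevity.
spec = ("always", ("or", ("atom", "no_request"), ("eventually", ("atom", "grant"))))
trace = [{"no_request"}, {"request"}, {"grant"}]
print(monitor(spec, trace))   # True: every request is eventually granted
```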
Hong, Eun-Mi; Shelton, Daniel; Pachepsky, Yakov A; Nam, Won-Ho; Coppock, Cary; Muirhead, Richard
2017-02-01
Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four years to adequately evaluate the quality of water intended for produce irrigation. The objective of this work was to evaluate the effect of interannual weather variability on surface water microbial quality. We used the Soil and Water Assessment Tool model to simulate E. coli concentrations in the Little Cove Creek; this is a perennial creek located in an agricultural watershed in south-eastern Pennsylvania. The model performance was evaluated using the US FDA regulatory microbial water quality metrics of geometric mean (GM) and the statistical threshold value (STV). Using the 90-year time series of weather observations, we simulated and randomly sampled the time series of E. coli concentrations. We found that weather conditions of a specific year may strongly affect the evaluation of microbial quality and that the long-term assessment of microbial water quality may be quite different from the evaluation based on short-term observations. The variations in microbial concentrations and water quality metrics were affected by location, wetness of the hydrological years, and seasonality, with 15.7-70.1% of samples exceeding the regulatory threshold. The results of this work demonstrate the value of using modeling to design and evaluate monitoring protocols to assess the microbial quality of water used for produce irrigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
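The two regulatory metrics named above can be computed from a set of E. coli samples roughly as sketched below, using the usual log10-based calculation with the STV taken as an estimate of the 90th percentile of a lognormal fit; the sample values and the comparison thresholds shown are illustrative assumptions, not quotes from the rule.

```python
import numpy as np

# Sketch of the geometric mean (GM) and statistical threshold value (STV)
# calculation for E. coli samples (CFU/100 mL). The STV is estimated here as
# the 90th percentile of a lognormal fit to the data (mean + 1.282 sd in log10
# space). Sample values and the comparison thresholds are illustrative only.

samples = np.array([35, 120, 60, 410, 90, 15, 200, 75, 130, 48,
                    22, 310, 66, 95, 180, 40, 55, 250, 88, 110], dtype=float)

log10_samples = np.log10(samples)
gm  = 10 ** log10_samples.mean()
stv = 10 ** (log10_samples.mean() + 1.282 * log10_samples.std(ddof=1))

print(f"GM  = {gm:.0f} CFU/100 mL (illustrative criterion: <= 126)")
print(f"STV = {stv:.0f} CFU/100 mL (illustrative criterion: <= 410)")
```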
Four rules for taking your message to Wall Street.
Hutton, A
2001-05-01
Managers fail to communicate effectively with Wall Street for all sorts of reasons. But neglecting the investment community--particularly the analysts whose opinions shape the market and whose recommendations often make or break a company's share price--can knock the most carefully conceived and brilliantly executed strategy off course. The companies that struggle the most with providing good information to analysts are those in rapidly evolving industries, where the gap between traditional performance metrics and economic realities is at its widest. In these industries, a company's strategy and the variables that govern its performance can change radically in a short time. What's more, the metrics used to report performance often fail to capture the drivers of value in today's information economy. Few accounting measures are helpful when it comes to assessing the intangible assets--knowledge, skilled employees, and so forth--on which many of today's fastest-growing companies build their strategies. According to Amy Hutton, an associate professor at Harvard Business School, there are four basic rules for clear communications with Wall Street. First, make sure that your company's financial reporting reflects your strategy as closely as possible. Second, popularize the nonfinancial metrics that best predict--and flatter--the performance of your businesses. Third, appoint managers with recognized credibility to your strategic operations. Finally, cultivate the market experts who cover the industries in which you seek to compete. Hutton shows how AOL successfully followed these rules as it significantly changed its strategic direction and competitive arena.
The MiPACQ Clinical Question Answering System
Cairns, Brian L.; Nielsen, Rodney D.; Masanz, James J.; Martin, James H.; Palmer, Martha S.; Ward, Wayne H.; Savova, Guergana K.
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system’s architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation. PMID:22195068
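For reference, the ranking metrics cited above (Precision at One and mean reciprocal rank) can be computed from ranked answer lists as sketched below; the toy queries are invented.

```python
# Sketch of Precision at One (P@1) and Mean Reciprocal Rank (MRR) for a QA
# system. Each query is represented by the rank positions (1-based) at which
# relevant answers appear in the returned list; the toy data are invented.

ranked_relevance = [
    [1, 4],      # query 1: relevant answers at ranks 1 and 4
    [3],         # query 2: first relevant answer at rank 3
    [],          # query 3: no relevant answer retrieved
    [2, 5, 9],   # query 4
]

p_at_1 = sum(1 for ranks in ranked_relevance if 1 in ranks) / len(ranked_relevance)
mrr = sum(1.0 / min(ranks) for ranks in ranked_relevance if ranks) / len(ranked_relevance)

print(f"P@1 = {p_at_1:.2f}, MRR = {mrr:.2f}")
```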
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
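The inconsistency discussed above can be made concrete with a toy posterior over strings: under 0-1 cost the Bayes rule picks the single most probable string, while under a symbol-error (Hamming) cost with equal-length strings it reduces to a per-position decision from the marginal posteriors. The probabilities below are invented.

```python
# Toy illustration of the Bayes decision rule under two cost functions.
# With 0-1 cost the decision is the MAP string; with a symbol-error (Hamming)
# cost and equal-length strings, the minimum-expected-cost decision is made
# per position from the marginal posteriors. Probabilities are invented.

posterior = {          # posterior over candidate strings given the observation
    "bat": 0.4,
    "cab": 0.3,
    "cat": 0.3,
}

# 0-1 cost: pick the most probable string (minimizes string error).
map_string = max(posterior, key=posterior.get)

# Symbol-error cost: pick, at each position, the symbol with the largest
# marginal posterior (minimizes the expected number of wrong symbols).
length = len(next(iter(posterior)))
per_position = []
for i in range(length):
    marginals = {}
    for string, prob in posterior.items():
        marginals[string[i]] = marginals.get(string[i], 0.0) + prob
    per_position.append(max(marginals, key=marginals.get))
min_symbol_error_string = "".join(per_position)

print("0-1 cost decision:         ", map_string)                # 'bat'
print("symbol-error cost decision:", min_symbol_error_string)   # 'cat', differs from MAP
```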
NASA Astrophysics Data System (ADS)
Poobalasubramanian, Mangalraj; Agrawal, Anupam
2016-10-01
The presented work proposes the fusion of panchromatic and multispectral images in a shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system effective in terms of spatial enhancement. A luminance-hue-saturation-based color conversion is used to avoid spectral distortions. The proposed fusion method is tested on Worldview2 and Ikonos datasets and compared against other methodologies; it performs well against the compared methods in terms of both subjective and objective evaluations.
EAGLE Monitors by Collecting Facts and Generating Obligations
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2003-01-01
We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks if a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process in sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge about the group structure of the dictionary: by exploiting how the dictionary expresses the signal, it can automatically find the potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes an activity-level judgement on the structure information when the images are merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
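A minimal sketch of an l1-norm maximum fusion rule over grouped sparse coefficients, in which the source whose coefficient group has the larger l1 norm (higher activity level) is kept for each patch; this is a generic simplification of the rule described above, and the dictionary and coefficients are random placeholders rather than learned quantities.

```python
import numpy as np

# Minimal sketch of an l1-norm maximum fusion rule over grouped sparse
# coefficients. For each image patch, the activity level of a source image is
# measured by the l1 norm of its sparse coefficient vector; the patch whose
# source has the larger l1 norm is kept. Dictionary and coefficients here are
# random placeholders rather than learned quantities.

rng = np.random.default_rng(0)
n_atoms, n_patches = 64, 10

dictionary = rng.standard_normal((49, n_atoms))        # 7x7 patches, 64 atoms
coeffs_a = rng.standard_normal((n_atoms, n_patches)) * (rng.random((n_atoms, n_patches)) < 0.1)
coeffs_b = rng.standard_normal((n_atoms, n_patches)) * (rng.random((n_atoms, n_patches)) < 0.1)

activity_a = np.abs(coeffs_a).sum(axis=0)              # l1 norm per patch
activity_b = np.abs(coeffs_b).sum(axis=0)

fused_coeffs = np.where(activity_a >= activity_b, coeffs_a, coeffs_b)
fused_patches = dictionary @ fused_coeffs              # reconstruct fused patches

print(fused_patches.shape)   # (49, n_patches)
```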
Metric traffic signal design manual
DOT National Transportation Integrated Search
2003-03-01
This manual is for information purposes only and may be used to aid new employees, and those unfamiliar with ODOT Traffic Engineering practices, in accessing and applying applicable standards, statutes, rules, and policies related to railroad preempt...
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to develop more reasoned, requirements-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
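To make the limit-state idea concrete, the sketch below propagates assumed dispersions through a single-degree-of-freedom spring-mass model and estimates the probability that its natural frequency stays inside an assumed ±5% requirement band; all numbers are invented for illustration and are not the SLS values.

```python
import numpy as np

# Monte Carlo sketch of a limit-state check on a single spring-mass oscillator:
# propagate assumed dispersions in stiffness k and mass m to the natural
# frequency f = sqrt(k/m) / (2*pi), then estimate the probability that f stays
# within an assumed +/-5% requirement band around the nominal value.
# All parameter values and dispersions are invented for illustration.

rng = np.random.default_rng(42)
n_samples = 100_000

k_nom, m_nom = 1.0e6, 250.0                     # N/m, kg (illustrative)
k = rng.normal(k_nom, 0.05 * k_nom, n_samples)  # 5% stiffness dispersion
m = rng.normal(m_nom, 0.02 * m_nom, n_samples)  # 2% mass dispersion

f = np.sqrt(k / m) / (2.0 * np.pi)
f_nom = np.sqrt(k_nom / m_nom) / (2.0 * np.pi)

# Limit state: |f - f_nom| / f_nom <= 0.05 (requirement met)
within_requirement = np.abs(f - f_nom) / f_nom <= 0.05
print(f"nominal frequency: {f_nom:.1f} Hz")
print(f"probability of meeting the requirement: {within_requirement.mean():.3f}")
```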
Synchronization of multi-agent systems with metric-topological interactions.
Wang, Lin; Chen, Guanrong
2016-09-01
A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all the agents within a circle of a constant radius, and can furthermore interact with some distant agents to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions imposed only on system parameters and agent initial states are presented, which ensure achieving synchronization of the whole group of agents. It reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed is investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. In practically all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving the highest frequency of synchronization, the fastest convergence rate, and the smallest heading difference.
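The hybrid neighbor-selection rule described above can be sketched as follows: take every agent within a fixed radius and, if that yields fewer than a pre-assigned number of neighbors, add the nearest remaining agents until the count is reached. The positions, radius, and neighbor count below are arbitrary.

```python
import numpy as np

# Sketch of the hybrid metric-topological neighbor rule: an agent interacts
# with every agent inside a fixed radius (metric rule) and, if those are fewer
# than a pre-assigned number k, additionally with the nearest remaining agents
# until k neighbors are reached (topological rule). Values are arbitrary.

def hybrid_neighbors(positions, agent, radius, k):
    deltas = positions - positions[agent]
    dists = np.hypot(deltas[:, 0], deltas[:, 1])
    dists[agent] = np.inf                        # exclude the agent itself
    metric = np.flatnonzero(dists <= radius)     # everyone within the radius
    if len(metric) >= k:
        return metric
    # top up with the nearest agents outside the radius
    order = np.argsort(dists)
    return order[:k]

rng = np.random.default_rng(1)
positions = rng.random((20, 2)) * 10.0           # 20 agents in a 10x10 plane

print(hybrid_neighbors(positions, agent=0, radius=1.5, k=5))
```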
When does the future begin? Time metrics matter, connecting present and future selves.
Lewis, Neil A; Oyserman, Daphna
2015-06-01
People assume they should attend to the present; their future self can handle the future. This seemingly plausible rule of thumb can lead people astray, in part because some future events require current action. In order for the future to energize and motivate current action, it must feel imminent. To create this sense of imminence, we manipulated time metric--the units (e.g., days, years) in which time is considered. People interpret accessible time metrics in two ways: If preparation for the future is under way (Studies 1 and 2), people interpret metrics as implying when a future event will occur. If preparation is not under way (Studies 3-5), they interpret metrics as implying when preparation should start (e.g., planning to start saving 4 times sooner for a retirement in 10,950 days instead of 30 years). Time metrics mattered not because they changed how distal or important future events felt (Study 6), but because they changed how connected and congruent their current and future selves felt (Study 7). © The Author(s) 2015.
Induction for Radiology Patients
NASA Astrophysics Data System (ADS)
Yıldırım, Pınar; Tolun, Mehmet R.
This paper presents the implementation of an inductive learning algorithm for patients of the Radiology Department in Hacettepe University hospitals, used to discover the relationship between patient demographic information and the time that patients spend during a specific radiology exam. ILA was used for the implementation; it generates rules, and the results are evaluated with evaluation metrics. According to the generated rules, some patients in different age groups or with different birthplaces may spend more time on the same radiology exam than others.
Reasoning and Knowledge Acquisition Framework for 5G Network Analytics.
Sotelo Monge, Marco Antonio; Maestre Vidal, Jorge; García Villalba, Luis Javier
2017-10-21
Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks with the aim of understanding the network status and predicting potential situations that might disrupt the network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactively or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy on the inference of anomalous traffic volumes based on a simple configuration.
Tan, W Katherine; Hassanpour, Saeed; Heagerty, Patrick J; Rundell, Sean D; Suri, Pradeep; Huhdanpaa, Hannu T; James, Kathryn; Carrell, David S; Langlotz, Curtis P; Organ, Nancy L; Meier, Eric N; Sherman, Karen J; Kallmes, David F; Luetmer, Patrick H; Griffith, Brent; Nerenz, David R; Jarvik, Jeffrey G
2018-03-28
To evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems. We used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, where 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions both had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rules-based), and a higher overall AUC (0.98, compared to 0.90 for rules-based). Our NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in model sensitivity with slight loss of specificity, and overall higher AUC. Copyright © 2018 The Association of University Radiologists. All rights reserved.
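For reference, the accuracy metrics reported above can be computed from predictions as sketched below with scikit-learn; the labels and scores are invented and the snippet is not part of the authors' pipeline.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Sketch of the validation metrics used above (sensitivity, specificity, AUC)
# for one binary finding. Labels and prediction scores are invented.

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.5, 0.65])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, AUC = {auc:.2f}")
```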
Giraldo, Sergio I.; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. PMID:28066290
The Conforming Brain and Deontological Resolve
Pincus, Melanie; LaViers, Lisa; Prietula, Michael J.; Berns, Gregory
2014-01-01
Our personal values are subject to forces of social influence. Deontological resolve captures how strongly one relies on absolute rules of right and wrong in the representation of one's personal values and may predict willingness to modify one's values in the presence of social influence. Using fMRI, we found that a neurobiological metric for deontological resolve based on relative activity in the ventrolateral prefrontal cortex (VLPFC) during the passive processing of sacred values predicted individual differences in conformity. Individuals with stronger deontological resolve, as measured by greater VLPFC activity, displayed lower levels of conformity. We also tested whether responsiveness to social reward, as measured by ventral striatal activity during social feedback, predicted variability in conformist behavior across individuals but found no significant relationship. From these results we conclude that unwillingness to conform to others' values is associated with a strong neurobiological representation of social rules. PMID:25170989
Chart of conversion factors: From English to metric system and metric to English system
,
1976-01-01
The conversion factors in the following tables are for conversion of our customary (English) units of measurement to SI units, and for convenience, reciprocals are shown for converting SI units back to the English system. The first table contains rule-of-thumb figures, useful for "getting the feel" of SI units or mental estimation. The succeeding tables contain factors accurate to 3 or more significant figures. Please refer to known reference volumes for additional accuracy, as well as for factors dealing with other scientific notation involving SI units.
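The factor-and-reciprocal idea is easy to mechanize; the sketch below stores a few standard factors (the inch-centimetre and pound-kilogram factors are exact by definition) and converts in both directions. The factors are standard values rather than ones copied from the chart.

```python
# Sketch of English<->metric conversion using a factor and its reciprocal,
# in the spirit of the chart described above. The factors below are standard
# values (the inch and pound factors are exact by definition).

TO_METRIC = {
    ("inch", "centimetre"): 2.54,
    ("pound", "kilogram"): 0.453_592_37,
    ("mile", "kilometre"): 1.609_344,
}

def convert(value, unit_from, unit_to):
    if (unit_from, unit_to) in TO_METRIC:
        return value * TO_METRIC[(unit_from, unit_to)]
    if (unit_to, unit_from) in TO_METRIC:                 # use the reciprocal
        return value / TO_METRIC[(unit_to, unit_from)]
    raise KeyError(f"no factor for {unit_from} -> {unit_to}")

print(convert(10, "inch", "centimetre"))    # 25.4
print(convert(25.4, "centimetre", "inch"))  # 10.0
```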
NASA Technical Reports Server (NTRS)
Artusa, Elisa A.
1994-01-01
This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation both in the US and in the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information on SI usage for specialized applications like data processing and computer programming, science, engineering, and construction. Related information in the appendixes includes legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.
Test of the FLRW Metric and Curvature with Strong Lens Time Delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Kai; Li, Zhengxiang; Wang, Guo-Jian
We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test only relies on geometric optics. It is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to ∼0.057 or ∼0.041 (1σ), depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool for both testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.
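For reference, the distance sum rule used in this kind of test relates dimensionless comoving distances between observer, lens, and source; in the notation commonly used in the DSR literature (an assumption here, not a quote from the paper), with d = H0 DC / c, it reads:

```latex
% Distance sum rule for an FLRW metric, written with dimensionless comoving
% distances d_l, d_s, d_ls (observer-lens, observer-source, lens-source) and
% curvature parameter \Omega_k; the notation follows the usual DSR literature
% and is an assumption here rather than a quote from the paper.
\[
  d_{ls} \;=\; d_s \sqrt{1 + \Omega_k d_l^{2}} \;-\; d_l \sqrt{1 + \Omega_k d_s^{2}}
\]
```

A measured violation of this identity signals a departure from the FLRW metric, while consistency with it allows the curvature parameter to be constrained, which is what the second level of the analysis does.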
The Hubble IR cutoff in holographic ellipsoidal cosmologies
NASA Astrophysics Data System (ADS)
Cataldo, Mauricio; Cruz, Norman
2018-01-01
It is well known that for spatially flat FRW cosmologies, the holographic dark energy disfavors the Hubble parameter as a candidate for the IR cutoff. To overcome this problem, we explore the use of this cutoff in holographic ellipsoidal cosmological models, and derive the general ellipsoidal metric induced by such a holographic energy density. Despite the drawbacks that this cutoff presents in homogeneous and isotropic universes, based on this general metric we developed a suitable ellipsoidal holographic cosmological model, filled with dark matter and dark energy components. At late time stages, the cosmic evolution is dominated by a holographic anisotropic dark energy with barotropic equations of state. The cosmologies expand in all directions in an accelerated manner. Since the ellipsoidal cosmologies given here are not asymptotically FRW, the deviation from homogeneity and isotropy of the universe on large cosmological scales remains constant during all cosmic evolution. This feature allows the studied holographic ellipsoidal cosmologies to be ruled by an equation of state ω = p/ρ, whose range belongs to quintessence or even phantom matter.
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study were the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that sigma metric analysis is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
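A short sketch of the quality goal index decision used above to attribute a sub-6-sigma result to imprecision or inaccuracy; the commonly used form QGI = bias / (1.5 × CV) with cutoffs of 0.8 and 1.2 matches the abstract, while the input numbers are invented.

```python
# Quality goal index (QGI) sketch: QGI = bias% / (1.5 * CV%).
# As used above, QGI < 0.8 points to imprecision, QGI > 1.2 to inaccuracy,
# and values in between to both. The bias and CV figures below are invented.

def qgi_interpretation(bias_pct: float, cv_pct: float) -> str:
    qgi = bias_pct / (1.5 * cv_pct)
    if qgi < 0.8:
        problem = "imprecision"
    elif qgi > 1.2:
        problem = "inaccuracy"
    else:
        problem = "imprecision and inaccuracy"
    return f"QGI = {qgi:.2f} -> improvement area: {problem}"

print(qgi_interpretation(bias_pct=2.0, cv_pct=4.0))   # imprecision
print(qgi_interpretation(bias_pct=6.0, cv_pct=3.0))   # inaccuracy
```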
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.
Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo
2018-05-10
The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
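As background, a marginal-contribution ("wonderful life") utility assigns each player the change it induces in the global welfare; one hedged way to write it for player i choosing subchannel a_i, assuming W is the total system capacity and a_i^0 a fixed null action, is:

```latex
% Marginal-contribution ("wonderful life") utility for player i, written under
% the assumption that W is the total system capacity and a_i^0 a fixed null
% action; this is a generic formulation, not a quote from the paper.
\[
  U_i(a_i, a_{-i}) \;=\; W(a_i, a_{-i}) \;-\; W(a_i^{0}, a_{-i})
\]
```

Utilities of this form make W an exact potential for the game, which is why best-response-style dynamics converge to a pure-strategy Nash equilibrium.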
Automation Improves Schedule Quality and Increases Scheduling Efficiency for Residents.
Perelstein, Elizabeth; Rose, Ariella; Hong, Young-Chae; Cohn, Amy; Long, Micah T
2016-02-01
Medical resident scheduling is difficult due to multiple rules, competing educational goals, and ever-evolving graduate medical education requirements. Despite this, schedules are typically created manually, consuming hours of work, producing schedules of varying quality, and yielding negative consequences for resident morale and learning. To determine whether computerized decision support can improve the construction of residency schedules, saving time and improving schedule quality. The Optimized Residency Scheduling Assistant was designed by a team from the University of Michigan Department of Industrial and Operations Engineering. It was implemented in the C.S. Mott Children's Hospital Pediatric Emergency Department in the 2012-2013 academic year. The 4 metrics of schedule quality that were compared between the 2010-2011 and 2012-2013 academic years were the incidence of challenging shift transitions, the incidence of shifts following continuity clinics, the total shift inequity, and the night shift inequity. All scheduling rules were successfully incorporated. Average schedule creation time fell from 22-28 hours to 4-6 hours per month, and 3 of 4 metrics of schedule quality significantly improved. For the implementation year, the incidence of challenging shift transitions decreased from 83 to 14 (P < .01); the incidence of postclinic shifts decreased from 72 to 32 (P < .01); and the SD of night shifts dropped by 55.6% (P < .01). This automated shift scheduling system improves the current manual scheduling process, reducing time spent and improving schedule quality. Embracing such automated tools can benefit residency programs with shift-based scheduling needs.
Ozone (O3) Standards - Other Technical Documents from the Review Completed in 2015
These memoranda were each sent in to the Ozone NAAQS Review Docket, EPA-HQ-OAR-2008-0699, after the proposed rule was published. They present technical data on the methods, monitoring stations, and metrics used to estimate ozone concentrations.
Towards a Framework for Evaluating and Comparing Diagnosis Algorithms
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander
2009-01-01
Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
Testing general relativity's no-hair theorem with x-ray observations of black holes
NASA Astrophysics Data System (ADS)
Hoormann, Janie K.; Beheshtipour, Banafsheh; Krawczynski, Henric
2016-02-01
Despite its success in the weak gravity regime, general relativity (GR) has yet to be verified in the regime of strong gravity. In this paper, we present the results of detailed ray-tracing simulations aiming at clarifying if the combined information from x-ray spectroscopy, timing, and polarization observations of stellar mass and supermassive black holes can be used to test GR's no-hair theorem. The latter states that stationary astrophysical black holes are described by the Kerr family of metrics, with the black hole mass and spin being the only free parameters. We use four "non-Kerr metrics," some phenomenological in nature and others motivated by alternative theories of gravity, and study the observational signatures of deviations from the Kerr metric. Particular attention is given to the case when all the metrics are set to give the same innermost stable circular orbit in quasi-Boyer-Lindquist coordinates. We give a detailed discussion of similarities and differences of the observational signatures predicted for black holes in the Kerr metric and the non-Kerr metrics. We emphasize that even though some regions of the parameter space are nearly degenerate even when combining the information from all observational channels, x-ray observations of very rapidly spinning black holes can be used to exclude large regions of the parameter space of the alternative metrics. Although it proves difficult to distinguish between the Kerr and non-Kerr metrics for some portions of the parameter space, the observations of very rapidly spinning black holes like Cyg X-1 can be used to rule out large regions for several black hole metrics.
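For context, the Kerr family referenced above can be written in Boyer-Lindquist coordinates (geometric units, G = c = 1) as follows; this standard textbook form is quoted only as background, with the paper's non-Kerr metrics introducing additional deviation parameters on top of it.

```latex
% Kerr metric in Boyer-Lindquist coordinates
\begin{align}
ds^2 &= -\Big(1-\tfrac{2Mr}{\Sigma}\Big)\,dt^2
        -\frac{4Mar\sin^2\theta}{\Sigma}\,dt\,d\phi
        +\frac{\Sigma}{\Delta}\,dr^2
        +\Sigma\,d\theta^2 \nonumber\\
     &\quad+\Big(r^2+a^2+\tfrac{2Ma^2 r\sin^2\theta}{\Sigma}\Big)\sin^2\theta\,d\phi^2,\\
\Sigma &= r^2+a^2\cos^2\theta, \qquad \Delta = r^2-2Mr+a^2, \qquad a = J/M .
\end{align}
```

The innermost stable circular orbit of this metric depends only on M and a, which is why matching the ISCO across the alternative metrics is a natural way to compare their observational signatures.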
1980-12-05
classification procedures that are common in speech processing. The anesthesia level classification by EEG time series population screening problem example is in...formance. The use of the KL number type metric in NN rule classification, in a delete-one subject's EEG-at-a-time KL-NN and KL-kNN classification of the...17 individual labeled EEG sample population using KL-NN and KL-kNN rules. The results obtained are shown in Table 1. The entries in the table indicate
Steven R. Beissinger; Curtis H. Flather; Gregory D. Hayward; Philip A. Stephens
2011-01-01
Clements et al. (Front Ecol Environ 2011; 9[9]: 521-525) proposed a single metric that describes a "species' ability to forestall extinction" (referred to by the acronym "SAFE") as a "scientifically defendable rule of thumb for when complete demographic data are unavailable" to rank the relative threat status of a species. SAFE is...
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
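A minimal sketch of the decomposition idea described above, not the authors' DTDML code: rank-one base metrics are built from eigenvectors of the source metrics, and the target Mahalanobis metric is a (sparse) non-negative combination of them. The data and coefficients here are toy placeholders; learning the coefficients from the target side information is omitted.

```python
import numpy as np

def base_metrics_from_sources(source_metrics, k_per_source=5):
    bases = []
    for M in source_metrics:
        w, V = np.linalg.eigh(M)                  # eigen-decomposition of a PSD source metric
        for i in np.argsort(w)[::-1][:k_per_source]:
            u = V[:, i:i + 1]
            bases.append(u @ u.T)                 # rank-one base metric
    return bases

def combine(bases, theta):
    # target metric as a combination of the bases; theta would be learned
    # sparsely from the limited target-task side information
    return sum(t * B for t, B in zip(theta, bases))

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
source = [A @ A.T]                                # one toy PSD source metric
bases = base_metrics_from_sources(source, k_per_source=3)
M_target = combine(bases, theta=[0.7, 0.3, 0.0])
print(M_target.shape)
```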
Irwin, Brian J.; Conroy, Michael J.
2013-01-01
The success of natural resource management depends on monitoring, assessment and enforcement. In support of these efforts, reference points (RPs) are often viewed as critical values of management-relevant indicators. This paper considers RPs from the standpoint of objective-driven decision making in dynamic resource systems, guided by principles of structured decision making (SDM) and adaptive resource management (AM). During the development of natural resource policy, RPs have been variously treated as either ‘targets’ or ‘triggers’. Under a SDM/AM paradigm, target RPs correspond approximately to value-based objectives, which may in turn be either of fundamental interest to stakeholders or intermediaries to other central objectives. By contrast, trigger RPs correspond to decision rules that are presumed to lead to desirable outcomes (such as the programme targets). Casting RPs as triggers or targets within a SDM framework is helpful towards clarifying why (or whether) a particular metric is appropriate. Further, the benefits of a SDM/AM process include elucidation of underlying untested assumptions that may reveal alternative metrics for use as RPs. Likewise, a structured decision-analytic framework may also reveal that failure to achieve management goals is not because the metrics are wrong, but because the decision-making process in which they are embedded is insufficiently robust to uncertainty, is not efficiently directed at producing a resource objective, or is incapable of adaptation to new knowledge.
MACRA: A New Age for Physician Payments.
Huston, Kent Kwasind
2017-04-01
The Medicare Access and CHIP Reauthorization Act (MACRA) of 2015 introduced a new system of physician payments in the United States. This legislation and the complex rules written to enact the law intend to force a shift away from volume-based payments and into so-called value-based payments. Physicians and other clinicians will be graded via quality and cost metrics and payments will be adjusted based on performance. Robust use of certified electronic health records is required under MACRA. Physicians will follow one of two payment reform tracks known as the Merit-Based Incentive Payment System (MIPS) and the Alternative Payment Model (APM) pathways. Although there are rheumatology and other specialty-specific quality measures in the MIPS program, there are no rheumatology-specific APMs to date. A thorough understanding of MACRA is required for medical practices to survive the new era of payment reform.
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
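The maximum selection rule mentioned above can be sketched as follows, assuming the PyWavelets package; this is an illustration of the general wavelet-domain fusion technique, not the authors' implementation, and the wavelet and level choices are arbitrary.

```python
import numpy as np
import pywt

def fuse_max_rule(img_a, img_b, wavelet="haar", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = []
    for band_a, band_b in zip(ca, cb):
        if isinstance(band_a, tuple):             # detail sub-bands (cH, cV, cD)
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(band_a, band_b)))
        else:                                     # approximation sub-band
            fused.append(np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(256, 256)                      # placeholder "modality A" image
b = np.random.rand(256, 256)                      # placeholder "modality B" image
print(fuse_max_rule(a, b).shape)
```

Evaluation metrics such as mutual information, entropy, or SSIM-style indices would then be computed on the fused output, as the abstract describes.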
Impact of OSHA final rule--recording hearing loss: an analysis of an industrial audiometric dataset.
Rabinowitz, Peter M; Slade, Martin; Dixon-Ernst, Christine; Sircar, Kanta; Cullen, Mark
2003-12-01
The 2003 Occupational Safety and Health Administration (OSHA) Occupational Injury and Illness Recording and Reporting Final Rule changed the definition of recordable work-related hearing loss. We performed a study of the Alcoa Inc. audiometric database to evaluate the impact of this new rule. The 2003 rule increased the rate of potentially recordable hearing loss events from 0.2% to 1.6% per year. A total of 68.6% of potentially recordable cases had American Academy of Audiology/American Medical Association (AAO/AMA) hearing impairment at the time of recordability. On average, recordable loss occurred after onset of impairment, whereas the non-age-corrected 10-dB standard threshold shift (STS) usually preceded impairment. The OSHA Final Rule will significantly increase recordable cases of occupational hearing loss. The new case definition is usually accompanied by AAO/AMA hearing impairment. Other, more sensitive metrics should therefore be used for early detection and prevention of hearing loss.
Friesen, Melissa C.; Shortreed, Susan M.; Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Silverman, Debra T.; Yu, Kai
2015-01-01
Objectives: Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Methods: Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m−3 respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters’ homogeneity (defined as >75% with the same estimate) was examined compared to a dichotomized probability estimate (<5 versus ≥5%; <50 versus ≥50%). Second, for the ordinal probability metric and continuous intensity and frequency metrics, we calculated the intraclass correlation coefficients (ICCs) between each job’s estimate and the mean estimate for all jobs within the cluster. Results: Within-cluster homogeneity increased when more clusters were used. For example, ≥80% of the clusters were homogeneous when 500 clusters were used. Similarly, ICCs were generally above 0.7 when ≥200 clusters were used, indicating minimal within-cluster variability. The most within-cluster variability was observed for the frequency metric (ICCs from 0.4 to 0.8). We estimated that using an expert to assign exposure at the cluster-level assignment and then to review each job in non-homogeneous clusters would require ~2000 decisions per expert, in contrast to evaluating 4255 unique questionnaire patterns or 14983 individual jobs. Conclusions: This proof-of-concept shows that using cluster models as a data reduction step to identify jobs with similar response patterns prior to obtaining expert ratings has the potential to aid rule-based assessment by systematically reducing the number of exposure decisions needed. While promising, additional research is needed to quantify the actual reduction in exposure decisions and the resulting homogeneity of exposure estimates within clusters for an exposure assessment effort that obtains cluster-level expert assessments as part of the assessment process. PMID:25477475
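A hedged sketch of the data-reduction step described above: hierarchically cluster binary questionnaire response patterns and check within-cluster homogeneity of a previously assigned exposure estimate. It uses SciPy on synthetic data; the variable names, distance choice, and thresholds are illustrative, not the study's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(500, 12))       # 500 jobs x 12 diesel-related answers (synthetic)
prior_estimate = rng.integers(0, 2, size=500)         # e.g. dichotomized probability (<5% vs >=5%)

Z = linkage(pdist(responses, metric="hamming"), method="average")
clusters = fcluster(Z, t=100, criterion="maxclust")    # cut the tree into 100 clusters

homogeneous = 0
for c in np.unique(clusters):
    members = prior_estimate[clusters == c]
    top_share = max(np.mean(members == v) for v in np.unique(members))
    homogeneous += top_share > 0.75                    # >75% sharing the same estimate
print(f"{homogeneous} of {clusters.max()} clusters are homogeneous")
```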
JPL's Real-Time Weather Processor project (RWP) metrics and observations at system completion
NASA Technical Reports Server (NTRS)
Loesh, Robert E.; Conover, Robert A.; Malhotra, Shan
1990-01-01
As an integral part of the overall upgraded National Airspace System (NAS), the objective of the Real-Time Weather Processor (RWP) project is to improve the quality of weather information and the timeliness of its dissemination to system users. To accomplish this, an RWP will be installed in each of the Center Weather Service Units (CWSUs), located in 21 of the 23 Air Route Traffic Control Centers (ARTCCs). The RWP System is a prototype system. It is planned that the software will be GFE and that production hardware will be acquired via industry competitive procurement. The ARTCC is a facility established to provide air traffic control service to aircraft operating on Instrument Flight Rules (IFR) flight plans within controlled airspace, principally during the en route phase of the flight. Covered here are requirement metrics, Software Problem Failure Reports (SPFRs), and Ada portability metrics and observations.
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome only occurs efficiently at the higher protein concentrations realized during S-phase, but may not in G1. Above a certain nucleosome distance, the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
Metrics, Business Plans, and the Vanishing Public Good
ERIC Educational Resources Information Center
Tuchman, Gaye
2011-01-01
For at least 30 years, professional work has been changing. Even such once-elite professionals as doctors, lawyers, and professors have become subject to significant control. Single-practitioner medical practices have given way to group practices subject to the rules of insurance plans; lawyers join mammoth firms where paralegals time the steps…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-18
... (NOAA), Commerce. ACTION: Proposed rule. SUMMARY: NMFS proposes to implement the annual catch limit (ACL... (CPS) Fishery Management Plan (FMP). The proposed 2013-2014 ACL for Pacific mackerel is 52,358 metric... fishery attains the ACT, the directed fishery will close, reserving the difference between the ACL and ACT...
Gürgen, Fikret; Gürgen, Nurgül
2003-01-01
This study proposes an intelligent data analysis approach to investigate and interpret the distinctive factors of diabetes mellitus patients with and without ischemic (non-embolic type) stroke in a small population. The database consists of a total of 16 features collected from 44 diabetic patients. Features include age, gender, duration of diabetes, cholesterol, high density lipoprotein, triglyceride levels, neuropathy, nephropathy, retinopathy, peripheral vascular disease, myocardial infarction rate, glucose level, medication and blood pressure. Metric and non-metric features are distinguished. First, the mean and covariance of the data are estimated and the correlated components are observed. Second, major components are extracted by principal component analysis. Finally, as common examples of local and global classification approach, a k-nearest neighbor and a high-degree polynomial classifier such as multilayer perceptron are employed for classification with all the components and major components case. Macrovascular changes emerged as the principal distinctive factors of ischemic-stroke in diabetes mellitus. Microvascular changes were generally ineffective discriminators. Recommendations were made according to the rules of evidence-based medicine. Briefly, this case study, based on a small population, supports theories of stroke in diabetes mellitus patients and also concludes that the use of intelligent data analysis improves personalized preventive intervention. PMID:12685939
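A generic sketch of the analysis pipeline described (principal component extraction followed by a local k-NN classifier and a global multilayer perceptron), written with scikit-learn on synthetic data of the same shape; it is not the study's code or data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((44, 16))                 # 44 patients x 16 features (synthetic)
y = rng.integers(0, 2, size=44)                   # stroke vs no stroke (synthetic labels)

X_major = PCA(n_components=5).fit_transform(X)    # keep the major components
for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=3)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))]:
    print(name, cross_val_score(clf, X_major, y, cv=4).mean())
```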
Kumar, B. Vinodh; Mohan, Thuthi
2018-01-01
OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that needs to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level and for level 2 IQCs, same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes <6 sigma level, the quality goal index (QGI) was <0.8 indicating the area requiring improvement to be imprecision except cholesterol whose QGI >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
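For reference, the sigma metric and the quality goal index used in this kind of analysis are commonly computed from allowable total error (TEa), bias, and CV as sketched below; the example values are invented, and the exact cut-offs applied in any given laboratory may differ.

```python
# sigma = (TEa - |bias|) / CV ; QGI = |bias| / (1.5 * CV)
# QGI < 0.8 is usually read as an imprecision problem, QGI > 1.2 as inaccuracy.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    return abs(bias_pct) / (1.5 * cv_pct)

# hypothetical analyte: allowable total error 10%, bias 2%, CV 1.2%
print(round(sigma_metric(10, 2, 1.2), 1), round(quality_goal_index(2, 1.2), 2))
```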
On the local structure of spacetime in ghost-free bimetric theory and massive gravity
NASA Astrophysics Data System (ADS)
Hassan, S. F.; Kocic, Mikica
2018-05-01
The ghost-free bimetric theory describes interactions of gravity with another spin-2 field in terms of two Lorentzian metrics. However, if the two metrics do not admit compatible notions of space and time, the formulation of the initial value problem becomes problematic. Furthermore, the interaction potential is given in terms of the square root of a matrix which is in general nonunique and possibly nonreal. In this paper we show that both these issues are evaded by requiring reality and general covariance of the equations. First we prove that the reality of the square root matrix leads to a classification of the allowed metrics in terms of the intersections of their null cones. Then, the requirement of general covariance further restricts the allowed metrics to geometries that admit compatible notions of space and time. It also selects a unique definition of the square root matrix. The restrictions are compatible with the equations of motion. These results ensure that the ghost-free bimetric theory can be defined unambiguously and that the two metrics always admit compatible 3+1 decompositions, at least locally. In particular, these considerations rule out certain solutions of massive gravity with locally Closed Causal Curves, which have been used to argue that the theory is acausal.
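Schematically, and up to convention-dependent normalizations, the interaction potential referred to above has the following form; it is quoted here only as background for the square-root structure discussed in the abstract.

```latex
% Ghost-free bimetric interaction term (schematic; conventions vary)
S_{\mathrm{int}} \;=\; -\,m^2 M_{\mathrm{eff}}^2 \int d^4x\, \sqrt{-\det g}\;
\sum_{n=0}^{4}\beta_n\, e_n(S), \qquad S \;\equiv\; \sqrt{g^{-1} f},
```

where the e_n are the elementary symmetric polynomials of the eigenvalues of S; the reality and uniqueness of this square-root matrix is precisely the issue the paper addresses.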
Rolls, Edmund T; Mills, W Patrick C
2018-05-01
When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how with the temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model of object recognition is, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, whether neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present is several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
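A minimal sketch of a temporal trace learning rule of the general form used in VisNet-style models, assuming a simple competitive layer: the postsynaptic term is a decaying trace of recent activity, so views that occur close together in time are bound onto the same output neurons. Parameter values and network sizes are illustrative only.

```python
import numpy as np

def trace_rule_update(w, x, y, y_trace, eta=0.8, alpha=0.05):
    """w: weights (outputs x inputs); x: input firing; y: current output firing;
       y_trace: trace of output firing carried over from previous time steps."""
    y_trace = (1.0 - eta) * y + eta * y_trace      # update the activity trace
    w = w + alpha * np.outer(y_trace, x)           # Hebb-like update using the trace
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # weight normalization (competitive net)
    return w, y_trace

rng = np.random.default_rng(0)
w = rng.random((10, 64))
y_trace = np.zeros(10)
for t in range(5):                                 # successive views of one object
    x = rng.random(64)
    y = w @ x
    w, y_trace = trace_rule_update(w, x, y, y_trace)
print(w.shape)
```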
Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibanez, E.; Milligan, M.
2014-04-01
Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time, driven by meteorological events. Thus, capacity value calculations for these resources are often performed using simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
ERIC Educational Resources Information Center
Booton, Carol M.
2013-01-01
Academic quality in for-profit vocational (Gainful Employment) programs is a concern for all stakeholders. However, academic quality is not easily defined. The Department of Education's Gainful Employment Rule defines academic quality with a few easily measured metrics such as student retention and job placement rate, despite the fact that…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... Permanent a Unit-of-Count Metric Alternative for NYSE OpenBook Products May 5, 2010. I. Introduction On... Vendors use NYSE OpenBook data in their display services. In fact, the Exchange believes that proposal could encourage Vendors to create and promote innovative uses of NYSE OpenBook information. For instance...
40 CFR 98.360 - Definition of the source category.
Code of Federal Regulations, 2010 CFR
2010-07-01
...,000 metric tons CO2e or more per year. (1) Table JJ-1 presents the minimum average annual animal... Table JJ-1 do not need to report under this rule. A facility with an annual animal population that exceeds those listed in Table JJ-1 should conduct a more thorough analysis to determine applicability. (2...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-25
...; Adjustment to the Atlantic Herring Management Area 1A Sub- Annual Catch Limit AGENCY: National Marine...: Temporary rule; inseason adjustment. SUMMARY: NMFS adjusts the 2011 Fishing Year sub-annual catch limit for... transfer and sub-ACLs for each management area. The 2011 Domestic Annual Harvest is 91,200 metric tons (mt...
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
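A hedged sketch of the OSPA metric (order p, cut-off c) used to score multi-target trackers, implemented with SciPy's assignment solver; this is a generic illustration of the metric, not the authors' evaluation code, and the example coordinates are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """X, Y: arrays of target positions, shapes (m, d) and (n, d)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                    # ensure m <= n
        X, Y, m, n = Y, X, n, m
    if m == 0:
        return c                                 # only the cardinality penalty remains
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c)
    rows, cols = linear_sum_assignment(D ** p)   # optimal sub-pattern assignment
    cost = (D[rows, cols] ** p).sum() + (c ** p) * (n - m)
    return (cost / n) ** (1.0 / p)

truth = np.array([[0.0, 0.0], [5.0, 5.0]])
estimate = np.array([[0.3, -0.2], [5.2, 5.1], [20.0, 20.0]])   # one spurious track
print(round(ospa(truth, estimate), 3))
```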
Integrated model-based retargeting and optical proximity correction
NASA Astrophysics Data System (ADS)
Agarwal, Kanak B.; Banerjee, Shayak
2011-04-01
Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process variation aware RETs such as process-window OPC are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, which is a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. Finally, we also demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting. We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for use in improving shape process windows without perturbing designed values.
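The NILS-binning step can be pictured with the hypothetical sketch below: fragments with low NILS receive a larger retargeting bias, widening for pinching-prone widths and shrinking for bridging-prone spaces. The thresholds, bin edges, and bias magnitudes are invented for illustration and are not taken from the paper.

```python
def retarget_bias(nils, kind):
    """kind: 'width' (pinching risk, positive bias widens) or
             'space' (bridging risk, negative bias shrinks)."""
    bins = [(1.0, 3.0), (1.5, 2.0), (2.0, 1.0)]    # (NILS threshold, bias in nm) - illustrative
    for threshold, bias_nm in bins:
        if nils < threshold:
            return bias_nm if kind == "width" else -bias_nm
    return 0.0                                      # NILS high enough: leave the target alone

for frag_nils in (0.8, 1.7, 2.5):
    print(frag_nils, retarget_bias(frag_nils, "width"))
```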
Advanced metrology by offline SEM data processing
NASA Astrophysics Data System (ADS)
Lakcher, Amine; Schneider, Loïc.; Le-Gratiet, Bertrand; Ducoté, Julien; Farys, Vincent; Besacier, Maxime
2017-06-01
Today's technology nodes contain more and more complex designs bringing increasing challenges to chip manufacturing process steps. It is necessary to have an efficient metrology to assess process variability of these complex patterns and thus extract relevant data to generate process aware design rules and to improve OPC models. Today process variability is mostly addressed through the analysis of in-line monitoring features which are often designed to support robust measurements and as a consequence are not always very representative of critical design rules. CD-SEM is the main CD metrology technique used in chip manufacturing process but it is challenged when it comes to measure metrics like tip to tip, tip to line, areas or necking in high quantity and with robustness. CD-SEM images contain a lot of information that is not always used in metrology. Suppliers have provided tools that allow engineers to extract the SEM contours of their features and to convert them into a GDS. Contours can be seen as the signature of the shape as it contains all the dimensional data. Thus the methodology is to use the CD-SEM to take high quality images then generate SEM contours and create a data base out of them. Contours are used to feed an offline metrology tool that will process them to extract different metrics. It was shown in two previous papers that it is possible to perform complex measurements on hotspots at different process steps (lithography, etch, copper CMP) by using SEM contours with an in-house offline metrology tool. In the current paper, the methodology presented previously will be expanded to improve its robustness and combined with the use of phylogeny to classify the SEM images according to their geometrical proximities.
NASA Astrophysics Data System (ADS)
Zhang, Lulu; Liu, Jingling; Li, Yi
2015-03-01
The influence of spatial differences, which are caused by different anthropogenic disturbances, and temporal changes, which are caused by natural conditions, on macroinvertebrate and periphyton communities in Baiyangdian Lake was compared. Periphyton and macrobenthos assemblage samples were simultaneously collected on four occasions during 2009 and 2010. Based on the physical and chemical attributes in the water and sediment, the 8 sampling sites can be divided into 5 habitat types by using cluster analysis. According to coefficient of variation (CV) analysis, three primary conclusions can be drawn: (1) the metrics of Hilsenhoff Biotic Index (HBI), Percent Tolerant Taxa (PTT), Percent Dominant Taxon (PDT), and community loss index (CLI), based on macroinvertebrates, and the metrics of algal density (AD), the proportion of chlorophyta (CHL), and the proportion of cyanophyta (CYA), based on periphytons, were mostly constant throughout our study; (2) in terms of spatial variation, the CV values for the macroinvertebrate-based metrics were lower than the CV values for the periphyton-based metrics, and these findings may be caused by the effects of changes in environmental factors, whereas, in terms of temporal variation, the CV values for the macroinvertebrate-based metrics were higher than those for the periphyton-based metrics, and these results may be linked to the influences of phenology and life history patterns of the macroinvertebrate individuals; and (3) the CV values for the functional-based metrics were higher than those for the structural-based metrics. Therefore, spatial and temporal variation of the metrics should be considered when applying these biometrics in assessments.
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
A guide to calculating habitat-quality metrics to inform conservation of highly mobile species
Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.
2018-01-01
Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations.
Considerations for Resource Managers:
Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics.
Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data.
Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data.
Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data.
More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.
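As one hedged, concrete illustration of the graph-based class of metrics above, betweenness centrality of habitat nodes in a migratory network can be computed with NetworkX; the network below is entirely invented and is not drawn from the guide's case study.

```python
import networkx as nx

# Hypothetical migratory network: nodes are habitats, edges are movement pathways.
G = nx.DiGraph()
G.add_edges_from([
    ("breeding_A", "stopover_1"), ("breeding_A", "stopover_2"),
    ("stopover_1", "wintering"),  ("stopover_2", "wintering"),
    ("wintering", "breeding_A"),
])

# Betweenness centrality: how often a habitat lies on shortest routes between others,
# one simple proxy for the connectivity value of a stopover site.
print(nx.betweenness_centrality(G))
```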
Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng
2013-01-01
Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that can not be detected by using current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at the risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to challenge this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain consistently correct false positive rate. In the scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising as alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
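The building block of such metrics, a conditional entropy estimated from genotype/phenotype counts, can be sketched as below; this is a generic helper on a toy contingency table, not a reproduction of the paper's exact interaction statistics.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def conditional_entropy(joint_counts):
    """H(Y | X) from a contingency table with X on rows and Y on columns."""
    joint = joint_counts / joint_counts.sum()
    p_x = joint.sum(axis=1)
    return sum(p_x[i] * entropy(joint[i] / p_x[i])
               for i in range(len(p_x)) if p_x[i] > 0)

counts = np.array([[30.0, 10.0], [5.0, 25.0]])   # toy 2x2 table: genotype x disease status
print(round(conditional_entropy(counts), 3))
```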
R^4 terms in supergravities via T-duality constraint
NASA Astrophysics Data System (ADS)
Razaghian, Hamid; Garousi, Mohammad R.
2018-05-01
It has been speculated in the literature that the effective actions of string theories at any order of α' should be invariant under the Buscher rules plus their higher covariant-derivative corrections. This may be used as a constraint to find effective actions at any order of α', in particular, the metric, the B-field, and the dilaton couplings in supergravities at order α'^3 up to an overall factor. For the simple case of zero B-field and diagonal metric in which we have done the calculations explicitly, we have found that the constraint fixes almost all of the seven independent Riemann curvature couplings. There is only one term which is not fixed, because when the metric is diagonal, the reduction of two R^4 terms becomes identical. The Riemann curvature couplings that the T-duality constraint produces for both type II and heterotic theories are fully consistent with the existing couplings in the literature which have been found by the S-matrix and by the sigma-model approaches.
2012-01-01
Armed with these metrics, the Undns ruleset is better revised, vestigial rules removed or demoted for maintenance, and redundant locations distinguished
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... Permanent a Unit-of- Count Metric Alternative for NYSE OpenBook March 25, 2010. Pursuant to Section 19(b)(1... its NYSE OpenBook product packages. Under the Pilot Program, the Exchange no longer defines the Vendor.... The Exchange recognizes that each Vendor and Subscriber will use NYSE OpenBook data differently and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
.... This model will enable the Sub-Adviser to evaluate, rank, and select the appropriate mix of investments... becoming, risky. The Sub-Adviser will use a quantitative metric to rank and select the appropriate mix of... imposes a duty of due diligence on its Equity Trading Permit Holders to learn the essential facts relating...
Contact patterning strategies for 32nm and 28nm technology
NASA Astrophysics Data System (ADS)
Morgenfeld, Bradley; Stobert, Ian; An, Ju j.; Kanai, Hideki; Chen, Norman; Aminpur, Massud; Brodsky, Colin; Thomas, Alan
2011-04-01
As 193 nm immersion lithography is extended indefinitely to sustain technology roadmaps, there is increasing pressure to contain escalating lithography costs by identifying patterning solutions that can minimize the use of multiple-pass processes. Contact patterning for the 32/28 nm technology nodes has been greatly facilitated by just-in-time introduction of new process enablers that allow the simultaneous support of flexible foundry-oriented ground rules alongside high-performance technology, while also migrating to a single-pass patterning process. The incorporation of device-based performance metrics along with rigorous patterning and structural variability studies was critical in the evaluation of material innovation for improved resolution and CD shrink along with novel data preparation flows utilizing aggressive strategies for SRAF insertion and retargeting.
Boninger, Joseph W.; Gans, Bruce M.; Chan, Leighton
2012-01-01
The objective was to review pertinent areas of the Patient Protection and Affordable Care Act (PPACA) to determine the PPACA’s impact on physical medicine and rehabilitation (PM&R). The law, and related newspaper and magazine articles, was reviewed. The ways in which provisions in the PPACA are being implemented by the Centers for Medicare and Medicaid Services and other government organizations were investigated. Additionally, recent court rulings on the PPACA were analyzed to assess the law’s chances of successful implementation. The PPACA contains a variety of reforms that, if implemented, will significantly impact the field of PM&R. Many PPACA reforms change how rehabilitative care is delivered by integrating different levels of care and creating uniform quality metrics to assess quality and efficiency. These quality metrics will ultimately be tied to new, performance-based payment systems. While the law contains ambitious initiatives that may, if unsuccessful or incorrectly implemented, negatively impact PM&R, it also has the potential to greatly improve the quality and efficiency of rehabilitative care. A proactive approach to the changes the PPACA will bring about is essential for the health of the field. PMID:22459177
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity in human-performed therapy assessment, and it can increase adherence to prescribed therapy plans and reduce healthcare costs.
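Two of the model-less quantitative metrics listed above can be sketched as follows for a pair of equal-length joint-angle trajectories; the data here are synthetic and the histogram-based KL estimate is only one of several possible estimators.

```python
import numpy as np

def rms_distance(ref, test):
    return np.sqrt(np.mean((ref - test) ** 2))

def kl_divergence(p_samples, q_samples, bins=20):
    # histogram-based estimate of D(P || Q); a small epsilon avoids log(0)
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-12, q + 1e-12
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

t = np.linspace(0, 1, 200)
reference = np.sin(2 * np.pi * t)                     # template motion (synthetic)
patient = np.sin(2 * np.pi * t + 0.2) + 0.05 * np.random.default_rng(0).standard_normal(200)
print(round(rms_distance(reference, patient), 3), round(kl_divergence(reference, patient), 3))
```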
Meystre, Stéphane M; Thibault, Julien; Shen, Shuying; Hurdle, John F; South, Brett R
2010-01-01
OBJECTIVE To describe a new medication information extraction system, Textractor, developed for the 'i2b2 medication extraction challenge'. The development, functionalities, and official evaluation of the system are detailed. Textractor is based on the Apache Unstructured Information Management Architecture (UIMA) framework, and uses methods that are a hybrid between machine learning and pattern matching. Two modules in the system are based on machine learning algorithms, while other modules use regular expressions, rules, and dictionaries, and one module embeds MetaMap Transfer. The official evaluation was based on a reference standard of 251 discharge summaries annotated by all teams participating in the challenge. The metrics used were recall, precision, and the F(1)-measure. They were calculated with exact and inexact matches, and were averaged at the level of systems and documents. The reference metric for this challenge, the system-level overall F(1)-measure, reached about 77% for exact matches, with a recall of 72% and a precision of 83%. Performance was the best with route information (F(1)-measure about 86%), and was good for dosage and frequency information, with F(1)-measures of about 82-85%. Results were not as good for durations, with F(1)-measures of 36-39%, and for reasons, with F(1)-measures of 24-27%. The official evaluation of Textractor for the i2b2 medication extraction challenge demonstrated satisfactory performance. This system was among the 10 best performing systems in this challenge.
McAdams, Harley; AlQuraishi, Mohammed
2015-04-21
Techniques for determining values for a metric of microscale interactions include determining a mesoscale metric for a plurality of mesoscale interaction types, wherein a value of the mesoscale metric for each mesoscale interaction type is based on a corresponding function of values of the microscale metric for the plurality of the microscale interaction types. A plurality of observations that indicate the values of the mesoscale metric are determined for the plurality of mesoscale interaction types. Values of the microscale metric are determined for the plurality of microscale interaction types based on the plurality of observations and the corresponding functions and compressed sensing.
Neural decoding with kernel-based metric learning.
Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C
2014-06-01
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus-exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
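The centered alignment measure used to tune the metric is a normalized inner product between centered kernel matrices; the sketch below computes it for a random placeholder response kernel and an ideal label kernel, and is not the authors' decoding pipeline.

```python
import numpy as np

def centered_alignment(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))                   # placeholder neural response features
labels = rng.integers(0, 2, size=50)               # stimulation site labels (synthetic)
K = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))   # Gaussian kernel
L = (labels[:, None] == labels[None, :]).astype(float)               # ideal label kernel
print(round(centered_alignment(K, L), 3))
```

Metric or kernel parameters would then be chosen to maximize this alignment, which is what ties the learned metric to the decoding task.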
Development of an evolutionary fuzzy expert system for estimating future behavior of stock price
NASA Astrophysics Data System (ADS)
Mehmanpazir, Farhad; Asadi, Shahrokh
2017-03-01
The stock market has always been an attractive area for researchers, since no method has yet been found to predict stock price behavior precisely. Due to its high rate of uncertainty and volatility, it carries a higher risk than any other investment area, and stock price behavior is therefore difficult to simulate. This paper presents a "data mining-based evolutionary fuzzy expert system" (DEFES) approach to estimate the behavior of stock prices. The tool is developed in a seven-stage architecture. Data mining is used in three stages to reduce the complexity of the whole data space. The first stage, noise filtering, is used to make the raw data clean and smooth. The second stage is variable selection; stepwise regression analysis is used to choose the key variables to be considered in the model. In the third stage, K-means is used to divide the data into sub-populations to decrease the effects of noise and reduce the complexity of the patterns. In the next stage, a Mamdani-type fuzzy rule-based system is extracted for each cluster by means of a genetic algorithm and evolutionary strategy. In the fifth stage, a binary genetic algorithm is used for rule filtering to remove redundant rules and avoid overlearning. In the sixth stage, a genetic tuning process is used to slightly adjust the shape of the membership functions. The last stage tests the performance of the tool and adjusts its parameters. This is the first study to use an approximate fuzzy rule-based system and evolutionary strategy with the ability to extract the whole knowledge base of a fuzzy expert system for stock price forecasting problems. The superiority and applicability of DEFES are demonstrated on International Business Machines Corporation stock, and the outcome is compared with the results of other methods. Results based on the MAPE metric and the Wilcoxon signed-ranks test indicate that DEFES is more accurate and outperforms all previous methods, so it can be considered a superior tool for stock price forecasting problems.
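Since forecast accuracy above is reported with the MAPE metric, a minimal sketch of that computation follows (NumPy assumed; the price series is hypothetical).

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical daily closing prices vs. model estimates.
print(mape([100, 102, 101, 105], [99, 103, 102, 104]))   # ~1%
```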
NASA Technical Reports Server (NTRS)
Kramer, Arthur F.; Sirevaag, Erik J.; Braune, Rolf
1986-01-01
This study explores the relationship between the P300 component of the event-related brain potential (ERP) and the processing demands of a complex real-world task. Seven male volunteers enrolled in an Instrument Flight Rule (IFR) aviation course flew a series of missions in a single-engine fixed-base simulator. In dual task conditions subjects were also required to discriminate between two tones differing in frequency. ERPs time-locked to the tones, subjective effort ratings and overt performance measures were collected during two 45 min flights differing in difficulty (manipulated by varying both atmospheric conditions and instrument reliability). The more difficult flight was associated with poorer performance, increased subjective effort ratings, and smaller secondary task P300s. Within each flight, P300 amplitude was negatively correlated with deviations from command headings, indicating that P300 amplitude was a sensitive workload metric both between and within the flight missions.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... cattle, lean hogs, feeder cattle), softs (e.g., sugar, cotton, coffee, cocoa) and energy (e.g., crude oil) ... [table residue omitted: commodity contract listings, e.g., Cocoa on ICE-US at 10 metric tons, with associated trading hours] ... derivative products, including Trust Issued Receipts, to monitor trading in the Units. The Exchange...
Texture metric that predicts target detection performance
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.
2015-12-01
Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset is presented. Comparison is also made with four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
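The GLCE statistics are specific to this paper; as a related, generic illustration of a co-occurrence "energy" texture feature, the following sketch uses scikit-image (the random patch, distance, angles, and level count are assumptions, and a random patch will naturally give a low energy score).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit image patch around a search window.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
energy = graycoprops(glcm, 'energy').mean()   # standard co-occurrence energy feature
print(energy)
```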
Foresters' Metric Conversions program (version 1.0). [Computer program
Jefferson A. Palmer
1999-01-01
The conversion of scientific measurements has become commonplace in the fields of engineering, research, and forestry. Foresters' Metric Conversions is a Windows-based computer program that quickly converts user-defined measurements from English to metric and from metric to English. Foresters' Metric Conversions was derived from the publication "Metric...
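A minimal sketch of the kind of English-to-metric conversion such a program performs (Python; the small factor table below uses exact standard definitions and is only a sample of the units a forestry tool would cover).

```python
# Exact conversion factors for a few forestry-relevant units (sample only).
TO_METRIC = {
    "in_to_cm": 2.54,             # inches -> centimetres
    "ft_to_m": 0.3048,            # feet -> metres
    "lb_to_kg": 0.45359237,       # pounds -> kilograms
    "acre_to_ha": 0.40468564224,  # acres -> hectares
}

def convert(value, factor_key, to_metric=True):
    """Convert English to metric (default) or metric to English."""
    factor = TO_METRIC[factor_key]
    return value * factor if to_metric else value / factor

print(convert(100.0, "acre_to_ha"))       # 100 acres in hectares
print(convert(30.0, "ft_to_m", False))    # 30 metres in feet
```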
Abar, Orhan; Charnigo, Richard J.; Rayapati, Abner
2017-01-01
Association rule mining has received significant attention from both the data mining and machine learning communities. While data mining researchers focus more on designing efficient algorithms to mine rules from large datasets, the learning community has explored applications of rule mining to classification. A major problem with rule mining algorithms is the explosion of rules even for moderate-sized datasets, making it very difficult for end users to identify both statistically significant and potentially novel rules that could lead to interesting new insights and hypotheses. Researchers have proposed many domain-independent interestingness measures with which one can rank the rules and potentially glean useful rules from the top-ranked ones. However, these measures have not been fully explored for rule mining in clinical datasets owing to the relatively large sizes of the datasets often encountered in healthcare and also due to limited access to domain experts for review/analysis. In this paper, using an electronic medical record (EMR) dataset of diagnoses and medications from over three million patient visits to the University of Kentucky medical center and affiliated clinics, we conduct a thorough evaluation of dozens of interestingness measures proposed in data mining literature, including some new composite measures. Using cumulative relevance metrics from information retrieval, we compare these interestingness measures against human judgments obtained from a practicing psychiatrist for association rules involving the depressive disorders class as the consequent. Our results not only surface new interesting associations for depressive disorders but also indicate classes of interestingness measures that weight rule novelty and statistical strength in contrasting ways, offering new insights for end users in identifying interesting rules. PMID:28736771
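Most interestingness measures build on the classic support, confidence, and lift statistics; a minimal sketch of these baseline measures follows (the toy visit records and item codes are hypothetical).

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)              # antecedent count
    c = sum(consequent <= t for t in transactions)              # consequent count
    both = sum((antecedent | consequent) <= t for t in transactions)
    support = both / n
    confidence = both / a
    lift = confidence / (c / n)
    return support, confidence, lift

# Hypothetical visits coded as sets of diagnosis/medication codes.
visits = [{"dep", "ssri"}, {"dep", "ssri", "htn"}, {"htn"}, {"dep"}, {"ssri"}]
print(rule_metrics(visits, {"ssri"}, {"dep"}))   # (0.4, 0.667, 1.11)
```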
NASA Astrophysics Data System (ADS)
Trimborn, Barbara; Wolf, Ivo; Abu-Sammour, Denis; Henzler, Thomas; Schad, Lothar R.; Zöllner, Frank G.
2017-03-01
Image registration of preprocedural contrast-enhanced CTs to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG) building the basis for a feature descriptor. The metric was implemented in a framework for rigid 3D-3D-registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated based on the calculation of the mean target registration error and compared to the results obtained with a normalized cross correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
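As a simplified 2D illustration of a HOG-based similarity score (the paper's metric is patch-based and 3D; the slice data, HOG parameters, and cosine-similarity choice here are assumptions), using scikit-image:

```python
import numpy as np
from skimage.feature import hog

def hog_similarity(fixed_slice, moving_slice):
    """Cosine similarity between HOG descriptors of two 2D slices."""
    kwargs = dict(orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    f = hog(fixed_slice, **kwargs)
    m = hog(moving_slice, **kwargs)
    return float(f @ m / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12))

# Hypothetical CT / CBCT slices with a small intensity perturbation.
rng = np.random.default_rng(2)
ct = rng.random((128, 128))
cbct = ct + 0.05 * rng.random((128, 128))
print(hog_similarity(ct, cbct))
```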
Konias, Sokratis; Chouvarda, Ioanna; Vlahavas, Ioannis; Maglaveras, Nicos
2005-09-01
Current approaches for mining association rules usually assume that the mining is performed in a static database, where the problem of missing attribute values does not practically exist. However, these assumptions are not preserved in some medical databases, like in a home care system. In this paper, a novel uncertainty rule algorithm is illustrated, namely URG-2 (Uncertainty Rule Generator), which addresses the problem of mining dynamic databases containing missing values. This algorithm requires only one pass from the initial dataset in order to generate the item set, while new metrics corresponding to the notion of Support and Confidence are used. URG-2 was evaluated over two medical databases, introducing randomly multiple missing values for each record's attribute (rate: 5-20% by 5% increments) in the initial dataset. Compared with the classical approach (records with missing values are ignored), the proposed algorithm was more robust in mining rules from datasets containing missing values. In all cases, the difference in preserving the initial rules ranged between 30% and 60% in favour of URG-2. Moreover, due to its incremental nature, URG-2 saved over 90% of the time required for thorough re-mining. Thus, the proposed algorithm can offer a preferable solution for mining in dynamic relational databases.
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
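The truncated abstract does not name the specific metrics used; as one generic example of an information theory-based comparison of measured and simulated streamflow, the sketch below computes a Kullback-Leibler divergence between their empirical distributions (synthetic gamma-distributed flows; SciPy assumed).

```python
import numpy as np
from scipy.stats import entropy

def flow_kl_divergence(observed, simulated, bins=20):
    """KL divergence between observed and simulated streamflow distributions."""
    edges = np.histogram_bin_edges(np.concatenate([observed, simulated]), bins)
    p, _ = np.histogram(observed, edges, density=True)
    q, _ = np.histogram(simulated, edges, density=True)
    p, q = p + 1e-12, q + 1e-12          # avoid empty bins
    return entropy(p, q)                 # scipy normalizes p and q internally

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 5.0, size=365)      # hypothetical daily streamflow
sim = rng.gamma(2.2, 4.5, size=365)      # hypothetical simulated streamflow
print(flow_kl_divergence(obs, sim))
```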
Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation
Peter, Adrian M.; Rangarajan, Anand
2010-01-01
Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
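As a point of reference for how an information matrix induces a Riemannian metric, a minimal worked case is the single univariate Gaussian N(mu, sigma) (not the paper's mixture-model setting), whose Fisher information matrix and line element are:

```latex
I(\mu,\sigma) =
\begin{pmatrix}
1/\sigma^{2} & 0 \\
0 & 2/\sigma^{2}
\end{pmatrix},
\qquad
ds^{2} = \frac{d\mu^{2} + 2\,d\sigma^{2}}{\sigma^{2}} .
```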
Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan
2016-01-01
Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.
Kallen, Michael A; Cook, Karon F; Amtmann, Dagmar; Knowlton, Elizabeth; Gershon, Richard C
2018-05-05
To evaluate the degree to which applying alternative stopping rules would reduce response burden while maintaining score precision in the context of computer adaptive testing (CAT). Analyses were conducted on secondary data comprised of CATs administered in a clinical setting at multiple time points (baseline and up to two follow ups) to 417 study participants who had back pain (51.3%) and/or depression (47.0%). Participant mean age was 51.3 years (SD = 17.2) and ranged from 18 to 86. Participants tended to be white (84.7%), relatively well educated (77% with at least some college), female (63.9%), and married or living in a committed relationship (57.4%). The unit of analysis was individual assessment histories (i.e., CAT item response histories) from the parent study. Data were first aggregated across all individuals, domains, and time points in an omnibus dataset of assessment histories and then were disaggregated by measure for domain-specific analyses. Finally, assessment histories within a "clinically relevant range" (score ≥ 1 SD from the mean in direction of poorer health) were analyzed separately to explore score level-specific findings. Two different sets of CAT administration rules were compared. The original CAT (CAT_ORIG) rules required at least four and no more than 12 items be administered. If the score standard error (SE) reached a value < 3 points (T score metric) before 12 items were administered, the CAT was stopped. We simulated applying alternative stopping rules (CAT_ALT), removing the requirement that a minimum four items be administered, and stopped a CAT if responses to the first two items were both associated with best health, if the SE was < 3, if SE change < 0.1 (T score metric), or if 12 items were administered. We then compared score fidelity and response burden, defined as number of items administered, between CAT_ORIG and CAT_ALT. CAT_ORIG and CAT_ALT scores varied little, especially within the clinically relevant range, and response burden was substantially lower under CAT_ALT (e.g., 41.2% savings in omnibus dataset). Alternate stopping rules result in substantial reductions in response burden with minimal sacrifice in score precision.
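A minimal sketch of the alternative (CAT_ALT) stopping logic as described above; the function name, the best_health_code convention, and the call pattern are assumptions for illustration.

```python
def cat_alt_should_stop(responses, se_history, best_health_code=1):
    """Return True if the CAT_ALT rules described above would stop the CAT."""
    n_items = len(responses)
    if n_items >= 12:
        return True                                   # maximum test length reached
    if n_items == 2 and all(r == best_health_code for r in responses):
        return True                                   # first two items at best health
    if se_history and se_history[-1] < 3.0:
        return True                                   # SE below 3 T-score points
    if len(se_history) >= 2 and abs(se_history[-1] - se_history[-2]) < 0.1:
        return True                                   # SE change below 0.1
    return False

print(cat_alt_should_stop([1, 1], [5.2, 4.8]))        # True: best-health fast track
```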
NASA Astrophysics Data System (ADS)
Kastor, David; Ray, Sourya; Traschen, Jennie
2017-10-01
We study the problem of finding brane-like solutions to Lovelock gravity, adopting a general approach to establish conditions that a lower dimensional base metric must satisfy in order that a solution to a given Lovelock theory can be constructed in one higher dimension. We find that for Lovelock theories with generic values of the coupling constants, the Lovelock tensors (higher curvature generalizations of the Einstein tensor) of the base metric must all be proportional to the metric. Hence, allowed base metrics form a subclass of Einstein metrics. This subclass includes so-called ‘universal metrics’, which have been previously investigated as solutions to quantum-corrected field equations. For specially tuned values of the Lovelock couplings, we find that the Lovelock tensors of the base metric need to satisfy fewer constraints. For example, for Lovelock theories with a unique vacuum there is only a single such constraint, a case previously identified in the literature, and brane solutions can be straightforwardly constructed.
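Schematically, the generic-coupling condition stated above can be written as follows (notation assumed here: G^(k) denotes the k-th order Lovelock tensor of the base metric and the c_k are constants; k = 1 is the Einstein-tensor case, so the base metric is in particular Einstein):

```latex
\mathcal{G}^{(k)}_{\mu\nu}\,[\bar{g}] = c_{k}\,\bar{g}_{\mu\nu}, \qquad k = 1, 2, \dots
```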
Narayan, Anand; Cinelli, Christina; Carrino, John A; Nagy, Paul; Coresh, Josef; Riese, Victoria G; Durand, Daniel J
2015-11-01
As the US health care system transitions toward value-based reimbursement, there is an increasing need for metrics to quantify health care quality. Within radiology, many quality metrics are in use, and still more have been proposed, but there have been limited attempts to systematically inventory these measures and classify them using a standard framework. The purpose of this study was to develop an exhaustive inventory of public and private sector imaging quality metrics classified according to the classic Donabedian framework (structure, process, and outcome). A systematic review was performed in which eligibility criteria included published articles (from 2000 onward) from multiple databases. Studies were double-read, with discrepancies resolved by consensus. For the radiology benefit management group (RBM) survey, the six known companies nationally were surveyed. Outcome measures were organized on the basis of standard categories (structure, process, and outcome) and reported using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy yielded 1,816 citations; review yielded 110 reports (29 included for final analysis). Three of six RBMs (50%) responded to the survey; the websites of the other RBMs were searched for additional metrics. Seventy-five unique metrics were reported: 35 structure (46%), 20 outcome (27%), and 20 process (27%) metrics. For RBMs, 35 metrics were reported: 27 structure (77%), 4 process (11%), and 4 outcome (11%) metrics. The most commonly cited structure, process, and outcome metrics included ACR accreditation (37%), ACR Appropriateness Criteria (85%), and peer review (95%), respectively. Imaging quality metrics are more likely to be structural (46%) than process (27%) or outcome (27%) based (P < .05). As national value-based reimbursement programs increasingly emphasize outcome-based metrics, radiologists must keep pace by developing the data infrastructure required to collect outcome-based quality metrics. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Friesen, Melissa C.
2013-01-01
Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the algorithm and the individual raters (κw = 0.58–0.81). For these metrics, the algorithm estimates had consistently higher agreement with the aggregate rating (κw = 0.82) than with the individual raters. For all metrics, the agreement between the algorithm and the aggregate ratings was highest for the unexposed category (90–93%) and was poor to moderate for the exposed categories (9–64%). Lower agreement was observed for jobs with a start year <1965 versus ≥1965. For the confidence metrics, the agreement was poor to moderate among raters (κw = 0.17–0.45) and between the algorithm and the individual raters (κw = 0.24–0.61). CART models identified patterns in the questionnaire responses that predicted a fair-to-moderate (33–89%) proportion of the disagreements between the raters’ and the algorithm estimates. Discussion: The agreement between any two raters was similar to the agreement between an algorithm-based approach and individual raters, providing additional support for using the more efficient and transparent algorithm-based approach. CART models identified some patterns in disagreements between the first rater and the algorithm. Given the absence of a gold standard for estimating exposure, these patterns can be reviewed by a team of exposure assessors to determine whether the algorithm should be revised for future studies. PMID:23184256
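The weighted kappa agreement statistics above can be reproduced with standard tooling; a minimal sketch follows (scikit-learn assumed; the ordinal exposure codes, the toy ratings, and the linear weighting choice are assumptions, since the abstract does not state the weighting scheme).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal probability ratings (0=unexposed, 1=low, 2=medium, 3=high).
rater_1   = [0, 0, 1, 2, 3, 0, 1, 2, 0, 3]
algorithm = [0, 1, 1, 2, 2, 0, 0, 2, 0, 3]

print(cohen_kappa_score(rater_1, algorithm, weights="linear"))   # weighted kappa
print(cohen_kappa_score(rater_1, algorithm))                     # unweighted kappa
```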
Pitch structure, but not selective attention, affects accent weightings in metrical grouping.
Prince, Jon B
2014-10-01
Among other cues, pitch and temporal accents contribute to grouping in musical sequences. However, exactly how they combine remains unclear, possibly because of the role of structural organization. In 3 experiments, participants rated the perceived metrical grouping of sequences that either adhered to the rules of tonal Western musical pitch structure (musical key) or did not (atonal). The tonal status of sequences did not provide any grouping cues and was irrelevant to the task. Experiment 1 established equally strong levels of pitch leap accents and duration accents in baseline conditions, which were then recombined in subsequent experiments. Neither accent type was stronger or weaker for tonal and atonal contexts. In Experiment 2, pitch leap accents dominated over duration accents, but the extent of this advantage was greater when sequences were tonal. Experiment 3 ruled out an attentional origin of this effect by replicating this finding while explicitly manipulating attention to pitch or duration accents between participant groups. Overall, the presence of tonal pitch structure made the dimension of pitch more salient at the expense of time. These findings support a dimensional salience framework in which the presence of organizational structure prioritizes the processing of the more structured dimension regardless of task relevance, independent from psychophysical difficulty, and impervious to attentional allocation.
Performance metrics for the evaluation of hyperspectral chemical identification systems
NASA Astrophysics Data System (ADS)
Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay
2016-02-01
Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
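The Dice index referenced above compares the identified set of chemicals with the true plume constituents; a minimal sketch follows (the chemical names are hypothetical).

```python
def dice_index(identified, truth):
    """Dice index between identified and true sets of chemical constituents."""
    identified, truth = set(identified), set(truth)
    if not identified and not truth:
        return 1.0
    return 2 * len(identified & truth) / (len(identified) + len(truth))

# Hypothetical plume: two of three true chemicals found, plus one false alarm.
print(dice_index({"SF6", "NH3", "CH4"}, {"SF6", "NH3", "C2H4"}))   # 0.667
```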
DOE Office of Scientific and Technical Information (OSTI.GOV)
Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna, E-mail: anna.back@vgregion.se
Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings is created to simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) is analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge area metric, converted aperture metric, circumference/area ratio, edge metric, and MU/Gy ratio show good linear correlation to the complexity of the MLC openings, expressed as the 5% dose difference pass rate, with Pearson’s r-values of −0.94, −0.88, −0.84, −0.89, and −0.82, respectively. The overall trends for the 3% and 5% dose difference evaluations are similar. Conclusions: New complexity metrics are developed. The calculated scores correlate to the complexity of the created static MLC openings. The complexity of the MLC opening is dependent on the penumbra region relative to the area of the opening. The aperture-based complexity metrics that combined either the distances between the MLC leaves or the MLC opening circumference with the aperture area show the best correlation with the complexity of the static MLC openings.
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
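A minimal sketch of the unequal-weighting step (the projection fields, the metric scores, and the score-proportional weighting are illustrative assumptions, not the project's actual scheme):

```python
import numpy as np

def weighted_ensemble(projections, metric_scores):
    """Weight model projections by a (process-based) performance score."""
    w = np.asarray(metric_scores, dtype=float)
    w = w / w.sum()                               # normalize weights
    return np.tensordot(w, np.asarray(projections, dtype=float), axes=1)

# Hypothetical: 3 models, regional temperature-change fields on a 2x2 grid.
proj = [[[1.0, 1.2], [0.8, 1.1]],
        [[1.4, 1.5], [1.0, 1.3]],
        [[0.9, 1.0], [0.7, 0.9]]]
scores = [0.8, 0.5, 0.9]                          # higher = better process metric
print(weighted_ensemble(proj, scores))
print(np.mean(proj, axis=0))                      # equal-weight average for comparison
```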
Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, N. C.; Taylor, P. C.
2014-12-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to .7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods for ensemble averaging models and create better climate predictions.
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
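The proposed AUPS metrics are variants of the area under the precision-sensitivity (precision-recall) curve; the sketch below computes the standard area and one reading of a high-sensitivity restriction (scikit-learn assumed; the labels, scores, and the recall >= 0.8 cutoff are assumptions, as the paper's exact restriction is not given in the abstract).

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, size=500)                            # hypothetical T2DM labels
y_score = np.clip(y_true * 0.6 + rng.random(500) * 0.6, 0, 1)    # hypothetical classifier scores

precision, recall, _ = precision_recall_curve(y_true, y_score)
print(auc(recall, precision))            # area under the precision-sensitivity curve

# One reading of "AUPS with a high sensitivity": restrict to recall >= 0.8.
mask = recall >= 0.8
print(auc(recall[mask], precision[mask]))
```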
Adaptive spectral filtering of PIV cross correlations
NASA Astrophysics Data System (ADS)
Giarra, Matthew; Vlachos, Pavlos; Aether Lab Team
2016-11-01
Using cross correlations (CCs) in particle image velocimetry (PIV) assumes that tracer particles in interrogation regions (IRs) move with the same velocity. But this assumption is nearly always violated because real flows exhibit velocity gradients, which degrade the signal-to-noise ratio (SNR) of the CC and are a major driver of error in PIV. Iterative methods help reduce these errors, but even they can fail when gradients are large within individual IRs. We present an algorithm to mitigate the effects of velocity gradients on PIV measurements. Our algorithm is based on a model of the CC, which predicts a relationship between the PDF of particle displacements and the variation of the correlation's SNR across the Fourier spectrum. We give an algorithm to measure this SNR from the CC, and use this insight to create a filter that suppresses the low-SNR portions of the spectrum. Our algorithm extends to the ensemble correlation, where it accelerates the convergence of the measurement and also reveals the PDF of displacements of the ensemble (and therefore of statistical metrics like diffusion coefficient). Finally, our model provides theoretical foundations for a number of "rules of thumb" in PIV, like the quarter-window rule.
Evaluation of an Integrated Framework for Biodiversity with a New Metric for Functional Dispersion
Presley, Steven J.; Scheiner, Samuel M.; Willig, Michael R.
2014-01-01
Growing interest in understanding ecological patterns from phylogenetic and functional perspectives has driven the development of metrics that capture variation in evolutionary histories or ecological functions of species. Recently, an integrated framework based on Hill numbers was developed that measures three dimensions of biodiversity based on abundance, phylogeny and function of species. This framework is highly flexible, allowing comparison of those diversity dimensions, including different aspects of a single dimension and their integration into a single measure. The behavior of those metrics with regard to variation in data structure has not been explored in detail, yet is critical for ensuring an appropriate match between the concept and its measurement. We evaluated how each metric responds to particular data structures and developed a new metric for functional biodiversity. The phylogenetic metric is sensitive to variation in the topology of phylogenetic trees, including variation in the relative lengths of basal, internal and terminal branches. In contrast, the functional metric exhibited multiple shortcomings: (1) species that are functionally redundant contribute nothing to functional diversity and (2) a single highly distinct species causes functional diversity to approach the minimum possible value. We introduced an alternative, improved metric based on functional dispersion that solves both of these problems. In addition, the new metric exhibited more desirable behavior when based on multiple traits. PMID:25148103
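The Hill-number family underlying the framework has a compact closed form; a minimal, abundance-only sketch follows (it does not include the phylogenetic or functional generalizations, and the community abundances are hypothetical).

```python
import numpy as np

def hill_number(abundances, q):
    """Hill number (effective number of species) of order q."""
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))     # limit q -> 1: exp(Shannon entropy)
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

community = [50, 30, 15, 4, 1]                    # hypothetical species abundances
print([round(hill_number(community, q), 2) for q in (0, 1, 2)])  # richness, ... , inverse Simpson
```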
Changing to the Metric System.
ERIC Educational Resources Information Center
Chambers, Donald L.; Dowling, Kenneth W.
This report examines educational aspects of the conversion to the metric system of measurement in the United States. Statements of positions on metrication and basic mathematical skills are given from various groups. Base units, symbols, prefixes, and style of the metric system are outlined. Guidelines for teaching metric concepts are given,…
Designing a Robust Micromixer Based on Fluid Stretching
NASA Astrophysics Data System (ADS)
Mott, David; Gautam, Dipesh; Voth, Greg; Oran, Elaine
2010-11-01
A metric for measuring fluid stretching based on finite-time Lyapunov exponents is described, and the use of this metric for optimizing mixing in microfluidic components is explored. The metric is implemented within an automated design approach called the Computational Toolbox (CTB). The CTB designs components by adding geometric features, such as grooves of various shapes, to a microchannel. The transport produced by each of these features in isolation was pre-computed and stored as an "advection map" for that feature, and the flow through a composite geometry that combines these features is calculated rapidly by applying the corresponding maps in sequence. A genetic algorithm search then chooses the feature combination that optimizes a user-specified metric. Metrics based on the variance of concentration generally require the user to specify the fluid distributions at inflow, which leads to different mixer designs for different inflow arrangements. The stretching metric is independent of the fluid arrangement at inflow. Mixers designed using the stretching metric are compared to those designed using a variance of concentration metric and show excellent performance across a variety of inflow distributions and diffusivities.
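A generic sketch of a finite-time Lyapunov exponent (FTLE) stretching field, the quantity the metric above is based on (this is not the CTB or its advection maps; the steady double-gyre-like velocity field, grid, integration time, and forward-Euler advection are all assumptions).

```python
import numpy as np

def velocity(x, y):
    """Hypothetical steady double-gyre-like velocity field on [0, 2] x [0, 1]."""
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def ftle_field(nx=60, ny=30, T=2.0, dt=0.01):
    """Forward FTLE of the flow map, a per-point measure of fluid stretching."""
    xs, ys = np.linspace(0.0, 2.0, nx), np.linspace(0.0, 1.0, ny)
    X, Y = np.meshgrid(xs, ys)
    px, py = X.copy(), Y.copy()
    for _ in range(int(T / dt)):                   # forward-Euler particle advection
        u, v = velocity(px, py)
        px, py = px + dt * u, py + dt * v
    # Flow-map gradient by finite differences, then Cauchy-Green tensor C = F^T F.
    a = np.gradient(px, xs, axis=1); b = np.gradient(px, ys, axis=0)
    c = np.gradient(py, xs, axis=1); d = np.gradient(py, ys, axis=0)
    c11, c12, c22 = a * a + c * c, a * b + c * d, b * b + d * d
    lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4.0 * c12 ** 2))
    return np.log(np.sqrt(lam_max)) / T            # largest stretching exponent

print(ftle_field().max())
```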
Developing a Security Metrics Scorecard for Healthcare Organizations.
Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea
2015-01-01
In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.
Energy-Based Metrics for Arthroscopic Skills Assessment.
Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa
2017-08-05
Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
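A generic sketch of the classification protocol: a classifier on normalized metrics evaluated with leave-one-subject-out cross-validation (scikit-learn assumed; the data shapes, labels, RBF-kernel SVM, and random features are placeholders, not the study's data).

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials, n_metrics = 260, 6                      # hypothetical sizes
X = rng.normal(size=(n_trials, n_metrics))        # normalized energy-based metrics
y = rng.integers(0, 2, size=n_trials)             # 0 = novice, 1 = expert
subjects = rng.integers(0, 26, size=n_trials)     # 26 subjects -> LOSO groups

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print(scores.mean())                              # leave-one-subject-out accuracy
```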
Department of Defense Software Factbook
2017-07-07
parameters, these rules of thumb may not provide a lot of value to project managers estimating their software efforts. To get the information useful to them...organization determine the total cost of a particular project, but it is a useful metric to technical managers when they are required to submit an annual...outcome. It is most likely a combination of engineering, management, and funding factors. Although a project may resist planning a schedule slip, this
Semantic Metrics for Analysis of Software
NASA Technical Reports Server (NTRS)
Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara
2005-01-01
A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects of software. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated by use of the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are the syntactic metrics.
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
Launch Vehicle Production and Operations Cost Metrics
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Neeley, James R.; Blackburn, Ruby F.
2014-01-01
Traditionally, launch vehicle cost has been evaluated based on $/Kg to orbit. This metric is calculated based on assumptions not typically met by a specific mission. These assumptions include the specified orbit whether Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), or both. The metric also assumes the payload utilizes the full lift mass of the launch vehicle, which is rarely true even with secondary payloads.1,2,3 Other approaches for cost metrics have been evaluated including unit cost of the launch vehicle and an approach to consider the full program production and operations costs.4 Unit cost considers the variable cost of the vehicle and the definition of variable costs are discussed. The full program production and operation costs include both the variable costs and the manufacturing base. This metric also distinguishes operations costs from production costs, including pre-flight operational testing. Operations costs also consider the costs of flight operations, including control center operation and maintenance. Each of these 3 cost metrics show different sensitivities to various aspects of launch vehicle cost drivers. The comparison of these metrics provides the strengths and weaknesses of each yielding an assessment useful for cost metric selection for launch vehicle programs.
Way-finding in displaced clock-shifted bees proves bees use a cognitive map.
Cheeseman, James F; Millar, Craig D; Greggers, Uwe; Lehmann, Konstantin; Pawley, Matthew D M; Gallistel, Charles R; Warman, Guy R; Menzel, Randolf
2014-06-17
Mammals navigate by means of a metric cognitive map. Insects, most notably bees and ants, are also impressive navigators. The question whether they, too, have a metric cognitive map is important to cognitive science and neuroscience. Experimentally captured and displaced bees often depart from the release site in the compass direction they were bent on before their capture, even though this no longer heads them toward their goal. When they discover their error, however, the bees set off more or less directly toward their goal. This ability to orient toward a goal from an arbitrary point in the familiar environment is evidence that they have an integrated metric map of the experienced environment. We report a test of an alternative hypothesis, which is that all the bees have in memory is a collection of snapshots that enable them to recognize different landmarks and, associated with each such snapshot, a sun-compass-referenced home vector derived from dead reckoning done before and after previous visits to the landmark. We show that a large shift in the sun-compass rapidly induced by general anesthesia does not alter the accuracy or speed of the homeward-oriented flight made after the bees discover the error in their initial postrelease flight. This result rules out the sun-referenced home-vector hypothesis, further strengthening the now extensive evidence for a metric cognitive map in bees.
Way-finding in displaced clock-shifted bees proves bees use a cognitive map
Cheeseman, James F.; Millar, Craig D.; Greggers, Uwe; Lehmann, Konstantin; Pawley, Matthew D. M.; Gallistel, Charles R.; Warman, Guy R.; Menzel, Randolf
2014-01-01
Mammals navigate by means of a metric cognitive map. Insects, most notably bees and ants, are also impressive navigators. The question whether they, too, have a metric cognitive map is important to cognitive science and neuroscience. Experimentally captured and displaced bees often depart from the release site in the compass direction they were bent on before their capture, even though this no longer heads them toward their goal. When they discover their error, however, the bees set off more or less directly toward their goal. This ability to orient toward a goal from an arbitrary point in the familiar environment is evidence that they have an integrated metric map of the experienced environment. We report a test of an alternative hypothesis, which is that all the bees have in memory is a collection of snapshots that enable them to recognize different landmarks and, associated with each such snapshot, a sun-compass–referenced home vector derived from dead reckoning done before and after previous visits to the landmark. We show that a large shift in the sun-compass rapidly induced by general anesthesia does not alter the accuracy or speed of the homeward-oriented flight made after the bees discover the error in their initial postrelease flight. This result rules out the sun-referenced home-vector hypothesis, further strengthening the now extensive evidence for a metric cognitive map in bees. PMID:24889633
Evaluative Usage-Based Metrics for the Selection of E-Journals.
ERIC Educational Resources Information Center
Hahn, Karla L.; Faulkner, Lila A.
2002-01-01
Explores electronic journal usage statistics and develops three metrics and three benchmarks based on those metrics. Topics include earlier work that assessed the value of print journals and was modified for the electronic format; the evaluation of potential purchases; and implications for standards development, including the need for content…
Podiform chromite deposits--database and grade and tonnage models
Mosier, Dan L.; Singer, Donald A.; Moring, Barry C.; Galloway, John P.
2012-01-01
Chromite ((Mg, Fe++)(Cr, Al, Fe+++)2O4) is the only source for the metallic element chromium, which is used in the metallurgical, chemical, and refractory industries. Podiform chromite deposits are small magmatic chromite bodies formed in the ultramafic section of an ophiolite complex in the oceanic crust. These deposits have been found in midoceanic ridge, off-ridge, and suprasubduction tectonic settings. Most podiform chromite deposits are found in dunite or peridotite near the contact of the cumulate and tectonite zones in ophiolites. We have identified 1,124 individual podiform chromite deposits, based on a 100-meter spatial rule, and have compiled them in a database. Of these, 619 deposits have been used to create three new grade and tonnage models for podiform chromite deposits. The major podiform chromite model has a median tonnage of 11,000 metric tons and a mean grade of 45 percent Cr2O3. The minor podiform chromite model has a median tonnage of 100 metric tons and a mean grade of 43 percent Cr2O3. The banded podiform chromite model has a median tonnage of 650 metric tons and a mean grade of 42 percent Cr2O3. Observed frequency distributions are also given for grades of rhodium, iridium, ruthenium, palladium, and platinum. In resource assessment applications, both major and minor podiform chromite models may be used for any ophiolite complex regardless of its tectonic setting or ophiolite zone. Expected sizes of undiscovered podiform chromite deposits, with respect to degree of deformation or ore-forming process, may determine which model is appropriate. The banded podiform chromite model may be applicable for ophiolites in both suprasubduction and midoceanic ridge settings.
NASA Astrophysics Data System (ADS)
Schunert, Sebastian
In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class naturally comprises the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against the occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify if a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems and a qualitative rating of each method's performance is provided for each aspect: accuracy/efficiency, resilience against negative fluxes, and possession of the thick diffusion limit, separately. The choice of the most efficient method depends on the utilized error norm: in Lp error norms higher order methods such as the AHOTN method of order three perform best, while for computing integral quantities the linear nodal (LN) method is most efficient. The most resilient method against the occurrence of negative fluxes is the simple corner balance (SCB) method. A validation of the quantitative decision metric is performed based on the NEA box-inbox suite of test problems. The validation exercise comprises two stages: first, prediction of the contending methods' performance via the decision metric, and second, computation of the actual scores based on data obtained from the NEA benchmark problem. The comparison of predicted and actual scores via a penalty function (ratio of predicted best performer's score to actual best score) completes the validation exercise. It is found that the decision metric is capable of very accurate predictions (penalty < 10%) in more than 83% of the considered cases and features penalties up to 20% for the remaining cases. An exception to this rule is the third test case NEA-III, intentionally set up to incorporate a poor match of the benchmark with the "data" problems. However, even under these worst case conditions the decision metric's suggestions are never detrimental.
Suggestions for improving the decision metric's accuracy are to increase the pool of employed data, to refine the mapping of a given configuration to a case in the database, and to better characterize the desired target quantities.
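A minimal sketch of the weighted-geometric-mean aggregation used for the fitness score (the indicator values and weights below are hypothetical).

```python
import numpy as np

def fitness_score(indicators, weights):
    """Weighted geometric mean of single performance indicators."""
    x = np.asarray(indicators, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                # normalize the supplied weights
    return float(np.exp(np.sum(w * np.log(x))))

# Hypothetical normalized indicators for one method:
# accuracy, execution time, positivity, thick-diffusion-limit behavior.
print(fitness_score([0.9, 0.6, 1.0, 0.8], [3, 2, 1, 1]))   # ~0.80
```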
Using Publication Metrics to Highlight Academic Productivity and Research Impact
Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.
2016-01-01
This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging metrics based on document-level metrics. Publication metrics can be used for a variety of purposes for tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
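As a concrete example of one of the publication metrics mentioned above, the h-index can be computed directly from a list of citation counts (the counts below are hypothetical).

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))   # 3: three papers with >= 3 citations
```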
The Strategies to Homogenize PET/CT Metrics: The Case of Onco-Haematological Clinical Trials
Chauvie, Stephane; Bergesio, Fabrizio
2016-01-01
Positron emission tomography (PET) has long been a widely used tool in oncology for staging lymphomas. Recently, several large clinical trials demonstrated its utility in therapy management during treatment, paving the way to personalized medicine. In doing so, the traditional way of reporting PET based on the extent of disease has been complemented by a discrete scale that takes into account tumour metabolism. However, due to several technical, physical and biological limitations in the use of PET uptake as a biomarker, stringent rules have been used in clinical trials to reduce the errors in its evaluation. Within this manuscript we briefly describe the evolution in PET reporting, examine the main errors in uptake measurement, and analyse which strategies clinical trials have applied to reduce them. PMID:28536393
An, Ming-Wen; Mandrekar, Sumithra J; Branda, Megan E; Hillman, Shauna L; Adjei, Alex A; Pitot, Henry C; Goldberg, Richard M; Sargent, Daniel J
2011-10-15
The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Individual patient data from three North Central Cancer Treatment Group trials (N0026, n = 117; N9741, n = 1,109; and N9841, n = 332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: response (complete or partial) vs. stable disease vs. progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks postbaseline. Model discrimination was evaluated by the concordance (c) index. The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. Although nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the concordance indices (c-index) for the traditional response metrics ranged from 0.59 to 0.65; for the continuous metrics from 0.60 to 0.66; and for the TriTR metrics from 0.64 to 0.69. The c-indices for TriTR at 12 weeks were comparable with those at 16 and 24 weeks. Continuous tumor measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future phase II trials. ©2011 AACR.
An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.
2011-01-01
Purpose: The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design: Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12-, 16- and 24-weeks post baseline. Model discrimination was evaluated using the concordance (c) index. Results: The overall best response rates for the three trials were 26%, 45%, and 25% respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66 and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12-weeks were comparable to those at 16- and 24-weeks. Conclusions: Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin
2015-09-01
Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying important habitat patches in ecological connectivity is a priority for many conservation strategies, and the application of 'graph theory' has been shown to provide useful information on connectivity. Despite the large number of metrics for connectivity derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compares these with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological," "ecological," and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes, and these links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarities and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Due to the differences in the way the metrics operate, a suitable metric should be chosen that best meets the objectives established by the decision maker.
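For readers unfamiliar with graph-based patch ranking, the sketch below builds a small, hypothetical effective-distance-weighted habitat network and ranks patches with two standard topological measures; the study's specific topological, ecological, and integrated metrics are not reproduced here.

```python
import networkx as nx

# Hypothetical habitat network: nodes are patches, edges link adjacent patches,
# weighted by the effective distance between them (smaller = better connected).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1.2), ("B", "C", 0.8), ("C", "D", 2.5),
    ("B", "D", 1.9), ("D", "E", 0.6),
])

# Two common topological importance measures; ecological attributes (e.g. patch
# area or habitat quality) could be combined with these to build integrated metrics.
betweenness = nx.betweenness_centrality(G, weight="weight")
closeness = nx.closeness_centrality(G, distance="weight")

ranking = sorted(G.nodes, key=lambda n: betweenness[n], reverse=True)
print("patch ranking by betweenness:", ranking)
print("closeness:", closeness)
```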
2009-08-01
MSF Category: Neighbors and Stakeholders (NS). Conceptual metric NS1: "walkable" on-base community design (clustering of facilities, presence of sidewalks, reduced need for a car, access to public transit), scored as a 0-100 index of walkable-community indicators adapted from LEED for Neighborhood Development (LEED-ND).
Elementary Metric Curriculum - Project T.I.M.E. (Timely Implementation of Metric Education). Part I.
ERIC Educational Resources Information Center
Community School District 18, Brooklyn, NY.
This is a teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. The topics covered include: (1) linear measurement; (2) metric-decimal relationships; (3) metric conversions; (4) geometry; (5) scale drawings; and (6) capacity. This is the first of a two-part…
Value-based metrics and Internet-based enterprises
NASA Astrophysics Data System (ADS)
Gupta, Krishan M.
2001-10-01
Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFORI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine if the firm is creating value for its stakeholders.
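EVA is conventionally computed as net operating profit after taxes (NOPAT) minus a charge for the capital employed; the minimal sketch below uses that textbook definition with made-up figures and is not tied to the paper's data.

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA = NOPAT - WACC * invested capital (standard textbook definition)."""
    return nopat - wacc * invested_capital

# Hypothetical Internet-based firm: $12M NOPAT, $80M invested capital, 11% cost of capital.
eva = economic_value_added(nopat=12e6, invested_capital=80e6, wacc=0.11)
print(eva)  # positive EVA -> value created for stakeholders in this illustration
```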
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing objective quality assessment of color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
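The ICM and CCM descriptions above translate almost directly into code. The sketch below is only an interpretation: the chromatic signal, the linear-combination coefficients, and the saturation thresholds are assumptions, since the abstract does not give the paper's exact formulations.

```python
import numpy as np

def image_colorfulness_metric(rgb, alpha=1.0, beta=0.3):
    """ICM sketch: linear combination of the standard deviation and mean of a
    chromatic signal over the fused image. The opponent-channel signal and the
    coefficients alpha/beta are placeholders, not the paper's values."""
    rg = rgb[..., 0].astype(float) - rgb[..., 1].astype(float)
    yb = 0.5 * (rgb[..., 0].astype(float) + rgb[..., 1].astype(float)) - rgb[..., 2].astype(float)
    chroma = np.hypot(rg, yb)
    return alpha * chroma.std() + beta * chroma.mean()

def color_comfort_ingredients(rgb, low=0.2, high=0.8):
    """CCM ingredients: average saturation and the ratio of high- to low-saturation
    pixels. The thresholds and the way the two are combined are assumptions."""
    rgb = rgb.astype(float) / 255.0
    cmax, cmin = rgb.max(axis=-1), rgb.min(axis=-1)
    sat = np.where(cmax > 0, (cmax - cmin) / (cmax + 1e-12), 0.0)
    ratio = float((sat > high).sum()) / max(float((sat < low).sum()), 1.0)
    return float(sat.mean()), ratio

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # hypothetical fused image
print(image_colorfulness_metric(img), color_comfort_ingredients(img))
```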
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human fixation density maps for the H.264-coded videos from the subjective eye-tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Only the objective saliency maps were then used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric achieves a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which evaluate all frames of a video, the metric proposed in this paper not only decreases computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
Design and application of process control charting methodologies to gamma irradiation practices
NASA Astrophysics Data System (ADS)
Saylor, M. C.; Connaghan, J. P.; Yeadon, S. C.; Herring, C. M.; Jordan, T. M.
2002-12-01
The relationship between the contract irradiation facility and the customer has historically been based upon a "PASS/FAIL" approach with little or no quality metrics used to gage the control of the irradiation process. Application of process control charts, designed in coordination with mathematical simulation of routine radiation processing, can provide a basis for understanding irradiation events. By using tools that simulate the physical rules associated with the irradiation process, end-users can explore process-related boundaries and the effects of process changes. Consequently, the relationship between contractor and customer can evolve based on the derived knowledge. The resulting level of mutual understanding of the irradiation process and its resultant control benefits both the customer and contract operation, and provides necessary assurances to regulators. In this article we examine the complementary nature of theoretical (point kernel) and experimental (dosimetric) process evaluation, and the resulting by-product of improved understanding, communication and control generated through the implementation of effective process control charting strategies.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image; the fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. Two main steps are involved in this process: first, the DWT is applied to the registered source images; then, qualitative sub-bands are identified using the HVS weights. The qualitative sub-bands selected from the different sources form a high-quality HVS-based fused image, whose quality is evaluated with general fusion metrics. The results show the superiority of the HVS-based approach among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
DOT National Transportation Integrated Search
2013-04-01
"This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. : It examines how such metrics are used in Australia, Britain...
DeJournett, Jeremy; DeJournett, Leon
2017-11-01
Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585 604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), mild hypoglycemia (40-69 mg/dL), normoglycemia (70-140 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
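The individual components entering the composite score are standard and easy to compute per patient; how the study weights them into a single number is not stated in the abstract, so the sketch below stops at the component metrics, using the glycemic bands quoted above.

```python
import numpy as np

def glucose_component_metrics(glucose_mg_dl):
    """Per-patient component metrics commonly reported for ICU glucose control,
    using the bands quoted in the abstract (mg/dL)."""
    g = np.asarray(glucose_mg_dl, dtype=float)
    return {
        "severe_hypo_frac": float(np.mean(g < 40)),
        "mild_hypo_frac": float(np.mean((g >= 40) & (g < 70))),
        "in_range_frac": float(np.mean((g >= 70) & (g <= 140))),
        "hyper_frac": float(np.mean(g > 140)),
        "cv": float(g.std() / g.mean()),  # coefficient of variation
    }

readings = np.random.normal(120.0, 25.0, size=500)  # hypothetical 5-day record
print(glucose_component_metrics(readings))
```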
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
Metric for evaluation of filter efficiency in spectral cameras.
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2016-11-10
Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect set of filters, and zero, for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
Metrication report to the Congress. 1991 activities and 1992 plans
NASA Technical Reports Server (NTRS)
1991-01-01
During 1991, NASA approved a revised metric use policy and developed a NASA Metric Transition Plan. This Plan targets the end of 1995 for completion of NASA's metric initiatives. This Plan also identifies future programs that NASA anticipates will use the metric system of measurement. Field installations began metric transition studies in 1991 and will complete them in 1992. Half of NASA's Space Shuttle payloads for 1991, and almost all such payloads for 1992, have some metric-based elements. In 1992, NASA will begin assessing requirements for space-quality piece parts fabricated to U.S. metric standards, leading to development and qualification of high priority parts.
Measuring β-diversity with species abundance data.
Barwell, Louise J; Isaac, Nick J B; Kunin, William E
2015-07-01
In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication, 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, β_sim, performed well overall. Only β_Baselga-R-turn, β_Baselga-B-C-turn and β_sim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
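As a concrete example of one of the metrics discussed, β_sim (the presence-absence Simpson dissimilarity) can be computed from shared and unique species counts; the sketch below uses the standard formula min(b, c)/(a + min(b, c)).

```python
def beta_sim(assemblage_1, assemblage_2):
    """Presence-absence Simpson dissimilarity: min(b, c) / (a + min(b, c)),
    where a = shared species and b, c = species unique to each assemblage.
    It measures turnover independently of nestedness."""
    s1, s2 = set(assemblage_1), set(assemblage_2)
    a = len(s1 & s2)
    b, c = len(s1 - s2), len(s2 - s1)
    denom = a + min(b, c)
    return min(b, c) / denom if denom else 0.0

# A nested pair (no turnover) scores 0; complete turnover scores 1.
print(beta_sim({"sp1", "sp2", "sp3"}, {"sp1", "sp2"}))  # 0.0
print(beta_sim({"sp1", "sp2"}, {"sp3", "sp4"}))          # 1.0
```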
A neural net-based approach to software metrics
NASA Technical Reports Server (NTRS)
Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.
1992-01-01
Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
Genetic Programming for Automatic Hydrological Modelling
NASA Astrophysics Data System (ADS)
Chadalawada, Jayashree; Babovic, Vladan
2017-04-01
One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, the increasing volume of data and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used to integrate alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is influenced by prior understanding and data include: choice of the technique for the induction of knowledge from data, identification of alternative structural hypotheses, definition of rules and constraints for meaningful, intelligent combination of model component hypotheses, and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs based on a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow-duration-curve-based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).
Quality of service routing in the differentiated services framework
NASA Astrophysics Data System (ADS)
Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo
2001-02-01
In this paper we present a quality-of-service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. The routing strategy is based on a quality-of-service metric that represents the impact that the delay and losses observed at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. Distribution of the metric is triggered by a relative criterion with two thresholds, and the advertised values are the moving average of the most recent measurements.
Classification of Hamilton-Jacobi separation in orthogonal coordinates with diagonal curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajaratnam, Krishan, E-mail: k2rajara@uwaterloo.ca; McLenaghan, Raymond G., E-mail: rgmclenaghan@uwaterloo.ca
2014-08-15
We find all orthogonal metrics where the geodesic Hamilton-Jacobi equation separates and the Riemann curvature tensor satisfies a certain equation (called the diagonal curvature condition). All orthogonal metrics of constant curvature satisfy the diagonal curvature condition. The metrics we find either correspond to a Benenti system or are warped product metrics where the induced metric on the base manifold corresponds to a Benenti system. Furthermore, we show that most metrics we find are characterized by concircular tensors; these metrics, called Kalnins-Eisenhart-Miller metrics, have an intrinsic characterization which can be used to obtain them on a given space. In conjunction with other results, we show that the metrics we found constitute all separable metrics for Riemannian spaces of constant curvature and de Sitter space.
Grading the Metrics: Performance-Based Funding in the Florida State University System
ERIC Educational Resources Information Center
Cornelius, Luke M.; Cavanaugh, Terence W.
2016-01-01
A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
The geometric nature of weights in real complex networks
NASA Astrophysics Data System (ADS)
Allard, Antoine; Serrano, M. Ángeles; García-Pérez, Guillermo; Boguñá, Marián
2017-01-01
The topology of many real complex networks has been conjectured to be embedded in hidden metric spaces, where distances between nodes encode their likelihood of being connected. Besides providing a natural geometrical interpretation of their complex topologies, this hypothesis yields a recipe for sustainable Internet routing protocols, sheds light on the hierarchical organization of biochemical pathways in cells, and allows for a rich characterization of the evolution of international trade. Here we present empirical evidence that this geometric interpretation also applies to the weighted organization of real complex networks. We introduce a very general and versatile model and use it to quantify the level of coupling between their topology, their weights and an underlying metric space. Our model accurately reproduces both their topology and their weights, and our results suggest that the formation of connections and the assignment of their magnitude are ruled by different processes.
Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform
NASA Astrophysics Data System (ADS)
Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo
2010-08-01
A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by examining the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to the objective evaluation of improvements in impact sound quality for cases where parts of the suspension system of the test car are modified.
Atmospheric Science Data Center
2013-03-12
Metric Weights and Measures: The metric system is based on 10s. For example, 10 millimeters = 1 centimeter, 10 ... Special Publications: NIST Guide to SI Units: Conversion Factors ...
NASA Astrophysics Data System (ADS)
Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.
2014-02-01
This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest-neighbor relations and the descriptor space.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
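To make the DCT-domain idea concrete, the toy sketch below computes a block-DCT error between reference and test frames and pools it with a Minkowski norm. The flat coefficient weighting is only a placeholder for the spatial and temporal contrast-sensitivity calibration the paper describes, so the numbers it produces are illustrative, not the metric itself.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(frame, block=8):
    """8x8 block DCT of a grayscale frame (dimensions assumed divisible by the block size)."""
    h, w = frame.shape
    blocks = frame.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def dct_error_score(reference_frames, test_frames, exponent=4.0):
    """Toy DCT-domain distortion score: Minkowski pooling of block-DCT coefficient
    differences across blocks and frames (lower = better). The flat weighting stands
    in for the contrast-sensitivity calibration described in the paper."""
    errors = []
    for ref, test in zip(reference_frames, test_frames):
        diff = blockwise_dct(ref.astype(float)) - blockwise_dct(test.astype(float))
        errors.append(np.abs(diff) ** exponent)
    return float(np.mean(np.stack(errors)) ** (1.0 / exponent))

reference = [np.random.rand(64, 64) for _ in range(5)]               # hypothetical 5-frame clip
distorted = [f + 0.05 * np.random.randn(64, 64) for f in reference]  # mildly degraded copy
print(dct_error_score(reference, distorted))
```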
Tide or Tsunami? The Impact of Metrics on Scholarly Research
ERIC Educational Resources Information Center
Bonnell, Andrew G.
2016-01-01
Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…
Term Based Comparison Metrics for Controlled and Uncontrolled Indexing Languages
ERIC Educational Resources Information Center
Good, B. M.; Tennis, J. T.
2009-01-01
Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…
Performance assessment in brain-computer interface-based augmentative and alternative communication
2013-01-01
A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
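For reference, the most widely used form of ITR is Wolpaw's formula, shown in the sketch below; whether the review intends this exact variant (as opposed to mutual-information-based alternatives) is an assumption.

```python
import math

def wolpaw_itr_bits_per_minute(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate: bits per selection scaled to selections per minute."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# Example: a 4-target BCI speller at 85% accuracy with 3-second selections.
print(wolpaw_itr_bits_per_minute(4, 0.85, 3.0))  # roughly 23 bits/min
```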
Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M
2017-09-01
Measuring quality outcomes is an important prerequisite to improve quality of care. Rhinosinusitis represents a high value target to improve quality of care because it has a high prevalence of disease, large economic burden, and large practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.
Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria
2015-01-01
Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence. PMID:26029135
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
What Can Article-Level Metrics Do for You?
Fenner, Martin
2013-01-01
Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this essay, we describe why article-level metrics are an important extension of traditional citation-based journal metrics and provide a number of examples from ALM data collected for PLOS Biology. PMID:24167445
Checkpoint triggering in a computer system
Cher, Chen-Yong
2016-09-06
According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
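A minimal sketch of the thresholded trigger logic is given below; the pickle-based snapshot and the particular metric are illustrative choices, not the patented mechanism.

```python
import pickle
import time

def maybe_checkpoint(task_state, metric_value, threshold, path="checkpoint.pkl"):
    """Write a checkpoint of the task state when the monitored metric crosses the
    threshold, so the task can be restarted from it later. The pickle snapshot and
    the metric are illustrative choices, not the patented mechanism."""
    if metric_value >= threshold:
        with open(path, "wb") as f:
            pickle.dump({"timestamp": time.time(), "state": task_state}, f)
        return True
    return False

# Hypothetical monitor: checkpoint once an accumulated error count (the metric)
# read from the monitor exceeds a threshold derived from that value's history.
state = {"iteration": 1200, "partial_result": [1, 2, 3]}
print(maybe_checkpoint(state, metric_value=17, threshold=10))  # True -> checkpoint written
```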
Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L
2013-12-01
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Algal bioassessment metrics for wadeable streams and rivers of Maine, USA
Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth
2011-01-01
Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a land-use gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.
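The weighted-average optimum used to assign tolerance values is simply an abundance-weighted mean of an environmental variable across sampling sites, as in the sketch below (hypothetical numbers).

```python
import numpy as np

def weighted_average_optimum(abundances, env_values):
    """Weighted-average optimum of a taxon for an environmental variable
    (e.g. total P or % developed watershed): the abundance-weighted mean
    of that variable across sampling sites."""
    a = np.asarray(abundances, dtype=float)
    x = np.asarray(env_values, dtype=float)
    return float(np.sum(a * x) / np.sum(a))

# Hypothetical diatom taxon counted at five sites with known % developed watershed.
abundance = [12, 40, 3, 0, 25]
pct_developed = [5.0, 12.0, 40.0, 65.0, 8.0]
print(weighted_average_optimum(abundance, pct_developed))  # low optimum suggests a 'sensitive' taxon
```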
NASA Astrophysics Data System (ADS)
Boé, Julien; Terray, Laurent
2014-05-01
Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, the lack of a rationale for choosing a particular climate model over others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple models' results taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of an answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used, via a metric-based approach, to derive a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements associated with metric-based approaches compared to the MMEM in terms of errors and uncertainties will be quantified.
Cantuaria, Manuella Lech; Suh, Helen; Løfstrøm, Per; Blanes-Vidal, Victoria
2016-11-01
The assignment of exposure is one of the main challenges faced by environmental epidemiologists. However, misclassification of exposures has not been explored in population epidemiological studies on air pollution from biodegradable wastes. The objective of this study was to investigate the use of different approaches for assessing exposure to air pollution from biodegradable wastes by analyzing (1) the misclassification of exposure that is committed by using these surrogates, (2) the existence of differential misclassification, (3) the effects that misclassification may have on health effect estimates and the interpretation of epidemiological results, and (4) the ability of the exposure measures to predict health outcomes using 10-fold cross validation. Four different exposure assessment approaches were studied: ammonia concentrations at the residence (Metric I), distance to the closest source (Metric II), number of sources within certain distances from the residence (Metric IIIa,b) and location in a specific region (Metric IV). Exposure-response models based on Metric I provided the highest predictive ability (72.3%) and goodness-of-fit, followed by IV, III and II. When compared to Metric I, Metric IV yielded the best results for exposure misclassification analysis and interpretation of health effect estimates, followed by Metric IIIb, IIIa and II. The study showed that modelled NH3 concentrations provide more accurate estimations of true exposure than distance-based surrogates, and that distance-based surrogates (especially those based on distance to the closest point source) are imprecise methods to identify exposed populations, although they may be useful for initial studies. Copyright © 2016 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad
2015-11-01
In this paper, a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as the root-mean-square (RMS) of the sprung-mass acceleration and the expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test-bed is utilized and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
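As a small illustration of the two performance metrics named above, the sketch below computes the RMS of a sprung-mass acceleration signal and the mean of a harvested-power signal; the signals are synthetic stand-ins, since the study obtains them from a suspension model driven by ISO 8608 road profiles and from test-bed measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical signals: sprung-mass acceleration (m/s^2) and instantaneous
# harvested power (W) sampled over a 10 s test run.
dt = 0.001
t = np.arange(0, 10, dt)
accel = 0.5 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)
power = np.clip(3.0 + rng.standard_normal(t.size), 0, None)

rms_accel = np.sqrt(np.mean(accel ** 2))   # ride-comfort metric
mean_power = np.mean(power)                # energy-regeneration metric

print(f"RMS acceleration: {rms_accel:.3f} m/s^2, mean power: {mean_power:.2f} W")
```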
DebtRank-transparency: Controlling systemic risk in financial networks
Thurner, Stefan; Poledna, Sebastian
2013-01-01
Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (such as DebtRank) of the interbank liability network. With a simple agent-based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e. making the DebtRank of individual banks visible to others, and by imposing a rule that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed. PMID:23712454
Pitch and time, tonality and meter: how do musical dimensions combine?
Prince, Jon B; Thompson, William F; Schmuckler, Mark A
2009-10-01
The authors examined how the structural attributes of tonality and meter influence musical pitch-time relations. Listeners heard a musical context followed by probe events that varied in pitch class and temporal position. Tonal and metric hierarchies contributed additively to the goodness-of-fit of probes, with pitch class exerting a stronger influence than temporal position (Experiment 1), even when listeners attempted to ignore pitch (Experiment 2). Speeded classification tasks confirmed this asymmetry. Temporal classification was biased by tonal stability (Experiment 3), but pitch classification was unaffected by temporal position (Experiment 4). Experiments 5 and 6 ruled out explanations based on the presence of pitch classes and temporal positions in the context, unequal stimulus quantity, and discriminability. The authors discuss how typical Western music biases attention toward pitch and distinguish between dimensional discriminability and salience. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
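The letter's method is not reproduced here, but the low-rank idea it relies on can be sketched with a Nyström-style approximation of a kernel matrix: the full n-by-n kernel is approximated from its values on a small set of landmark points, which is what allows scaling to larger data sets. The RBF kernel, landmark count, and data below are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 10))                    # data set (n x d)
landmarks = X[rng.choice(len(X), 50, replace=False)]   # m << n landmark points

# Nystrom-style low-rank factors: K ~= C W^+ C.T
C = rbf_kernel(X, landmarks)                           # n x m
W = rbf_kernel(landmarks, landmarks)                   # m x m
K_approx = C @ np.linalg.pinv(W) @ C.T

K_exact = rbf_kernel(X, X)
print("relative error:",
      np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))
```

In practice one would never form the exact kernel for a large data set; the point of the low-rank factors is that downstream computations can work with C and W alone.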
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
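The contrast between the two distance metrics discussed above can be illustrated with a small sketch: for two vectors that point in the same direction but differ in magnitude (an idealized stand-in for GMM mean supervectors from the same speaker), the Euclidean distance is large while the cosine distance is near zero. The dimensionality and vectors are made up.

```python
import numpy as np

def euclidean_distance(u, v):
    return np.linalg.norm(u - v)

def cosine_distance(u, v):
    # 1 - cosine similarity; depends on direction only, not magnitude,
    # which is the property exploited for supervector clustering above.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(3)
sv_a = rng.standard_normal(512)                        # hypothetical supervector
sv_b = 2.5 * sv_a + 0.05 * rng.standard_normal(512)    # same direction, scaled

print(euclidean_distance(sv_a, sv_b))   # large: sensitive to magnitude
print(cosine_distance(sv_a, sv_b))      # near zero: same direction
```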
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA), and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient's data set from an intensive care burn unit and a standard machine learning data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches existing in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
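For readers unfamiliar with the hypervolume indicator used above to compare Pareto fronts, here is a minimal sketch for the two-objective case (misclassification rate and number of rules, both minimized); the fronts and the reference point are invented, and real studies typically rely on library implementations for more than two objectives.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective front (both objectives minimized)
    with respect to a reference point `ref` that is worse in both objectives."""
    pts = sorted(front)                       # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # only non-dominated steps add area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Hypothetical fronts: (1 - classification rate, number of fuzzy rules).
front_a = [(0.07, 14), (0.10, 9), (0.15, 5)]
front_b = [(0.09, 15), (0.12, 10), (0.18, 6)]
ref = (0.5, 30)

print(hypervolume_2d(front_a, ref), hypervolume_2d(front_b, ref))
```

The front that dominates a larger area relative to the reference point gets the larger hypervolume, which is how the indicator ranks competing multi-objective algorithms.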
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, D; Anderson, C; Mayo, C
Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.
Metrics for Radiologists in the Era of Value-based Health Care Delivery.
Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B
2015-01-01
Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients. (©)RSNA, 2015.
Moral empiricism and the bias for act-based rules.
Ayars, Alisabeth; Nichols, Shaun
2017-10-01
Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the preponderance of act-based rules. Copyright © 2017 Elsevier B.V. All rights reserved.
On the new metrics for IMRT QA verification.
Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo
2016-11-01
The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria for the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans concerning several patient locations are analyzed. All of them were verified prior to the treatment, using conventional metrics, and retrospectively after the treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a Monte Carlo redundant calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index from a radiobiological point of view is defined to classify plans by their robustness against dose changes. Dose difference metrics can be condensed into a single parameter: the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristic (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is due to the fact that the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore, biomathematical treatment outcome models are insensitive to big dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models, which provide a better correlation, is proposed. In any case, it is better to choose robust plans from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and presents a good performance in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers regarding dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate if a plan is likely to be rejected.
Conventional verification should be replaced by the new metrics, which are clinically more relevant.
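As an illustration of the ROC-based cutoff selection mentioned above, the sketch below builds a ROC curve for a generic per-plan metric against accept/reject labels, computes the area under the curve, and picks the cutoff that maximizes Youden's J statistic. The scores and labels are simulated, not the study's data.

```python
import numpy as np

def roc_curve(scores, labels):
    """FPR, TPR and thresholds for a metric where higher scores indicate
    plans that should be rejected (label 1)."""
    order = np.argsort(-scores)
    scores, labels = scores[order], labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (len(labels) - labels.sum())
    return fpr, tpr, scores

rng = np.random.default_rng(4)
# Hypothetical dose-difference global function values for 58 plans and
# reference accept(0)/reject(1) decisions.
labels = (rng.random(58) < 0.3).astype(int)
scores = rng.normal(1.0, 0.4, 58) + 1.2 * labels

fpr, tpr, thr = roc_curve(scores, labels)
fpr0, tpr0 = np.r_[0.0, fpr], np.r_[0.0, tpr]
auc = np.sum(np.diff(fpr0) * (tpr0[1:] + tpr0[:-1]) / 2)   # trapezoid AUC
best = np.argmax(tpr - fpr)                                # Youden's J
print(f"AUC = {auc:.2f}, optimal cutoff = {thr[best]:.2f}")
```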
Using a safety forecast model to calculate future safety metrics.
DOT National Transportation Integrated Search
2017-05-01
This research sought to identify a process to improve long-range planning prioritization by using forecasted safety metrics in place of the existing Utah Department of Transportation Safety Index, a metric based on historical crash data. The res...
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols where different symbols corresponded to different quantiles of the probability distribution of streamflow. A fixed symbol alphabet was used. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters, and streamflow time series were less random and more complex than those of precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
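A minimal sketch of the symbolization step and of one of the three metrics (mean information gain, taken here as the conditional entropy of the next symbol given the current one) is shown below; the alphabet size, the exact definition, and the synthetic series are assumptions rather than the authors' configuration.

```python
import numpy as np
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map a time series to symbols defined by quantile bins."""
    qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, qs)

def mean_information_gain(symbols):
    """Conditional entropy H(next | current) in bits for a symbol string."""
    pairs = Counter(zip(symbols[:-1], symbols[1:]))
    singles = Counter(symbols[:-1])
    n = len(symbols) - 1
    mig = 0.0
    for (a, b), count in pairs.items():
        p_ab = count / n
        p_b_given_a = count / singles[a]
        mig -= p_ab * np.log2(p_b_given_a)
    return mig

rng = np.random.default_rng(5)
# Hypothetical daily "streamflow": autocorrelated, therefore less random
# than white noise with the same marginal distribution.
noise = rng.standard_normal(3650)
flow = np.convolve(noise, np.ones(10) / 10, mode="same")

print(mean_information_gain(symbolize(flow)))   # lower: more predictable
print(mean_information_gain(symbolize(noise)))  # higher: close to 2 bits
```

The smoothed series scores lower than the white noise, mirroring the paper's observation that streamflow is less random than precipitation.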
Metrics Handbook (Air Force Systems Command)
NASA Astrophysics Data System (ADS)
1991-08-01
The handbook is designed to help one develop and use good metrics. It is intended to provide sufficient information to begin developing metrics for objectives, processes, and tasks, and to steer one toward appropriate actions based on the data one collects. It should be viewed as a road map to assist one in arriving at meaningful metrics and to assist in continuous process improvement.
ERIC Educational Resources Information Center
Community School District 18, Brooklyn, NY.
This is the second part of a two-part teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. Topics include: (1) capacity; (2) calculation of volume and surface area of cylinders and cones; (3) mass; (4) temperature; and (5) metric conversions. (BB)
Magic bases, metric ansaetze and generalized graph theories in the Virasoro master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpern, M.B.; Obers, N.A.
1991-11-15
The authors define a class of magic Lie group bases in which the Virasoro master equation admits a class of simple metric ansaetze (g_metric), whose structure is visible in the high-level expansion. When a magic basis is real on compact g, the corresponding g_metric is a large system of unitary, generically irrational conformal field theories. Examples in this class include the graph-theory ansatz SO(n)_diag in the Cartesian basis of SO(n) and the ansatz SU(n)_metric in the Pauli-like basis of SU(n). A new phenomenon is observed in the high-level comparison of SU(n)_metric: due to the trigonometric structure constants of the Pauli-like basis, irrational central charge is clearly visible at finite order of the expansion. They also define the sine-area graphs of SU(n), which label the conformal field theories of SU(n)_metric, and note that, in a similar fashion, each magic basis of g defines a generalized graph theory on g which labels the conformal field theories of g_metric.
The impact of natural products upon modern drug discovery.
Ganesan, A
2008-06-01
In the period 1970-2006, a total of 24 unique natural products were discovered that led to an approved drug. We analyze these successful leads in terms of drug-like properties, and show that they can be divided into two equal subsets. The first falls in the 'Lipinski universe' and complies with the Rule of Five. The second is a 'parallel universe' that violates the rules. Nevertheless, the latter compounds remain largely compliant in terms of logP and H-bond donors, highlighting the importance of these two metrics in predicting bioavailability. Natural products are often cited as an exception to Lipinski's rules. We believe this is because nature has learned to maintain low hydrophobicity and intermolecular H-bond donating potential when it needs to make biologically active compounds with high molecular weight and large numbers of rotatable bonds. In addition, natural products are more likely than purely synthetic compounds to resemble biosynthetic intermediates or endogenous metabolites, and hence take advantage of active transport mechanisms. Interestingly, the natural product leads in the Lipinski and parallel universe had an identical success rate (50%) in delivering an oral drug.
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
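The aggregation of metrics into a station grade can be sketched as a weighted average of scaled metric values, which is conceptually what the DQA web interface lets users adjust; the metric names below follow the list above, but the scaling, values, and weights are invented for illustration.

```python
# Hypothetical per-station metric values, each already scaled to 0-100
# (100 = best). Names echo the metrics listed above; numbers are made up.
station_metrics = {
    "availability": 99.2,
    "gap_count": 95.0,
    "timing_quality": 88.5,
    "noise_vs_global_model": 76.0,
    "coherence_colocated": 91.3,
    "event_synthetic_comparison": 83.7,
}

# User-adjustable weights controlling each metric's contribution.
weights = {
    "availability": 2.0,
    "gap_count": 1.0,
    "timing_quality": 1.0,
    "noise_vs_global_model": 1.5,
    "coherence_colocated": 1.0,
    "event_synthetic_comparison": 1.5,
}

def station_grade(metrics, weights):
    """Weighted average of scaled metric values."""
    total_w = sum(weights[m] for m in metrics)
    return sum(metrics[m] * weights[m] for m in metrics) / total_w

print(f"Station grade: {station_grade(station_metrics, weights):.1f}")
```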
ChemicalTagger: A tool for semantic text-mining in chemistry.
Hawizy, Lezan; Jessop, David M; Adams, Nico; Murray-Rust, Peter
2011-05-16
The primary method for scientific communication is in the form of published scientific articles and theses which use natural language combined with domain-specific terminology. As such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts-of-speech. The ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision.
Mallidi, Srivalleesha; Anbil, Sriram; Lee, Seonkyung; Manstein, Dieter; Elrington, Stefan; Kositratna, Garuna; Schoenfeld, David; Pogue, Brian; Davis, Steven J; Hasan, Tayyaba
2014-02-01
The need for patient-specific photodynamic therapy (PDT) in dermatologic and oncologic applications has triggered several studies that explore the utility of surrogate parameters as predictive reporters of treatment outcome. Although photosensitizer (PS) fluorescence, a widely used parameter, can be viewed as emission from several fluorescent states of the PS (e.g., minimally aggregated and monomeric), we suggest that singlet oxygen luminescence (SOL) indicates only the active PS component responsible for the PDT. Here, the ability of discrete PS fluorescence-based metrics (absolute and percent PS photobleaching and PS re-accumulation post-PDT) to predict the clinical phototoxic response (erythema) resulting from 5-aminolevulinic acid PDT was compared with discrete SOL (DSOL)-based metrics (DSOL counts pre-PDT and change in DSOL counts pre/post-PDT) in healthy human skin. Receiver operating characteristic curve (ROC) analyses demonstrated that absolute fluorescence photobleaching metric (AFPM) exhibited the highest area under the curve (AUC) of all tested parameters, including DSOL based metrics. The combination of dose-metrics did not yield better AUC than AFPM alone. Although sophisticated real-time SOL measurements may improve the clinical utility of SOL-based dosimetry, discrete PS fluorescence-based metrics are easy to implement, and our results suggest that AFPM may sufficiently predict the PDT outcomes and identify treatment nonresponders with high specificity in clinical contexts.
Specification-based software sizing: An empirical investigation of function metrics
NASA Technical Reports Server (NTRS)
Jeffery, Ross; Stathis, John
1993-01-01
For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.
NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.
2014-09-01
This paper will investigate energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
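For reference, the three metrics named above can be computed directly from runtime, average power, and problem size; the sketch below uses invented numbers for one CPU run and one GPU run and one common definition of each metric (other formulations, such as energy-delay-squared, also exist).

```python
# Hypothetical measurements for one reconstruction on two platforms.
# Units: runtime in seconds, average power in watts, work in voxels updated.
runs = {
    "CPU": {"runtime_s": 1800.0, "avg_power_w": 250.0, "voxels": 2.0e9},
    "GPU": {"runtime_s": 240.0,  "avg_power_w": 300.0, "voxels": 2.0e9},
}

for name, r in runs.items():
    energy_j = r["runtime_s"] * r["avg_power_w"]   # energy consumption (J)
    perf = r["voxels"] / r["runtime_s"]            # throughput (voxels/s)
    perf_per_watt = perf / r["avg_power_w"]        # performance-per-watt
    edp = energy_j * r["runtime_s"]                # energy-delay product
    print(f"{name}: E={energy_j:.2e} J, perf/W={perf_per_watt:.2e}, EDP={edp:.2e}")
```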
NASA Technical Reports Server (NTRS)
Idris, Husni; Shen, Ni; Wing, David J.
2011-01-01
The growing demand for air travel is increasing the need for mitigating air traffic congestion and complexity problems, which are already at high levels. At the same time, new surveillance, navigation, and communication technologies are enabling major transformations in the air traffic management system, including net-based information sharing and collaboration, performance-based access to airspace resources, and trajectory-based rather than clearance-based operations. The new system will feature different schemes for allocating tasks and responsibilities between the ground and airborne agents and between the human and automation, with potential capacity and cost benefits. Therefore, complexity management requires new metrics and methods that can support these new schemes. This paper presents metrics and methods for preserving trajectory flexibility that have been proposed to support a trajectory-based approach for complexity management by airborne or ground-based systems. It presents extensions to these metrics as well as to the initial research conducted to investigate the hypothesis that using these metrics to guide user and service provider actions will naturally mitigate traffic complexity. The analysis showed promising results in that: (1) Trajectory flexibility preservation mitigated traffic complexity as indicated by inducing self-organization in the traffic patterns and lowering traffic complexity indicators such as dynamic density and traffic entropy. (2) Trajectory flexibility preservation reduced the potential for secondary conflicts in separation assurance. (3) Trajectory flexibility metrics showed potential application to support user and service provider negotiations for minimizing the constraints imposed on trajectories without jeopardizing their objectives.
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Develop metrics of tire debris on Texas highways : [project summary].
DOT National Transportation Integrated Search
2017-05-01
This research developed metrics on the amount and characteristics of tire debris generated on Texas highways. These metrics provide numerical, data-based rates for districts to anticipate the amounts and characteristics of tire debris and to plan rem...
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S. [Los Angeles, CA]; Dyer, James D. [La Mirada, CA]; Martinez Morales, Carlos A. [Upland, CA]
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time performance monitoring and management system
Budhraja, Vikram S. [Los Angeles, CA]; Dyer, James D. [La Mirada, CA]; Martinez Morales, Carlos A. [Upland, CA]
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment
NASA Astrophysics Data System (ADS)
Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.
2017-12-01
We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature were analyzed for the RCP8.5 and RCP4.5 scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.
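A minimal sketch of the final combination step (a weighted ensemble average of single-model 30-year metric values, reported alongside the single-model range) is given below; the cooling-degree-day values and the weights are placeholders for the independence- and skill-based weights described above.

```python
import numpy as np

# Hypothetical 30-year mean cooling degree days for one metropolitan area
# from a handful of downscaled models, plus stand-in model weights.
model_cdd = np.array([980.0, 1040.0, 1110.0, 995.0, 1065.0])
weights = np.array([0.25, 0.15, 0.20, 0.22, 0.18])
weights = weights / weights.sum()              # normalize the weights

ensemble_mean = np.sum(weights * model_cdd)    # weighted ensemble average
low, high = model_cdd.min(), model_cdd.max()   # single-model range

print(f"weighted mean = {ensemble_mean:.0f} CDD, range = [{low:.0f}, {high:.0f}]")
```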
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
A scoring mechanism for the rank aggregation of network robustness
NASA Astrophysics Data System (ADS)
Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin
2013-10-01
To date, a number of metrics have been proposed to quantify inherent robustness of network topology against failures. However, each single metric usually only offers a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in different orders, which is rather inconsistent, and no single metric fully characterizes network robustness against different modes of failure. To overcome such inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. This is based on simultaneous utilization of a minimal set of distinct robustness metrics that are standardized so as to allow a direct comparison of vulnerability across networks with different sizes and configurations, hence leading to an initial scoring of inherent topology robustness. Subsequently, based on the initial scoring, a rank aggregation method is employed to allocate an overall ranking of robustness to each network topology. A discussion is presented in support of the proposed multi-metric approach and its applications to more realistically assess and rank network topology robustness.
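One simple way to realize the standardize-then-aggregate idea described above is sketched below: each robustness metric is z-scored across networks so that scales are comparable, per-metric ranks are formed, and a Borda-style sum of ranks gives the aggregate ordering. The metric values are invented, and the paper's actual rank-aggregation method may differ.

```python
import numpy as np

# Rows: networks, columns: robustness metrics (higher = more robust here).
# All values are hypothetical.
networks = ["net_A", "net_B", "net_C", "net_D"]
metrics = np.array([
    [0.42, 3.1, 0.75],
    [0.55, 2.4, 0.81],
    [0.31, 3.8, 0.60],
    [0.47, 2.9, 0.77],
])

# Standardize each metric (z-score) so metrics on different scales compare.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

# Borda-style aggregation: rank networks per metric, then sum the ranks.
ranks = z.argsort(axis=0).argsort(axis=0)   # 0 = worst, n-1 = best per metric
borda = ranks.sum(axis=1)

order = np.argsort(-borda)
print([networks[i] for i in order])         # aggregate robustness ranking
```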
Non-naturally reductive Einstein metrics on exceptional Lie groups
NASA Astrophysics Data System (ADS)
Chrysikos, Ioannis; Sakane, Yusuke
2017-06-01
Given an exceptional compact simple Lie group G we describe new left-invariant Einstein metrics which are not naturally reductive. In particular, we consider fibrations of G over flag manifolds with a certain kind of isotropy representation and we construct the Einstein equation with respect to the induced left-invariant metrics. Then we apply a technique based on Gröbner bases and classify the real solutions of the associated algebraic systems. For the Lie group G2 we obtain the first known example of a left-invariant Einstein metric, which is not naturally reductive. Moreover, for the Lie groups E7 and E8, we conclude that there exist non-isometric non-naturally reductive Einstein metrics, which are Ad(K)-invariant by different Lie subgroups K.
Spielman, David J; Kennedy, Adam
2016-09-01
Since the 1980s, many developing countries have introduced policies to promote seed industry growth and improve the delivery of modern science to farmers, often with a long-term goal of increasing agricultural productivity in smallholder farming systems. Public, private, and civil society actors involved in shaping policy designs have, in turn, developed competing narratives around how best to build an innovative and sustainable seed system, each with varying goals, values, and levels of influence. Efforts to strike a balance between these narratives have often played out in passionate discourses surrounding seed rules and regulations. As a result, however, policymakers in many countries have expressed impatience with the slow progress on enhancing the contribution of a modern seed industry to the overarching goal of increasing agricultural productivity growth. One reason for this slow progress may be that policymakers are insufficiently cognizant of the trade-offs associated with rules and regulations required to effectively govern a modern seed industry. This suggests the need for new data and analysis to improve the understanding of how seed systems function. This paper explores these issues in the context of Asia's rapidly growing seed industry, with illustrations from seed markets for maize and several other crops, to highlight current gaps in the metrics used to analyze performance, competition, and innovation. The paper provides a finite set of indicators to inform policymaking on seed system design and monitoring, and explores how these indicators can be used to inform current policy debates in the region.
Dolan, Robert W; Nesto, Richard; Ellender, Stacey; Luccessi, Christopher
Hospitals and healthcare systems are introducing incentive metrics into compensation plans that align with value-based payment methodologies. These incentive measures should be considered a practical application of the transition from volume to value and will likely replace traditional productivity-based compensation in the future. During the transition, there will be provider resistance and implementation challenges. This article examines a large multispecialty group's experience with a newly implemented incentive compensation plan, including the structure of the plan, formulas for calculation of the payments, the mix of quality and productivity metrics, and metric threshold achievement. Three rounds of surveys with comments were collected to measure knowledge and attitudes regarding the plan. Lessons learned and specific recommendations for success are described. Participants' knowledge and attitudes regarding the plan are important considerations and affect morale and engagement. Significant provider dissatisfaction with the plan was found. Careful metric selection, design, and management are critical activities that will facilitate provider acceptance and support. Improvements in data collection and reporting will be needed to produce reliable metrics that can supplant traditional volume-based productivity measures.
MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT
2007-01-30
The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.
Caverzagie, Kelly J; Lane, Susan W; Sharma, Niraj; Donnelly, John; Jaeger, Jeffrey R; Laird-Fick, Heather; Moriarty, John P; Moyer, Darilyn V; Wallach, Sara L; Wardrop, Richard M; Steinmann, Alwin F
2017-12-12
Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.
NASA Astrophysics Data System (ADS)
Heudorfer, Benedikt; Haaf, Ezra; Barthel, Roland; Stahl, Kerstin
2017-04-01
A new framework for quantification of groundwater dynamics has been proposed in a companion study (Haaf et al., 2017). In this framework, a number of conceptual aspects of dynamics, such as seasonality, regularity, flashiness or inter-annual forcing, are described, which are then linked to quantitative metrics. A large number of possible metrics are readily available from the literature, such as Pardé Coefficients, Colwell's Predictability Indices or the Base Flow Index. In the present work, we focus on finding multicollinearity and, in consequence, redundancy among the metrics representing different patterns of dynamics found in groundwater hydrographs. This is also done to verify the categories of dynamics aspects suggested by Haaf et al. (2017). To determine the optimal set of metrics, we need to balance the desired minimum number of metrics and the desired maximum descriptive property of the metrics. To do this, a substantial number of candidate metrics are applied to a diverse set of groundwater hydrographs from France, Germany and Austria within the northern alpine and peri-alpine region. By applying Principal Component Analysis (PCA) to the correlation matrix of the metrics, we determine a limited number of relevant metrics that describe the majority of variation in the dataset. The resulting reduced set of metrics comprises an optimized set that can be used to describe the aspects of dynamics that were identified within the groundwater dynamics framework. For some aspects of dynamics a single significant metric could be attributed. Other aspects have a more fuzzy quality that can only be described by an ensemble of metrics and are re-evaluated. The PCA is furthermore applied to groups of groundwater hydrographs containing regimes of similar behaviour in order to explore transferability when applying the metric-based characterization framework to groups of hydrographs from diverse groundwater systems. In conclusion, we identify an optimal number of metrics, which are readily available for usage in studies on groundwater dynamics, intended to help overcome analytical limitations that exist due to the complexity of groundwater dynamics. Haaf, E., Heudorfer, B., Stahl, K., Barthel, R., 2017. A framework for quantification of groundwater dynamics - concepts and hydro(geo-)logical metrics. EGU General Assembly 2017, Vienna, Austria.
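The redundancy screening step can be sketched as follows: build the correlation matrix of the candidate metrics across hydrographs, eigendecompose it, and count how many principal components are needed to explain most of the variance. The synthetic metric table below, driven by two latent aspects of dynamics, is only an illustration of the procedure, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "metric table": rows = groundwater hydrographs, columns = metrics.
# Two latent aspects of dynamics drive six correlated candidate metrics.
n_wells = 200
seasonality = rng.standard_normal(n_wells)
flashiness = rng.standard_normal(n_wells)
X = np.column_stack([
    seasonality + 0.1 * rng.standard_normal(n_wells),
    0.9 * seasonality + 0.2 * rng.standard_normal(n_wells),
    flashiness + 0.1 * rng.standard_normal(n_wells),
    0.8 * flashiness + 0.2 * rng.standard_normal(n_wells),
    0.5 * seasonality + 0.5 * flashiness + 0.2 * rng.standard_normal(n_wells),
    rng.standard_normal(n_wells),           # an unrelated metric
])

corr = np.corrcoef(X, rowvar=False)         # metric-by-metric correlation
eigvals, _ = np.linalg.eigh(corr)
explained = eigvals[::-1] / eigvals.sum()   # variance explained, descending

print(np.round(explained, 2))
print("components for 90%:", np.searchsorted(np.cumsum(explained), 0.9) + 1)
```

In this synthetic case a couple of components already capture most of the variance, signalling that many of the six candidate metrics are redundant.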
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
2015-05-01
Fusion of broadband panchromatic data with narrow band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of four commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (qualitative measures) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
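The tile-wise Fourier measure of spatial structure improvement described above might be sketched as follows. The tile size, the normalized-frequency cutoff and the use of a spectral energy fraction are assumptions for illustration rather than the exact definition used in the study.

    import numpy as np

    def high_freq_energy_fraction(img, cutoff=0.25):
        """Fraction of spectral energy above a normalized radial frequency cutoff."""
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
        fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
        r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # cycles/pixel
        return power[r > cutoff].sum() / power.sum()

    def spatial_improvement(sharpened, reference, tile=64):
        """Average tile-wise increase in high-frequency content vs. a reference image."""
        gains = []
        for y in range(0, reference.shape[0] - tile + 1, tile):
            for x in range(0, reference.shape[1] - tile + 1, tile):
                ref_tile = reference[y:y + tile, x:x + tile]
                shp_tile = sharpened[y:y + tile, x:x + tile]
                gains.append(high_freq_energy_fraction(shp_tile)
                             - high_freq_energy_fraction(ref_tile))
        return float(np.mean(gains))

Averaging over tiles, rather than computing one global spectrum, is what gives the metric its relative insensitivity to scene content.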
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
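As a rough illustration of the search metric only, the sketch below scores small candidate regression models by the standard deviation of their PRESS (leave-one-out) residuals. It omits the algorithm's SVD-based singularity rejection, the two threshold constraints and the hierarchy rule, so it should be read as a sketch of the core idea rather than the Ames algorithm itself.

    import numpy as np
    from itertools import combinations

    def press_residual_std(X, y):
        """Standard deviation of the PRESS (leave-one-out) residuals of a linear fit."""
        X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
        H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T         # hat matrix
        resid = y - H @ y
        press = resid / (1.0 - np.diag(H))                # leave-one-out residuals
        return press.std(ddof=1)

    def search_models(X, y, names, max_terms=3):
        """Exhaustively score small candidate term subsets by the PRESS metric."""
        best = None
        for k in range(1, max_terms + 1):
            for cols in combinations(range(X.shape[1]), k):
                score = press_residual_std(X[:, list(cols)], y)
                if best is None or score < best[0]:
                    best = (score, [names[c] for c in cols])
        return best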
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The Advanced Life Support (ALS) Program has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system. ESM indicates launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates: Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass, the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and easiest to explain.
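A mass-based estimate such as EM or ESM is essentially a weighted sum of the system's mass-like quantities. The sketch below shows the general form only; the equivalency factors are placeholder values standing in for mission-dependent infrastructure costs, not the documented ALS factors.

    def equivalent_system_mass(hardware_kg, volume_m3, power_kw, cooling_kw,
                               crew_time_hr, logistics_kg,
                               veq_kg_per_m3=10.0, peq_kg_per_kw=100.0,
                               ceq_kg_per_kw=50.0, cteq_kg_per_hr=1.0):
        """ESM-style mass-based cost estimate in kilograms.

        The equivalency factors convert volume, power, cooling and crew time
        into launch-mass penalties; the values used here are placeholders,
        not the official ALS-documented factors.
        """
        return (hardware_kg + logistics_kg
                + volume_m3 * veq_kg_per_m3
                + power_kw * peq_kg_per_kw
                + cooling_kw * ceq_kg_per_kw
                + crew_time_hr * cteq_kg_per_hr)

    # SM, by contrast, would keep only the terms already measured in mass:
    # hardware_kg + logistics_kg.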
A defect-driven diagnostic method for machine tool spindles
Vogl, Gregory W.; Donmez, M. Alkan
2016-01-01
Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition. PMID:28065985
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical for progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
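The core of a mutual-information-based rate metric is the information shared between intended and selected symbols, estimated from a confusion matrix and scaled by the selection rate. The sketch below is a generic plug-in estimate; it does not include the language-model priors or the systematic-error model of the proposed metric, and the counts and selection rate are invented.

    import numpy as np

    def mutual_information_bits(joint_counts):
        """Plug-in estimate of I(intended; selected) in bits from a confusion matrix."""
        p = joint_counts / joint_counts.sum()
        px = p.sum(axis=1, keepdims=True)      # marginal over intended symbols
        py = p.sum(axis=0, keepdims=True)      # marginal over selected symbols
        nz = p > 0
        return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

    # Example: 3-symbol confusion matrix and a selection rate of 4 symbols/minute.
    counts = np.array([[40, 5, 5],
                       [4, 42, 4],
                       [6, 3, 41]])
    bits_per_selection = mutual_information_bits(counts)
    print(bits_per_selection * 4, "bits/min")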
Information risk and security modeling
NASA Astrophysics Data System (ADS)
Zivic, Predrag
2005-03-01
This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, and NSA configuration guidelines, along with the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and of architectural guidelines such as ISO7498-2, will be explained. Business-process-level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics, the presentation will explore the appropriate usage of these standards and will discuss the standards' approaches to conducting risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. The paper will show the relation between an attribute-based common baseline and corporate assets and controls for risk and security metrics, and it will be shown that such an approach spans all of the mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and of the defined risk and security space for modeling and measuring.
Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M
2008-06-01
Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
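The solution described above is SAS-based; as a language-neutral illustration of the same discrete-event idea, the sketch below simulates accrual times, DLT outcomes and evaluation windows for a classical 3+3 design. The DLT probabilities, accrual rate and 28-day window are invented, and the decision logic is the textbook 3+3 rule rather than the authors' full macro system or the "rolling 6" comparator.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_3plus3(p_dlt, accrual_rate=2.0, eval_window=28.0):
        """Very simplified 3+3 escalation simulation.

        p_dlt: per-dose-level DLT probabilities (illustrative values only).
        accrual_rate: mean patients accrued per 28 days (exponential inter-arrival).
        Returns (study_days, n_enrolled, mtd_level or None).
        """
        t, enrolled, level = 0.0, 0, 0
        while level < len(p_dlt):
            dlts = 0
            for _ in range(3):                           # accrue a cohort of 3
                t += rng.exponential(eval_window / accrual_rate)
                enrolled += 1
                dlts += rng.random() < p_dlt[level]
            t += eval_window                             # observe the cohort
            if dlts == 0:
                level += 1                               # escalate
            elif dlts == 1:
                for _ in range(3):                       # expand cohort to 6
                    t += rng.exponential(eval_window / accrual_rate)
                    enrolled += 1
                    dlts += rng.random() < p_dlt[level]
                t += eval_window
                if dlts <= 1:
                    level += 1
                else:
                    return t, enrolled, (level - 1) if level > 0 else None
            else:
                return t, enrolled, (level - 1) if level > 0 else None
        return t, enrolled, len(p_dlt) - 1

    durations = [simulate_3plus3([0.05, 0.10, 0.20, 0.35])[0] for _ in range(1000)]
    print("mean study duration (days):", np.mean(durations))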
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-the-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can thereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework, consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model, is put forward to address this research gap as the basis for future work in this field.
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric effectively trades off the Riemannian geometry structure against the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to state-of-the-art methods in convergence efficiency and numerical performance.
Angermeier, P.L.; Davideanu, G.
2004-01-01
Multimetric biotic indices increasingly are used to complement physicochemical data in assessments of stream quality. We initiated development of multimetric indices, based on fish communities, to assess biotic integrity of streams in two physiographic regions of central Romania. Unlike previous efforts to develop such indices for European streams, our metrics and scoring criteria were selected largely on the basis of empirical relations in the regions of interest. We categorised 54 fish species with respect to ten natural-history attributes, then used this information to compute 32 candidate metrics of five types (taxonomic, tolerance, abundance, reproductive, and feeding) for each of 35 sites. We assessed the utility of candidate metrics for detecting anthropogenic impact based on three criteria: (a) range of values taken, (b) relation to a site-quality index (SQI), which incorporated information on hydrologic alteration, channel alteration, land-use intensity, and water chemistry, and (c) metric redundancy. We chose seven metrics from each region to include in preliminary multimetric indices (PMIs). Both PMIs included taxonomic, tolerance, and feeding metrics, but only two metrics were common to both PMIs. Although we could not validate our PMIs, their strong association with the SQI in each region suggests that such indices would be valuable tools for assessing stream quality and could provide more comprehensive assessments than the traditional approaches based solely on water chemistry.
Young, Laura K; Love, Gordon D; Smithson, Hannah E
2013-09-20
Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
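For readers unfamiliar with the VSOTF, its general form is the CSF-weighted volume under the OTF, normalized by the same quantity for the diffraction-limited OTF. The sketch below uses a Mannos-Sakrison-style CSF as a stand-in for the neural CSF of the original definition, takes the real part of the OTF as one of several conventions, and assumes both OTFs are sampled on the same uniform frequency grid.

    import numpy as np

    def csf(f_cpd):
        """Mannos-Sakrison-style contrast sensitivity function, used here as a
        stand-in for the neural CSF weighting in the VSOTF literature."""
        return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

    def vsotf(otf, otf_diffraction_limited, fx_cpd, fy_cpd):
        """CSF-weighted volume under the OTF, normalized by the diffraction-limited
        case; fx_cpd and fy_cpd are the frequency axes in cycles/degree."""
        FX, FY = np.meshgrid(fx_cpd, fy_cpd)
        w = csf(np.hypot(FX, FY))
        num = np.sum(w * otf.real)
        den = np.sum(w * otf_diffraction_limited.real)
        return float(num / den)

The band-weighted and orientation-aware variants discussed in the abstract would modify the weighting term w, not the overall ratio structure.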
GPS Device Testing Based on User Performance Metrics
DOT National Transportation Integrated Search
2015-10-02
1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs
A Methodology to Analyze Photovoltaic Tracker Uptime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew T; Ruth, Dan
A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented, so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days that are not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on two different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
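A minimal form of the daily correlation test might look like the sketch below. The correlation threshold and the "unclassified" fallback are illustrative assumptions; the published metric uses additional mathematical thresholds and logic not reproduced here.

    import numpy as np

    def daily_tracking_flag(power, poa_tracking, poa_stuck, r_threshold=0.9):
        """Classify one day of single-axis tracker data.

        power:        measured power time series for the day
        poa_tracking: simulated plane-of-array irradiance if the tracker follows the sun
        poa_stuck:    simulated POA irradiance for a fixed (stuck) orientation
        The 0.9 correlation threshold is an illustrative value, not the published one.
        """
        r_track = np.corrcoef(power, poa_tracking)[0, 1]
        r_stuck = np.corrcoef(power, poa_stuck)[0, 1]
        if max(r_track, r_stuck) < r_threshold:
            return "unclassified"      # e.g. cloudy day, curtailment, or outage
        return "tracking" if r_track >= r_stuck else "stuck"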
Performance regression manager for large scale systems
Faraj, Daniel A.
2017-10-17
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
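The claim describes comparing a named performance metric between two formatted run outputs. A minimal sketch of that comparison, assuming a simple JSON output format and that larger metric values are better (neither assumption comes from the patent text), is:

    import json

    def compare_metric(current_file, baseline_file, metric, tolerance=0.05):
        """Compare one performance metric between two run-output files.

        Assumes each output file is JSON of the form {"metric_name": value, ...}
        and that larger values are better; the real tool's predefined format is
        not specified in the abstract.
        """
        with open(current_file) as fc, open(baseline_file) as fb:
            current, baseline = json.load(fc), json.load(fb)
        old, new = float(baseline[metric]), float(current[metric])
        change = (new - old) / old if old else float("inf")
        status = "regression" if change < -tolerance else "ok"
        return {"metric": metric, "baseline": old, "current": new,
                "relative_change": change, "status": status}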
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A
2016-01-01
Background: Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods: A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results: Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025). Conclusions: Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
Model based high NA anamorphic EUV RET
NASA Astrophysics Data System (ADS)
Jiang, Fan; Wiaux, Vincent; Fenger, Germain; Clifford, Chris; Liubich, Vlad; Hendrickx, Eric
2018-03-01
With the announcement of the extension of the Extreme Ultraviolet (EUV) roadmap to a high-NA lithography tool that utilizes an anamorphic optics design, an investigation of design tradeoffs unique to imaging with an anamorphic lithography tool is presented. An anamorphic optical proximity correction (OPC) solution has been developed that fully models the EUV near-field electromagnetic effects and the anamorphic imaging using the Domain Decomposition Method (DDM). Imec clips representative of the N3 logic node were used to demonstrate the OPC solutions on critical layers that will benefit from the increased contrast at high NA using anamorphic imaging. However, unlike the isomorphic case, from the wafer perspective OPC needs to treat x and y differently. In the paper, we show a design trade-off unique to anamorphic EUV: using a mask rule of 48 nm (mask scale), approaching the current state of the art, limitations are observed in the correction that can be applied to the mask. The metal pattern has a pitch of 24 nm and a CD of 12 nm. During OPC, the correction of the vertically oriented metal lines is limited by the mask rule of 12 nm 1X. The horizontally oriented lines do not suffer from this mask-rule limitation, as the correction is allowed to go to 6 nm 1X. For this example, the mask rules will need to be more aggressive to allow complete correction, or design rules and wafer processes (wafer rotation) would need to be created that utilize the orientation that can image more aggressive features. When considering via or block-level correction, aggressive polygon corner-to-corner designs can be handled with various solutions, including applying a 45-degree chop. Multiple solutions are discussed with the metrics of edge placement error (EPE) and process variation bands (PVBands), together with all the mask constraints. Note that in anamorphic OPC the 45-degree chop is maintained at the mask level to meet mask manufacturing constraints, but results in a skewed-angle edge in the wafer-level correction. In this paper, we used both contact (via/block) patterns and metal patterns for the OPC exercise. By comparing the EPE of horizontal and vertical patterns with a fixed mask rule check (MRC), and the PVBand, we focus on the challenges and the solutions of OPC with an anamorphic high-NA lens.
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative, metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models ranks near or at the top systematically for all of the metrics used. Consequently, one cannot pick an absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performance also varies from event to event. This is particularly clear for the root-mean-square difference and utility-metric-based analyses. Further, the analyses indicate that for some of the models, increasing the spatial resolution of the global magnetohydrodynamic model and including the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
T-duality invariant effective actions at orders α', α'²
NASA Astrophysics Data System (ADS)
Razaghian, Hamid; Garousi, Mohammad R.
2018-02-01
We use the compatibility of the D-dimensional effective actions for the diagonal metric and for the dilaton with T-duality when the theory is compactified on a circle, to find the D-dimensional couplings of curvatures and dilaton as well as the higher-derivative corrections to the (D-1)-dimensional Buscher rules at orders α' and α'². We observe that the T-duality constraint on the effective actions fixes the covariant effective actions at each order of α' up to field redefinitions and up to an overall factor. Inspired by these results, we speculate that the D-dimensional effective actions at any order of α' must be consistent with the standard Buscher rules provided that one uses covariant field redefinitions in the corresponding reduced (D-1)-dimensional effective actions. This constraint may be used to find effective actions at all higher orders of α'.
This presentation comprises two sustainability metrics that have been developed for the Chicago Metropolitan Area under the SHC research program. The first sustainability metric is Ecological Footprint Analysis. Ecological Footprint Analysis (EFA) has been extensively deploy...
Performance metrics for the assessment of satellite data products: an ocean color case study
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage, and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...
A Survey of Solver-Related Geometry and Meshing Issues
NASA Technical Reports Server (NTRS)
Masters, James; Daniel, Derick; Gudenkauf, Jared; Hine, David; Sideroff, Chris
2016-01-01
There is a concern in the computational fluid dynamics community that mesh generation is a significant bottleneck in the CFD workflow. This is one of several papers that will help set the stage for a moderated panel discussion addressing this issue. Although certain general "rules of thumb" and a priori mesh metrics can be used to ensure that some base level of mesh quality is achieved, inadequate consideration is often given to the type of solver or particular flow regime on which the mesh will be utilized. This paper explores how an analyst may want to think differently about a mesh based on considerations such as if a flow is compressible vs. incompressible or hypersonic vs. subsonic or if the solver is node-centered vs. cell-centered. This paper is a high-level investigation intended to provide general insight into how considering the nature of the solver or flow when performing mesh generation has the potential to increase the accuracy and/or robustness of the solution and drive the mesh generation process to a state where it is no longer a hindrance to the analysis process.
Cross-linguistic evidence for memory storage costs in filler-gap dependencies with wh-adjuncts
Stepanov, Arthur; Stateva, Penka
2015-01-01
This study investigates processing of interrogative filler-gap dependencies in which the filler integration site or gap is not directly subcategorized by the verb. This is the case when the wh-filler is a structural adjunct such as how or when rather than subject or object. Two self-paced reading experiments in English and Slovenian provide converging cross-linguistic evidence that wh-adjuncts elicit a kind of memory storage cost similar to that previously shown in the literature for wh-arguments. Experiment 1 investigates the storage costs elicited by the adjunct when in Slovenian, and Experiment 2 the storage costs elicited by how quickly and why in English. The results support the class of theories of storage costs based on the metric in terms of incomplete phrase structure rules or incomplete syntactic head predictions. We also demonstrate that the endpoint of the storage cost for a wh-adjunct filler provides valuable processing evidence for its base structural position, the identification of which remains a rather murky issue in current grammatical research. PMID:26388806
Paradigm Change: Alternate Approaches to Constitutive and Necking Models for Sheet Metal Forming
NASA Astrophysics Data System (ADS)
Stoughton, Thomas B.; Yoon, Jeong Whan
2011-08-01
This paper reviews recent work proposing paradigm changes for the currently popular approach to constitutive and failure modeling, focusing on the use of non-associated flow rules to enable greater flexibility to capture the anisotropic yield and flow behavior of metals using less complex functions than those needed under associated flow to achieve that same level of fidelity to experiment, and on the use of stress-based metrics to more reliably predict necking limits under complex conditions of non-linear forming. The paper discusses motivating factors and benefits in favor of both associated and non-associated flow models for metal forming, including experimental, theoretical, and practical aspects. This review is followed by a discussion of the topic of the forming limits, the limitations of strain analysis, the evidence in favor of stress analysis, the effects of curvature, bending/unbending cycles, triaxial stress conditions, and the motivation for the development of a new type of forming limit diagram based on the effective plastic strain or equivalent plastic work in combination with a directional parameter that accounts for the current stress condition.
Automated support for experience-based software management
NASA Technical Reports Server (NTRS)
Valett, Jon D.
1992-01-01
To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also, the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions the tool utilizes a vast corporate memory that includes a data base of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed for all software development organizations.
Griffith, Michael B; Lazorchak, James M; Herlihy, Alan T
2004-07-01
If bioassessments are to help diagnose the specific environmental stressors affecting streams, a better understanding is needed of the relationships between community metrics and ambient criteria or ambient bioassays. However, this relationship is not simple, because metrics assess responses at the community level of biological organization, while ambient criteria and ambient bioassays assess or are based on responses at the individual level. For metals, the relationship is further complicated by the influence of other chemical variables, such as hardness, on their bioavailability and toxicity. In 1993 and 1994, the U.S. Environmental Protection Agency (U.S. EPA) conducted a Regional Environmental Monitoring and Assessment Program (REMAP) survey on wadeable streams in Colorado's (USA) Southern Rockies Ecoregion. In this ecoregion, mining over the past century has resulted in metals contamination of streams. The surveys collected data on fish and macroinvertebrate assemblages, physical habitat, and sediment and water chemistry and toxicity. These data provide a framework for assessing diagnostic community metrics for specific environmental stressors. We characterized streams as metals-affected based on exceedance of hardness-adjusted criteria for cadmium, copper, lead, and zinc in water; on water toxicity tests (48-h Pimephales promelas and Ceriodaphnia dubia survival); on exceedance of sediment threshold effect levels (TELs); or on sediment toxicity tests (7-d Hyalella azteca survival and growth). Macroinvertebrate and fish metrics were compared among affected and unaffected sites to identify metrics sensitive to metals. Several macroinvertebrate metrics, particularly richness metrics, were lower in affected streams, while other metrics were not; this reflects the sensitivity of the individual metrics to metals effects. Fish metrics were less sensitive to metals because of the low diversity of fish in these streams.
Metrics of Justice. A Sundial's Nomological Figuration.
Behrmann, Carolin
2015-01-01
This paper examines a polyhedral dial from the British Museum made by the instrument maker Ulrich Schniep, and discusses the status of multifunctional scientific instruments. It discerns a multifaceted iconic meaning by considering different dimensions such as scientific functionality (astronomy), the complex allegorical figure of Justice (iconography), the representation of the sovereign (politics), and the court and Kunstkammer of Albrecht V of Bavaria. As a numen mixtum, the figure of "Justicia" touches different fields that go far beyond pure astronomical measurement and represents the power of the ruler as well as the rules of economic justice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.D.; Lau, E.L.; Turyshev, S.G.
Radio metric data from the Pioneer 10/11, Galileo, and Ulysses spacecraft indicate an apparent anomalous, constant acceleration acting on the spacecraft with a magnitude of approximately 8.5 × 10⁻⁸ cm/s², directed towards the Sun. Two independent codes and physical strategies have been used to analyze the data. A number of potential causes have been ruled out. We discuss future kinematic tests and possible origins of the signal. © 1998 The American Physical Society
Riato, Luisa; Leira, Manel; Della Bella, Valentina; Oberholster, Paul J
2018-01-15
Acid mine drainage (AMD) from coal mining in the Mpumalanga Highveld region of South Africa has caused severe chemical and biological degradation of aquatic habitats, specifically depressional wetlands, as mines use these wetlands for storage of AMD. Diatom-based multimetric indices (MMIs) to assess wetland condition have mostly been developed to assess agricultural and urban land use impacts. No diatom MMI of wetland condition has been developed to assess AMD impacts related to mining activities. Previous approaches to diatom-based MMI development in wetlands have not accounted for natural variability. Natural variability among depressional wetlands may influence the accuracy of MMIs. Epiphytic diatom MMIs sensitive to AMD were developed for a range of depressional wetland types to account for natural variation in biological metrics. For this, we classified wetland types based on diatom typologies. A range of 4-15 final metrics were selected from a pool of ~140 candidate metrics to develop the MMIs based on their: (1) broad range, (2) high separation power and (3) low correlation among metrics. Final metrics were selected from three categories: similarity to reference sites, functional groups, and taxonomic composition, which represent different aspects of diatom assemblage structure and function. MMI performances were evaluated according to their precision in distinguishing reference sites, responsiveness to discriminate reference and disturbed sites, sensitivity to human disturbances and relevancy to AMD-related stressors. Each MMI showed excellent discriminatory power, whether or not it accounted for natural variation. However, accounting for variation by grouping sites based on diatom typologies improved overall performance of MMIs. Our study highlights the usefulness of diatom-based metrics and provides a model for the biological assessment of depressional wetland condition in South Africa and elsewhere. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, which means the ɛ-NSGAII parameter sets have better forecasting accuracy; (3) the parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and the parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Probability of Loss of Crew Achievability Studies for NASA's Exploration Systems Development
NASA Technical Reports Server (NTRS)
Boyer, Roger L.; Bigler, Mark A.; Rogers, James H.
2015-01-01
Over the last few years, NASA has been evaluating various vehicle designs for multiple proposed design reference missions (DRM) beyond low Earth orbit in support of its Exploration Systems Development (ESD) programs. This paper addresses several of the proposed missions and the analysis techniques used to assess the key risk metric, probability of loss of crew (LOC). Probability of LOC is a metric used to assess the safety risk as well as a design requirement. These assessments or studies were categorized as LOC achievability studies to help inform NASA management as to what "ball park" estimates of probability of LOC could be achieved for each DRM and were eventually used to establish the corresponding LOC requirements. Given that details of the vehicles and mission are not well known at this time, the ground rules, assumptions, and consistency across the programs become the important basis of the assessments as well as for the decision makers to understand.
New Objective Refraction Metric Based on Sphere Fitting to the Wavefront.
Jaskulski, Mateusz; Martínez-Finkelshtein, Andreí; López-Gil, Norberto
2017-01-01
To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, topt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. For pupil radii r0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D); however, for r0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, while the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size.
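For context, the traditional wavefront refraction that the MTR is compared against can be written directly from the low-order Zernike coefficients; a common paraxial form is sketched below. The MTR itself depends on the empirically fitted distance topt and is not reproduced here, and the optional spherical-aberration term is one common variant of the paraxial formula rather than a universal convention.

    import math

    def paraxial_spherical_equivalent(c20_um, pupil_radius_mm, c40_um=0.0):
        """Conventional wavefront-based spherical equivalent (diopters).

        c20_um: Zernike defocus coefficient Z(2,0) in micrometers
        c40_um: primary spherical aberration Z(4,0) in micrometers (optional term)
        pupil_radius_mm: pupil radius over which the coefficients were fitted

        This is the traditional paraxial formula the abstract contrasts with the
        proposed MTR metric; it is not the MTR formula.
        """
        r2 = pupil_radius_mm ** 2
        return (-4.0 * math.sqrt(3.0) * c20_um + 12.0 * math.sqrt(5.0) * c40_um) / r2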
Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K
2015-10-01
Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
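As one concrete example of these metrics, the alpha-lambda test checks whether the prediction made partway through the prognostic window falls within an accuracy cone around the true remaining useful life (RUL). The point-estimate sketch below uses alpha = 0.2 and lambda = 0.5 as illustrative defaults; the enhanced form discussed in the paper evaluates the overlap of the full prediction distribution with these bounds rather than a single point.

    def alpha_lambda_pass(t_pred, rul_pred, t_start, t_eol, alpha=0.2, lam=0.5):
        """Alpha-lambda check: is the RUL prediction made at the time fraction
        lambda of the prognostic window within +/- alpha of the true RUL?

        t_pred, rul_pred: sequences of prediction times and predicted RULs
        t_start: time of the first prediction; t_eol: true end-of-life time
        """
        t_lambda = t_start + lam * (t_eol - t_start)
        # take the prediction made closest to t_lambda
        i = min(range(len(t_pred)), key=lambda k: abs(t_pred[k] - t_lambda))
        true_rul = t_eol - t_pred[i]
        return (1 - alpha) * true_rul <= rul_pred[i] <= (1 + alpha) * true_rul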
Comparison of Human Exploration Architecture and Campaign Approaches
NASA Technical Reports Server (NTRS)
Goodliff, Kandyce; Cirillo, William; Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary
2015-01-01
As part of an overall focus on space exploration, the National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). In addition, various external organizations are studying options for beyond-LEO exploration. Recent studies include NASA's Evolvable Mars Campaign and Design Reference Architecture (DRA) 5.0; JPL's Minimal Mars Architecture; the Inspiration Mars mission; the Mars One campaign; and the Global Exploration Roadmap (GER). Each of these potential exploration constructs applies unique methods, architectures, and philosophies for human exploration. It is beneficial to compare potential approaches in order to better understand the range of options available for exploration. Since most of these studies were conducted independently, the approaches, ground rules, and assumptions used to conduct the analysis differ. In addition, the outputs and metrics presented for each construct differ substantially. This paper will describe the results of an effort to compare and contrast the results of these different studies under a common set of metrics. The paper will first present a summary of each of the proposed constructs, including a description of the overall approach and philosophy for exploration. Utilizing a common set of metrics for comparison, the paper will present the results of an evaluation of the potential benefits, critical challenges, and uncertainties associated with each construct. The analysis framework will include a detailed evaluation of key characteristics of each construct. These will include but are not limited to: a description of the technology and capability developments required to enable the construct and the uncertainties associated with these developments; an analysis of significant operational and programmatic risks associated with that construct; and an evaluation of the extent to which exploration is enabled by the construct, including the destinations visited and the exploration capabilities provided at those destinations. Based upon the comparison of constructs, the paper will identify trends and lessons learned across all of the candidate studies.
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diamant, A; Ybarra, N; Seuntjens, J
2016-06-15
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting said patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in other literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible to accurately model the prognosis of an individual lung SBRT patient's treatment.
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e., complicated patterns with lower relative risk than their corresponding simpler forms. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm, and we propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better-quality risk patterns than a decision tree approach; the decision tree method is not designed for such applications and is inadequate for pattern exploration. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does, and it is more efficient than an association rule mining method. A real-world case study shows that the method reveals interesting risk patterns to medical practitioners. The proposed method is an efficient approach to exploring risk patterns: it quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory studies on large medical data sets to generate and refine hypotheses, and for designing medical surveillance systems.
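Relative risk for a pattern is the outcome rate among records matching the pattern divided by the rate among the rest of the cohort, and the optimal-set pruning keeps a pattern only if no simpler sub-pattern already achieves at least the same relative risk. A minimal sketch with invented counts:

    def relative_risk(matching_cases, matching_total, other_cases, other_total):
        """Relative risk of the outcome for records matching a pattern
        versus the rest of the cohort."""
        risk_matching = matching_cases / matching_total
        risk_other = other_cases / other_total
        return risk_matching / risk_other

    # A pattern is kept in the optimal set only if no simpler sub-pattern already
    # achieves at least the same relative risk (the pruning idea in the paper).
    print(relative_risk(30, 100, 20, 400))   # -> 6.0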
A heuristic way of obtaining the Kerr metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enderlein, J.
1997-09-01
An intuitive, straightforward way of finding the metric of a rotating black hole is presented, based on the algebra of differential forms. The representation obtained for the metric displays a simplicity which is not obvious in the usual Boyer–Lindquist coordinates. © 1997 American Association of Physics Teachers.
NASA Astrophysics Data System (ADS)
Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René
2008-03-01
We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
Empirical Evaluation of Hunk Metrics as Bug Predictors
NASA Astrophysics Data System (ADS)
Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz
Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine-learning classifier. Hunk metrics are used to train the classifier and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy hunk precision and 77% buggy hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
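A minimal sketch of the modeling step, assuming scikit-learn; the synthetic data stands in for hunk metric vectors and buggy/bug-free labels, and the paper's exact feature set and tuning are not reproduced here.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: one row of hunk metrics per hunk, label 1 = buggy, 0 = bug-free.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print(accuracy_score(y_test, pred), precision_score(y_test, pred), recall_score(y_test, pred))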
Yang, Shiying; Yang, Siyu; Kraslawski, Andrzej; Qian, Yu
2013-12-17
Ecologically based life cycle assessment (Eco-LCA) is an appealing approach for evaluating the resource utilization and environmental impacts of the process industries at an ecological scale. However, the aggregated metrics of Eco-LCA suffer from some drawbacks: the environmental impact metric has limited applicability; the resource utilization metric ignores indirect consumption; the renewability metric fails to address the quantitative distinction of resource availability; the productivity metric seems self-contradictory. In this paper, the existing Eco-LCA metrics are revised and extended for sustainability assessment of energy and chemical processes. A new Eco-LCA metrics system is proposed, including four independent dimensions: environmental impact, resource utilization, resource availability, and economic effectiveness. An illustrative example comparing a gas boiler and a solar boiler process provides insight into the features of the proposed approach.
Application of Domain Knowledge to Software Quality Assurance
NASA Technical Reports Server (NTRS)
Wild, Christian W.
1997-01-01
This work focused on capturing, using, and evolving a qualitative decision support structure across the life cycle of a project. The particular application of this study was towards business process reengineering and the representation of the business process in a set of Business Rules (BR). In this work, we defined a decision model which captured the qualitative decision deliberation process. It represented arguments both for and against proposed alternatives to a problem. It was felt that the subjective nature of many critical business policy decisions required a qualitative modeling approach similar to that of Lee and Mylopoulos. While previous work was limited almost exclusively to the decision capture phase, which occurs early in the project life cycle, we investigated the use of such a model during the later stages as well. One of our significant developments was the use of the decision model during the operational phase of a project. By operational phase, we mean the phase in which the system or set of policies which were earlier decided are deployed and put into practice. By making the decision model available to operational decision makers, they would have access to the arguments pro and con for a variety of actions and could thus make a more informed decision which balances the often conflicting criteria by which the value of an action is measured. We also developed the concept of a 'monitored decision' in which metrics of performance were identified during the decision making process and used to evaluate the quality of that decision. It is important to monitor those decisions that seem at highest risk of not meeting their stated objectives. Operational decisions are also potentially high risk decisions. Finally, we investigated the use of performance metrics for monitored decisions and audit logs of operational decisions in order to feed an evolutionary phase of the life cycle. During evolution, decisions are revisited, assumptions verified or refuted, and possible reassessments resulting in new policy are made. In this regard we implemented a machine learning algorithm which automatically defined business rules based on expert assessment of the quality of operational decisions as recorded during deployment.
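To make the 'monitored decision' idea concrete, a small illustrative data structure (not from the report) pairing a decision's pro/con arguments with the performance metrics and targets chosen at decision time, so the operational and evolutionary phases can check whether the decision is meeting its stated objectives.

from dataclasses import dataclass, field

@dataclass
class MonitoredDecision:
    alternative: str
    arguments_for: list = field(default_factory=list)
    arguments_against: list = field(default_factory=list)
    targets: dict = field(default_factory=dict)    # metric name -> objective value set at decision time
    observed: dict = field(default_factory=dict)   # metric name -> value logged during operation

    def at_risk(self):
        # Flag the decision for re-evaluation if any monitored metric misses its target.
        return any(self.observed.get(m, float("-inf")) < t for m, t in self.targets.items())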
NASA Astrophysics Data System (ADS)
Buldyreva, Jeanna
2013-06-01
Reliable modeling of radiative transfer in planetary atmospheres requires accounting for collisional line mixing effects in the regions of closely spaced vibrotational lines as well as in the spectral wings. Because of the prohibitive CPU cost of calculations from ab initio potential energy surfaces (when available), the relaxation matrix describing the influence of collisions is usually built by dynamical scaling laws, such as the Energy-Corrected Sudden (ECS) law. Theoretical approaches currently used for calculation of absorption near the band center are based on the impact approximation (Markovian collisions without memory effects), and wings are modeled via introducing some empirical parameters [1,2]. Operating with the traditional non-symmetric metric in the Liouville space, these approaches need corrections of the ECS-modeled relaxation matrix elements ("relaxation times" and a "renormalization procedure") in order to ensure the fundamental relations of detailed balance and the sum rules. We present an extension to the infrared absorption case of the non-Markovian ECS-type approach previously developed for rototranslational Raman scattering spectra of linear molecules [3]. Owing to the specific choice of a symmetrized metric in the Liouville space, the relaxation matrix is corrected for initial bath-molecule correlations and satisfies non-Markovian sum rules and detailed balance. A few standard ECS parameters determined by fitting to experimental linewidths of the isotropic Q-branch enable (i) retrieval of these isolated-line parameters for other spectroscopies (IR absorption and anisotropic Raman scattering) and (ii) reproduction of experimental intensities of these spectra. Besides including vibrational angular momenta in the IR bending shapes, Coriolis effects are also accounted for. The efficiency of the method is demonstrated on OCS-He and CO_2-CO_2 spectra up to 300 and 60 atm, respectively. [1] F. Niro, C. Boulet, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 88, 483 (2004). [2] H. Tran, C. Boulet, S. Stefani, M. Snels, and G. Piccioni, J. Quant. Spectrosc. Radiat. Transf. 112, 925 (2011). [3] J. Buldyreva and L. Bonamy, Phys. Rev. A 60, 370-376 (1999).
Metrication report to the Congress
NASA Technical Reports Server (NTRS)
1989-01-01
The major NASA metrication activity of 1988 concerned the Space Station. Although the metric system was the baseline measurement system for preliminary design studies, solicitations for final design and development of Space Station Freedom requested use of the inch-pound system because of concerns with cost impact and potential safety hazards. Under that policy, however, use of the metric system would be permitted through waivers where its use was appropriate. Late in 1987, several Department of Defense decisions were made to increase commitment to the metric system, thereby broadening the potential base of metric involvement in U.S. industry. A re-evaluation of the Space Station Freedom units-of-measure policy was, therefore, initiated in January 1988.
Effects of metric change on safety in the workplace for selected occupations
NASA Astrophysics Data System (ADS)
Lefande, J. M.; Pokorney, J. L.
1982-04-01
The study assesses the potential safety issues of metric conversion in the workplace. A purposive sample of 35 occupations, selected on the basis of injury and illness indexes, was assessed. After an analysis of workforce population, hazards, and the measurement sensitivity of the occupations, jobs were analyzed by industrial hygienists, safety engineers, and academics to identify potential safety hazards. The study's major findings were as follows: no metric hazard experience was identified. Increased exposure might occur while particular jobs and their tasks are undergoing the transition from customary to metric measurement. Well-planned metric change programs reduce hazard potential. Metric safety issues remain unresolved in the aviation industry.
The data quality analyzer: a quality control program for seismic data
Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam
2015-01-01
The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
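As a sketch of the aggregation idea, a weighted roll-up of user-selected metric scores into a station grade; the metric names, scores, and equal default weighting below are illustrative assumptions, not the DQA's actual scheme.

def station_grade(metric_scores, weights=None):
    # metric_scores: dict of metric name -> score on a 0-100 scale for one station
    weights = weights or {m: 1.0 for m in metric_scores}
    total = sum(weights[m] for m in metric_scores)
    return sum(weights[m] * s for m, s in metric_scores.items()) / total

print(station_grade({"availability": 99.2, "timing_quality": 95.0, "noise_model_deviation": 88.5}))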
Emergent rules for codon choice elucidated by editing rare arginine codons in Escherichia coli
Napolitano, Michael G.; Landon, Matthieu; Gregg, Christopher J.; Lajoie, Marc J.; Govindarajan, Lakshmi; Mosberg, Joshua A.; Kuznetsov, Gleb; Goodman, Daniel B.; Vargas-Rodriguez, Oscar; Isaacs, Farren J.; Söll, Dieter; Church, George M.
2016-01-01
The degeneracy of the genetic code allows nucleic acids to encode amino acid identity as well as noncoding information for gene regulation and genome maintenance. The rare arginine codons AGA and AGG (AGR) present a case study in codon choice, with AGRs encoding important transcriptional and translational properties distinct from the other synonymous alternatives (CGN). We created a strain of Escherichia coli with all 123 instances of AGR codons removed from all essential genes. We readily replaced 110 AGR codons with the synonymous CGU codons, but the remaining 13 “recalcitrant” AGRs required diversification to identify viable alternatives. Successful replacement codons tended to conserve local ribosomal binding site-like motifs and local mRNA secondary structure, sometimes at the expense of amino acid identity. Based on these observations, we empirically defined metrics for a multidimensional “safe replacement zone” (SRZ) within which alternative codons are more likely to be viable. To evaluate synonymous and nonsynonymous alternatives to essential AGRs further, we implemented a CRISPR/Cas9-based method to deplete a diversified population of a wild-type allele, allowing us to evaluate exhaustively the fitness impact of all 64 codon alternatives. Using this method, we confirmed the relevance of the SRZ by tracking codon fitness over time in 14 different genes, finding that codons that fall outside the SRZ are rapidly depleted from a growing population. Our unbiased and systematic strategy for identifying unpredicted design flaws in synthetic genomes and for elucidating rules governing codon choice will be crucial for designing genomes exhibiting radically altered genetic codes. PMID:27601680
Raman, Ritu; Mitchell, Marlon; Perez-Pinera, Pablo; Bashir, Rashid; DeStefano, Lizanne
2016-01-01
The rapidly evolving discipline of biological and biomedical engineering requires adaptive instructional approaches that teach students to target and solve multi-pronged and ill-structured problems at the cutting edge of scientific research. Here we present a modular approach to designing a lab-based course in the emerging field of biofabrication and biological design, leading to a final capstone design project that requires students to formulate and test a hypothesis using the scientific method. Students were assessed on a range of metrics designed to evaluate the format of the course, the efficacy of the format for teaching new topics and concepts, and the depth of the contribution this course made to students' training for biological engineering careers. The evaluation showed that the problem-based format of the course was well suited to teaching students how to use the scientific method to investigate and uncover the fundamental biological design rules that govern the field of biofabrication. We show that this approach is an efficient and effective method of translating emergent scientific principles from the lab bench to the classroom and training the next generation of biological and biomedical engineers for careers as researchers and industry practitioners.
A hybrid clustering and classification approach for predicting crash injury severity on rural roads.
Hasheminejad, Seyed Hessam-Allah; Zahedi, Mohsen; Hasheminejad, Seyed Mohammad Hossein
2018-03-01
As a threat for transportation system, traffic crashes have a wide range of social consequences for governments. Traffic crashes are increasing in developing countries and Iran as a developing country is not immune from this risk. There are several researches in the literature to predict traffic crash severity based on artificial neural networks (ANNs), support vector machines and decision trees. This paper attempts to investigate the crash injury severity of rural roads by using a hybrid clustering and classification approach to compare the performance of classification algorithms before and after applying the clustering. In this paper, a novel rule-based genetic algorithm (GA) is proposed to predict crash injury severity, which is evaluated by performance criteria in comparison with classification algorithms like ANN. The results obtained from analysis of 13,673 crashes (5600 property damage, 778 fatal crashes, 4690 slight injuries and 2605 severe injuries) on rural roads in Tehran Province of Iran during 2011-2013 revealed that the proposed GA method outperforms other classification algorithms based on classification metrics like precision (86%), recall (88%) and accuracy (87%). Moreover, the proposed GA method has the highest level of interpretation, is easy to understand and provides feedback to analysts.
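A compact sketch of the hybrid cluster-then-classify idea, assuming scikit-learn and numpy arrays, with a stand-in decision tree in place of the paper's rule-based GA: crashes are clustered first and one classifier is trained per cluster before scoring precision, recall and accuracy on held-out data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def hybrid_fit_predict(X_train, y_train, X_test, n_clusters=3):
    # Step 1: cluster the crash records on their feature vectors.
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(X_train)
    # Step 2: train one classifier per cluster (a decision tree here, as a placeholder).
    models = {c: DecisionTreeClassifier(random_state=0).fit(X_train[km.labels_ == c],
                                                            y_train[km.labels_ == c])
              for c in range(n_clusters)}
    # Step 3: route each test record to its cluster's classifier.
    clusters = km.predict(X_test)
    return np.array([models[c].predict(x.reshape(1, -1))[0] for c, x in zip(clusters, X_test)])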
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.
Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery
2017-01-01
The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported ad effectiveness, as well as the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures could actually be used for predicting views for advertising on YouTube.
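A minimal sketch of the prediction step under stated assumptions: a small scikit-learn neural network trained on synthetic stand-ins for per-ad neuroscience feature vectors and binary recall labels. The study's actual network architecture and feature set are not reproduced here.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: rows of neuroscience-based metrics per ad/viewer, label 1 = ad recalled.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
recall_probability = clf.predict_proba(X)[:, 1]  # predicted probability of recall for each ad/viewer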
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... Corporation 12 CFR Parts 324, 325 Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule... 325 RIN 3064-AD97 Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule; Market Risk... the agencies' current capital rules. In this NPR (Advanced Approaches and Market Risk NPR) the...
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
The Effects of Limited Intent Information Availability on Self-Separation in Mixed Operations
NASA Technical Reports Server (NTRS)
Lewis, Timothy A.; Phojanamongkolkij, Nipa; Wing, David J.
2012-01-01
This paper presents the results of a computer simulation of the NASA Autonomous Flight Rules (AFR) concept for airborne self-separation in airspace shared with conventional Instrument Flight Rules (IFR) traffic. This study was designed to determine the impact of varying levels of intent information from IFR aircraft on the performance of AFR conflict detection and resolution. The study used Automatic Dependent Surveillance-Broadcast (ADS-B) to supply IFR intent, but other methods such as an uplink from a ground-based System Wide Information Management (SWIM) network could alternatively supply this information. The independent variables of the study consist of the number of ADS-B trajectory change reports broadcast by IFR aircraft and the time interval between those reports. The conflict detection and resolution metrics include: the number of conflicts and losses of separation, the average conflict warning time, and the amount of time spent in strategic vs. tactical flight modes (i.e., whether the autoflight system was decoupled from the planned route in the Flight Management System in order to respond to a short-notice traffic conflict). The results show a measurable benefit of broadcasting IFR intent vs. relying on state-only broadcasts. The results of this study will inform ongoing separation assurance research and FAA NextGen design decisions for the sharing of trajectory intent information in the National Airspace System.
NASA Astrophysics Data System (ADS)
Malloy, Matt
2013-09-01
A comprehensive survey was sent to merchant and captive mask shops to gather information about the mask industry as an objective assessment of its overall condition. 2013 marks the 12th consecutive year for this process. Historical topics including general mask profile, mask processing, data and write time, yield and yield loss, delivery times, maintenance, and returns were included and new topics were added. Within each category are multiple questions that result in a detailed profile of both the business and technical status of the mask industry. While each year's survey includes minor updates based on feedback from past years and the need to collect additional data on key topics, the bulk of the survey and reporting structure have remained relatively constant. A series of improvements is being phased in beginning in 2013 to add value to a wider audience, while at the same time retaining the historical content required for trend analyses of the traditional metrics. Additions in 2013 include topics such as top challenges, future concerns, and additional details in key aspects of mask making, such as the number of masks per mask set per ground rule, minimum mask resolution shipped, and yield by ground rule. These expansions beyond the historical topics are aimed at identifying common issues, gaps, and needs. They will also provide a better understanding of real-life mask requirements and capabilities for comparison to the International Technology Roadmap for Semiconductors (ITRS).
Inferring the rules of social interaction in migrating caribou.
Torney, Colin J; Lamont, Myles; Debell, Leon; Angohiatok, Ryan J; Leclerc, Lisa-Marie; Berdahl, Andrew M
2018-05-19
Social interactions are a significant factor that influence the decision-making of species ranging from humans to bacteria. In the context of animal migration, social interactions may lead to improved decision-making, greater ability to respond to environmental cues, and the cultural transmission of optimal routes. Despite their significance, the precise nature of social interactions in migrating species remains largely unknown. Here we deploy unmanned aerial systems to collect aerial footage of caribou as they undertake their migration from Victoria Island to mainland Canada. Through a Bayesian analysis of trajectories we reveal the fine-scale interaction rules of migrating caribou and show they are attracted to one another and copy directional choices of neighbours, but do not interact through clearly defined metric or topological interaction ranges. By explicitly considering the role of social information on movement decisions we construct a map of near neighbour influence that quantifies the nature of information flow in these herds. These results will inform more realistic, mechanism-based models of migration in caribou and other social ungulates, leading to better predictions of spatial use patterns and responses to changing environmental conditions. Moreover, we anticipate that the protocol we developed here will be broadly applicable to study social behaviour in a wide range of migratory and non-migratory taxa. This article is part of the theme issue 'Collective movement ecology'. © 2018 The Authors.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees.
Deepak, Akshay; Fernández-Baca, David
2014-01-01
A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.
Newton, Amanda S; Soleimani, Amir; Kirkland, Scott W; Gokiert, Rebecca J
2017-05-01
Specialized instruments to screen and diagnose mental health problems in children and adolescents are not yet standard components of clinical assessments in emergency departments (EDs). We conducted a systematic review to investigate the psychometric properties, accuracy, and performance metrics of instruments used in the ED to identify pediatric mental health and substance use problems. We searched seven electronic databases and the gray literature for psychometric validation studies, diagnostic studies, and cohort studies that assessed any instrument to screen for or diagnose mental illness, emotional or behavioral problems, or substance use disorders. Studies had to include children and adolescents with mental health presentations or positive screens for substance use. Two reviewers independently screened studies for relevance and quality. Diagnostic study quality was assessed with the four QUADAS-2 domains. Psychometric study quality was assessed with published criteria for instrument reliability, validity, and usability. We present a descriptive analysis of the reported psychometric properties and diagnostic performance of instruments for each study. Of the 4,832 references screened, 14 met inclusion criteria. Included studies evaluate 18 instruments for identifying suicide risk (six studies), alcohol use disorders (six studies), mood disorders (one study), and ED decision making (need for assessment, admission; one study). Nine studies include a psychometric focus but quality varies, with no studies fully meeting criteria for reliability, validity, and usability. Seven studies examine diagnostic performance of an instrument, but no study has a low risk of bias for all QUADAS-2 domains. The HEADS-ED instrument has good inter-rater reliability (r = 0.785) for identifying general mental health problems and modest evidence for ruling in patients requiring hospital admission (positive likelihood ratio [LR+] = 6.30). Internal consistency (reliability) varies for instruments to screen for suicide risk (α = 0.46-0.97), and no instruments have both high sensitivity and high specificity. The Ask Suicide-Screening Questions (ASQ) is highly sensitive (98%) and has strong evidence for ruling out risk (negative likelihood ratio [LR-] = 0.04). Among screening instruments for alcohol use disorders, internal consistency is high for the consumption subscale of the Alcohol Use Disorders Identification Test (α = 0.83-0.88) and the Adolescent Drinking Index (α = 0.92). Both instruments also had sound internal validity. Diagnostically, a two-item instrument based on DSM-IV criteria is the most accurate in identifying patients with a disorder (area under the curve = 0.89) and has modest evidence for ruling in and out risk (LR+ = 8.80, LR- = 0.13). From available evidence, we recommend that ED clinicians use 1) the HEADS-ED to rule in ED admission among pediatric patients with visits for mental health care, 2) the ASQ to rule out suicide risk among pediatric patients with any visit type, and 3) the DSM-IV two-item instrument to rule in/rule out alcohol use disorders among pediatric patients currently using alcohol. These instruments require minimal to no training or time commitment. We also recommend that clinicians become familiar with each instrument's psychometric properties to understand the quality of the evidence base. In this review, however, we identify methodologic limitations in the evidence base. To develop a robust evidence base, additional research is necessary. 
© 2017 by the Society for Academic Emergency Medicine.
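For reference, the likelihood ratios quoted above follow the standard definitions in terms of sensitivity and specificity; the sketch below applies them, with the specificity value chosen only for illustration (it is not reported in the abstract).

def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive screen raises the odds of the condition
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative screen lowers the odds
    return lr_pos, lr_neg

print(likelihood_ratios(0.98, 0.80))  # ASQ-like 98% sensitivity with an assumed 80% specificity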
Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules
NASA Astrophysics Data System (ADS)
Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.
Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.
Research on cardiovascular disease prediction based on distance metric learning
NASA Astrophysics Data System (ADS)
Ni, Zhuang; Liu, Kui; Kang, Guixia
2018-04-01
Distance metric learning has been widely applied to medical diagnosis and has exhibited its strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method that treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm combining a cosine metric with LMNN, named COS-SUBLMNN, which pays more attention to local features of the data to overcome this shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, demonstrating the effectiveness of the COS-SUBLMNN-based risk prediction model for CVDs.
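A minimal baseline sketch, assuming scikit-learn: a k-nearest-neighbour classifier using the cosine distance on synthetic stand-ins for patient feature vectors. The paper's COS-SUBLMNN additionally learns a local large-margin transform, which is not reproduced here.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for patient feature vectors and CVD outcome labels.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, metric="cosine").fit(X_train, y_train)
print(knn.score(X_test, y_test))  # classification accuracy under the cosine metric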
Some New Sets of Sequences of Fuzzy Numbers with Respect to the Partial Metric
Ozluk, Muharrem
2015-01-01
In this paper, we essentially deal with Köthe-Toeplitz duals of fuzzy level sets defined using a partial metric. Since the utilization of Zadeh's extension principle is quite difficult in practice, we prefer the idea of level sets in order to construct some classical notions. In this paper, we present the sets of bounded, convergent, and null series and the set of sequences of bounded variation of fuzzy level sets, based on the partial metric. We examine the relationships between these sets and their classical forms and give some properties including definitions, propositions, and various kinds of partial metric spaces of fuzzy level sets. Furthermore, we study some of their properties like completeness and duality. Finally, we obtain the Köthe-Toeplitz duals of fuzzy level sets with respect to the partial metric based on a partial ordering. PMID:25695102
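For readers unfamiliar with the notion, the standard axioms of a partial metric p on a set X (the structure the paper builds on) can be stated as follows; the only difference from an ordinary metric is that self-distances p(x,x) need not vanish:

\begin{align*}
&x = y \iff p(x,x) = p(x,y) = p(y,y),\\
&p(x,x) \le p(x,y),\\
&p(x,y) = p(y,x),\\
&p(x,z) \le p(x,y) + p(y,z) - p(y,y).
\end{align*}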
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
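A small illustrative sketch of the comparison the abstract describes, assuming a simple "name=value" line format for the metric files; the patent's predefined format and metric names are not specified here, so both are placeholders.

def read_metric(path, name):
    # Scan an output file for a line of the assumed form "name=value" and return the value.
    with open(path) as f:
        for line in f:
            if line.startswith(name + "="):
                return float(line.split("=", 1)[1])
    raise KeyError(f"metric {name!r} not found in {path}")

def compare_runs(baseline_file, current_file, name="elapsed_seconds"):
    base, cur = read_metric(baseline_file, name), read_metric(current_file, name)
    print(f"{name}: baseline={base} current={cur} delta={cur - base:+.3f}")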
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models
NASA Astrophysics Data System (ADS)
Wong, K.; Ellul, C.
2016-10-01
Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
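To make the listed metrics concrete, a small sketch computing them over a hypothetical list of buildings, each described by its vertex/edge/face counts and 2D footprint area; the field names and input structure are illustrative, not tied to any particular CityGML toolkit.

def dataset_metrics(buildings):
    # buildings: list of dicts with 'vertices', 'edges', 'faces', 'footprint_area' per building
    n = len(buildings)
    total_faces = sum(b["faces"] for b in buildings)
    return {
        "mean_vertices": sum(b["vertices"] for b in buildings) / n,
        "mean_edges": sum(b["edges"] for b in buildings) / n,
        "mean_faces": total_faces / n,
        "vertex_face_ratio": sum(b["vertices"] for b in buildings) / total_faces,
        "min_footprint_area": min(b["footprint_area"] for b in buildings),
    }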
New exposure-based metric approach for evaluating O3 risk to North American aspen forests
K.E. Percy; M. Nosal; W. Heilman; T. Dann; J. Sober; A.H. Legge; D.F. Karnosky
2007-01-01
The United States and Canada currently use exposure-based metrics to protect vegetation from O3. Using 5 years (1999-2003) of co-measured O3, meteorology and growth response, we have developed exposure-based regression models that predict Populus tremuloides growth change within the North American ambient...
Karakolis, Thomas; Bhan, Shivam; Crotin, Ryan L
2013-08-01
In Major League Baseball (MLB), games pitched, total innings pitched, total pitches thrown, innings pitched per game, and pitches thrown per game are used to measure cumulative work. Often, pitchers are allocated limits, based on pitches thrown per game and total innings pitched in a season, in an attempt to prevent future injuries. To date, the efficacy in predicting injuries from these cumulative work metrics remains in question. It was hypothesized that the cumulative work metrics would be a significant predictor for future injury in MLB pitchers. Correlations between cumulative work for pitchers during 2002-07 and injury days in the following seasons were examined using regression analyses to test this hypothesis. Each metric was then "binned" into smaller cohorts to examine trends in the associated risk of injury for each cohort. During the study time period, 27% of pitchers were injured after a season in which they pitched. Although some interesting trends were noticed during the binning process, based on the regression analyses, it was found that no cumulative work metric was a significant predictor for future injury. It was concluded that management of a pitcher's playing schedule based on these cumulative work metrics alone could not be an effective means of preventing injury. These findings indicate that an integrated approach to injury prevention is required. This approach will likely involve advanced cumulative work metrics and biomechanical assessment.
On the use of hidden Markov models for gaze pattern modeling
NASA Astrophysics Data System (ADS)
Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph
2016-05-01
Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions on the screen are not as important as some other portions on the screen. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI based metrics: firstly, the definition of AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen in the presence of intermittent casual gazing that is modeled as random gazes on the screen.
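A brief sketch of the modeling idea under stated assumptions, using the hmmlearn package: gaze samples are treated as 2D Gaussian observations, so each hidden state's mean acts as an automatically discovered AOI centre and its covariance reflects gaze concentration. The synthetic gaze coordinates and the number of states are placeholders chosen for illustration.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic stand-in for (x, y) gaze samples on a screen; real data would come from an eye tracker.
gaze_xy = np.random.default_rng(0).normal(loc=[640, 360], scale=80.0, size=(500, 2))

hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(gaze_xy)
aoi_centres = hmm.means_   # estimated AOI locations on the screen
aoi_spread = hmm.covars_   # gaze concentration (covariance) around each AOI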
Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women
Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.
2015-01-01
Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007
Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness
NASA Technical Reports Server (NTRS)
Staats, Matt; Whalen, Michael W.; Heindahl, Mats P. E.; Rajan, Ajitha
2010-01-01
In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties. In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size?, and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous requirements coverage metric? Our results indicate (1) only one coverage metric proposed -- Unique First Cause (UFC) coverage -- is sufficiently rigorous to ensure test suites satisfying the metric outperform randomly generated test suites of similar size and (2) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous coverage metrics.
A comparison of metrics to evaluate the effects of hydro-facility passage stressors on fish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colotelo, Alison H.; Goldman, Amy E.; Wagner, Katie A.
Hydropower is the most common form of renewable energy, and countries worldwide are considering expanding hydropower to new areas. One of the challenges of hydropower deployment is mitigation of the environmental impacts including water quality, habitat alterations, and ecosystem connectivity. For fish species that inhabit river systems with hydropower facilities, passage through the facility to access spawning and rearing habitats can be particularly challenging. Fish moving downstream through a hydro-facility can be exposed to a number of stressors (e.g., rapid decompression, shear forces, blade strike and collision, and turbulence), which can all affect fish survival in direct and indirect ways. Many studies have investigated the effects of hydro-turbine passage on fish; however, the comparability among studies is limited by variation in the metrics and biological endpoints used. Future studies investigating the effects of hydro-turbine passage should focus on using metrics and endpoints that are easily comparable. This review summarizes four categories of metrics that are used in fisheries research and have application to hydro-turbine passage (i.e., mortality, injury, molecular metrics, behavior) and evaluates them based on several criteria (i.e., resources needed, invasiveness, comparability among stressors and species, and diagnostic properties). Additionally, these comparisons are put into context of study setting (i.e., laboratory vs. field). Overall, injury and molecular metrics are ideal for studies in which there is a need to understand the mechanisms of effect, whereas behavior and mortality metrics provide information on the whole body response of the fish. The study setting strongly influences the comparability among studies. In laboratory-based studies, stressors can be controlled by both type and magnitude, allowing for easy comparisons among studies. In contrast, field studies expose fish to realistic passage environments but the comparability is limited. Based on these results, future studies, whether lab or field-based, should focus on metrics that relate to mortality for ease of comparison.
A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Watson, Andrew
2015-01-01
We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
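A compact sketch of the flat WkNN estimator described above, assuming numpy arrays: the k reference points with the most similar stored fingerprints (plain Euclidean RSS distance here, one of several metrics the paper compares) contribute to an inverse-distance-weighted position estimate.

import numpy as np

def wknn_position(rss_measured, rp_fingerprints, rp_positions, k=4, eps=1e-9):
    # rss_measured: measured RSS vector; rp_fingerprints: (n_rp, n_ap) stored RSS fingerprints;
    # rp_positions: (n_rp, 2 or 3) known reference point coordinates.
    d = np.linalg.norm(rp_fingerprints - rss_measured, axis=1)  # RSS distance to every RP
    idx = np.argsort(d)[:k]                                     # k most similar RPs
    w = 1.0 / (d[idx] + eps)                                    # inverse-distance weights
    return (w[:, None] * rp_positions[idx]).sum(axis=0) / w.sum()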
Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R
2008-09-01
Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates (job title, counts of spray painting events during the day, and counts of spray and bystander exposure events) and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (µg/m³). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for a 10% drop in FEV1 from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 µg/m³. When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
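A minimal sketch of a daily time-weighted average computation, with a respirator workplace protection factor applied to tasks performed while wearing a respirator; the task checklist fields and the way the protection factor enters are assumptions for illustration, not the study's exact algorithm.

def daily_twa(tasks):
    # tasks: list of dicts with 'conc' (µg/m³), 'minutes', and optional 'protection_factor'
    corrected = sum(t["conc"] / t.get("protection_factor", 1.0) * t["minutes"] for t in tasks)
    return corrected / sum(t["minutes"] for t in tasks)

# Illustrative task checklist for one day: 30 min spraying with a respirator, 450 min other work.
print(daily_twa([{"conc": 120.0, "minutes": 30, "protection_factor": 10.0},
                 {"conc": 2.0, "minutes": 450}]))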
Automatic evidence quality prediction to support evidence-based decision making.
Sarker, Abeed; Mollá, Diego; Paris, Cécile
2015-06-01
Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance. Our overall classification approach and evaluation technique are also highly portable and can be used for various evidence grading scales. Copyright © 2015 Elsevier B.V. All rights reserved.
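The abstract does not spell out the AED formula; a plausible reading, assumed here for illustration, is the mean ordinal distance between predicted and true quality grades (e.g., grades A/B/C mapped to 0/1/2).

```python
def average_error_distance(true_grades, pred_grades, scale=("A", "B", "C")):
    """Mean absolute distance between predicted and true grades on an ordinal scale.

    This is an assumed formulation of AED, included only as an illustration.
    """
    rank = {g: i for i, g in enumerate(scale)}
    pairs = list(zip(true_grades, pred_grades))
    return sum(abs(rank[t] - rank[p]) for t, p in pairs) / len(pairs)

print(average_error_distance(["A", "B", "C", "B"], ["A", "C", "C", "A"]))  # 0.5
```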
Zhang, Wenchao; Zhao, Patrick X
2014-01-01
Extracted ion chromatogram (EIC) extraction and chromatographic peak detection are two important processing procedures in liquid chromatography/mass spectrometry (LC/MS)-based metabolomics data analysis. Most commonly, the LC/MS technique employs electrospray ionization as the ionization method. The EICs from LC/MS data are often noisy and contain high background signals. Furthermore, the chromatographic peak quality varies with respect to its location in the chromatogram and most peaks have zigzag shapes. Therefore, there is a critical need to develop effective metrics for quality evaluation of EICs and chromatographic peaks in LC/MS based metabolomics data analysis. We investigated a comprehensive set of potential quality evaluation metrics for extracted EICs and detected chromatographic peaks. Specifically, for EIC quality evaluation, we analyzed the mass chromatographic quality index (MCQ index) and propose a novel quality evaluation metric, the EIC-related global zigzag index, which is based on an EIC's first order derivatives. For chromatographic peak quality evaluation, we analyzed and compared six metrics: sharpness, Gaussian similarity, signal-to-noise ratio, peak significance level, triangle peak area similarity ratio and the local peak-related local zigzag index. Although the MCQ index is suited for selecting and aligning analyte components, it cannot fairly evaluate EICs with high background signals or those containing only a single peak. Our proposed EIC related global zigzag index is robust enough to evaluate EIC qualities in both scenarios. Of the six peak quality evaluation metrics, the sharpness, peak significance level, and zigzag index outperform the others due to the zigzag nature of LC/MS chromatographic peaks. Furthermore, using several peak quality metrics in combination is more efficient than individual metrics in peak quality evaluation.
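The exact zigzag formula is defined in the paper rather than in this abstract; the sketch below only illustrates the general idea, using the proportion of sign changes in the first-order differences of an EIC as an assumed stand-in for the published index.

```python
import numpy as np

def zigzag_roughness(eic):
    """Illustrative zigzag measure: fraction of consecutive first-order differences
    that change sign. A smooth, single-peaked EIC scores near 0; a noisy EIC scores
    closer to 1. This is not the published EIC-related global zigzag index."""
    d = np.diff(np.asarray(eic, dtype=float))
    d = d[d != 0]                                   # ignore flat segments
    sign_changes = np.sum(np.sign(d[1:]) != np.sign(d[:-1]))
    return sign_changes / max(len(d) - 1, 1)

smooth = [0, 1, 4, 9, 12, 9, 4, 1, 0]
noisy = [0, 3, 1, 4, 2, 5, 1, 3, 0]
print(zigzag_roughness(smooth), zigzag_roughness(noisy))
```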
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
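A minimal sketch of the kind of model described (an ensemble of 30 bagged regression trees mapping 13 site variables to perceived quality); the training data here are random placeholders, and scikit-learn is assumed rather than the authors' actual tooling.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
# Placeholder expert data: 200 hypothetical sites x 13 site variables
# (e.g., shrub cover, native forb richness), with expert quality scores in [0, 100].
X = rng.random((200, 13))
y = 100 * X.mean(axis=1) + rng.normal(0, 5, size=200)

# BaggingRegressor uses a decision tree as its default base estimator,
# so this is an ensemble of 30 bagged regression trees.
model = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)

new_site = rng.random((1, 13))
print("Predicted condition score:", model.predict(new_site)[0])
```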
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
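To make the single-metric weighting scheme described above concrete: with one skill or error score per model, weights proportional to skill (or inversely proportional to error) give the weighted ensemble directly. The numbers below are invented for illustration.

```python
import numpy as np

# Invented example: three models, one error metric (e.g., RMSE against observations)
rmse = np.array([1.2, 0.8, 2.0])
weights = (1.0 / rmse) / (1.0 / rmse).sum()   # inverse-error weighting, normalised

# Model projections of some quantity (e.g., seasonal-mean temperature change)
projections = np.array([2.1, 1.8, 2.6])
print("Arithmetic mean   :", projections.mean())
print("Weighted ensemble :", weights @ projections)
# With several conflicting metrics there is no single obvious weighting; the study
# addresses that trade-off with multi-objective optimization rather than averaging.
```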
An Evaluation of the IntelliMetric[SM] Essay Scoring System
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine
2006-01-01
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
International Standards. U.S. Metric Study Report.
ERIC Educational Resources Information Center
Huntoon, Robert D.; And Others
This first interim report on the feasibility of a United States changeover to a metric system stems from the U.S. Metric Study. It presents a series of conclusions and recommendations, based upon a national survey of the role of SI (Système International) units in international trade and other areas of foreign relations, including the following…
Relationship between Journal-Ranking Metrics for a Multidisciplinary Set of Journals
ERIC Educational Resources Information Center
Perera, Upeksha; Wijewickrema, Manjula
2018-01-01
Ranking of scholarly journals is important to many parties. Studying the relationships among various ranking metrics is key to understanding the significance of one metric based on another. This research investigates the relationship among four major journal-ranking indicators: the impact factor (IF), the Eigenfactor score (ES), the "h."…
Developing image processing meta-algorithms with data mining of multiple metrics.
Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
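A toy sketch of the meta-algorithm idea: run several candidate results (e.g., registrations) through a battery of metrics and pick the one that ranks best overall. The two intensity-based metrics and the rank-averaging rule here are illustrative choices, not those of the paper.

```python
import numpy as np

def rank_by_metric_battery(candidates, reference, metrics):
    """Score each candidate image with every metric (lower is better here),
    average the per-metric ranks, and return the index of the best candidate."""
    scores = np.array([[m(c, reference) for m in metrics] for c in candidates])
    ranks = scores.argsort(axis=0).argsort(axis=0)   # rank per metric (0 = best)
    return int(ranks.mean(axis=1).argmin())

# Illustrative intensity-based metrics (both lower-is-better)
mse = lambda a, b: float(np.mean((a - b) ** 2))
mad = lambda a, b: float(np.mean(np.abs(a - b)))

ref = np.zeros((8, 8))
cands = [ref + 0.5, ref + 0.1, ref + 0.3]
print("Best candidate:", rank_by_metric_battery(cands, ref, [mse, mad]))  # -> 1
```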
A comparison of methods for monitoring photon beam energy constancy.
Gao, Song; Balter, Peter A; Rose, Mark; Simon, William E
2016-11-08
In extension of a previous study, we compared several photon beam energy metrics to determine which was the most sensitive to energy change; in addition to those, we accounted for both the sensitivity of each metric and the uncertainty in determining that metric for both traditional flattening filter (FF) beams (4, 6, 8, and 10 MV) and for flattening filter-free (FFF) beams (6 and 10 MV) on a Varian TrueBeam. We examined changes in these energy metrics when photon energies were changed to ± 5% and ± 10% from their nominal energies: 1) an attenuation-based metric (the percent depth dose at 10 cm depth, PDD(10)) and 2) profile-based metrics, including flatness (Flat) and off-axis ratios (OARs) measured on the orthogonal axes or on the diagonals (diagonal normalized flatness, FDN). Profile-based metrics were measured near dmax and also near 10 cm depth in water (using a 3D scanner) and with an ionization chamber array (ICA). PDD(10) was measured only in water. Changes in PDD, OAR, and FDN were nearly linear with the changes in the bend magnet current (BMI) over the range from -10% to +10% for both FF and FFF beams: a ± 10% change in energy resulted in a ± 1.5% change in PDD(10) for both FF and FFF beams, and changes in OAR and FDN were > 3.0% for FF beams and > 2.2% for FFF beams. The uncertainty in determining PDD(10) was estimated to be 0.15% and that for OAR and FDN about 0.07%. This resulted in minimally detectable changes in energy of 2.5% for PDD(10) and 0.5% for OAR and FDN. We found that the OAR- or FDN-based metrics were the best for detecting energy changes for both FF and FFF beams. The ability of the OAR-based metrics determined with a water scanner to detect energy changes was equivalent to that using an ionization chamber array. We recommend that OAR be measured either on the orthogonal axes or the diagonals, using an ionization chamber array near the depth of maximum dose, as a sensitive and efficient way to confirm stability of photon beam energy. © 2016 The Authors.
Verification of Ensemble Forecasts for the New York City Operations Support Tool
NASA Astrophysics Data System (ADS)
Day, G.; Schaake, J. C.; Thiemann, M.; Draijer, S.; Wang, L.
2012-12-01
The New York City water supply system operated by the Department of Environmental Protection (DEP) serves nine million people. It covers 2,000 square miles of portions of the Catskill, Delaware, and Croton watersheds, and it includes nineteen reservoirs and three controlled lakes. DEP is developing an Operations Support Tool (OST) to support its water supply operations and planning activities. OST includes historical and real-time data, a model of the water supply system complete with operating rules, and lake water quality models developed to evaluate alternatives for managing turbidity in the New York City Catskill reservoirs. OST will enable DEP to manage turbidity in its unfiltered system while satisfying its primary objective of meeting the City's water supply needs, in addition to considering secondary objectives of maintaining ecological flows, supporting fishery and recreation releases, and mitigating downstream flood peaks. The current version of OST relies on statistical forecasts of flows in the system based on recent observed flows. To improve short-term decision making, plans are being made to transition to National Weather Service (NWS) ensemble forecasts based on hydrologic models that account for short-term weather forecast skill, longer-term climate information, as well as the hydrologic state of the watersheds and recent observed flows. To ensure that the ensemble forecasts are unbiased and that the ensemble spread reflects the actual uncertainty of the forecasts, a statistical model has been developed to post-process the NWS ensemble forecasts to account for hydrologic model error as well as any inherent bias and uncertainty in initial model states, meteorological data and forecasts. The post-processor is designed to produce adjusted ensemble forecasts that are consistent with the DEP historical flow sequences that were used to develop the system operating rules. A set of historical hindcasts that is representative of the real-time ensemble forecasts is needed to verify that the post-processed forecasts are unbiased, statistically reliable, and preserve the skill inherent in the "raw" NWS ensemble forecasts. A verification procedure and set of metrics will be presented that provide an objective assessment of ensemble forecasts. The procedure will be applied to both raw ensemble hindcasts and to post-processed ensemble hindcasts. The verification metrics will be used to validate proper functioning of the post-processor and to provide a benchmark for comparison of different types of forecasts. For example, current NWS ensemble forecasts are based on climatology, using each historical year to generate a forecast trace. The NWS Hydrologic Ensemble Forecast System (HEFS) under development will utilize output from both the National Oceanic Atmospheric Administration (NOAA) Global Ensemble Forecast System (GEFS) and the Climate Forecast System (CFS). Incorporating short-term meteorological forecasts and longer-term climate forecast information should provide sharper, more accurate forecasts. Hindcasts from HEFS will enable New York City to generate verification results to validate the new forecasts and further fine-tune system operating rules. Project verification results will be presented for different watersheds across a range of seasons, lead times, and flow levels to assess the quality of the current ensemble forecasts.
Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.
Local adjacency metric dimension of sun graph and stacked book graph
NASA Astrophysics Data System (ADS)
Yulisda Badri, Alifiah; Darmaji
2018-03-01
A graph is a mathematical structure consisting of a non-empty set of vertices and a (possibly empty) set of edges. One of the topics studied in graph theory is the metric dimension. A standard application of the metric dimension is robot navigation on a network: the robot moves from vertex to vertex and must minimize the errors that occur when translating the instructions (codes) obtained from landmark vertices at its location. Each position must receive a distinct instruction (code), and for the robot to move efficiently it must resolve the codes of nearby landmark vertices quickly, which requires those landmarks to lie at minimum distance. When the field is very large, however, the robot may be unable to resolve landmarks that are too far away [6]. In this case, the robot can determine its position using landmark vertices based on adjacency. The problem is then to find the minimum cardinality of the required landmark set, and where to place it, so that the robot can always determine its location; the solution is given by the adjacency metric dimension and adjacency metric bases. Rodríguez-Velázquez and Fernau combined the adjacency metric dimension with the local metric dimension, yielding the local adjacency metric dimension, in which only adjacent vertices are required to have distinct adjacency representations. To obtain the local adjacency metric dimension of the sun graph and the stacked book graph, we use a constructive method that considers the adjacency representation of each vertex of the graph.
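For reference, the quantities discussed above (adjacency distance, adjacency representation, and the local adjacency metric dimension) are usually defined as follows; this is the standard formulation from the literature, included here as background rather than taken from the abstract.

```latex
% Adjacency distance between vertices u, v of a graph G = (V, E)
d_A(u,v) =
\begin{cases}
0, & u = v,\\
1, & uv \in E,\\
2, & \text{otherwise.}
\end{cases}

% Adjacency representation of v with respect to an ordered set W = \{w_1,\dots,w_k\} \subseteq V
r_A(v \mid W) = \bigl(d_A(v,w_1),\, d_A(v,w_2),\, \dots,\, d_A(v,w_k)\bigr)

% W is a local adjacency resolving set if adjacent vertices are distinguished:
uv \in E \;\Longrightarrow\; r_A(u \mid W) \neq r_A(v \mid W)

% The local adjacency metric dimension is the minimum cardinality of such a set:
\dim_{A,l}(G) = \min\{\, |W| : W \text{ is a local adjacency resolving set of } G \,\}
```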
Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.
2011-01-01
Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, V; Labby, Z; Culberson, W
Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector's response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar modulation characteristics. Measurements and simulations will better elucidate potential plan-class specific dosimetry correction factors.
Development of a multimetric index for assessing the biological condition of the Ohio River
Emery, E.B.; Simon, T.P.; McCormick, F.H.; Angermeier, P.L.; Deshon, J.E.; Yoder, C.O.; Sanders, R.E.; Pearson, W.D.; Hickman, G.D.; Reash, R.J.; Thomas, J.A.
2003-01-01
The use of fish communities to assess environmental quality is common for streams, but a standard methodology for large rivers is as yet largely undeveloped. We developed an index to assess the condition of fish assemblages along 1,580 km of the Ohio River. Representative samples of fish assemblages were collected from 709 Ohio River reaches, including 318 "least-impacted" sites, from 1991 to 2001 by means of standardized nighttime boat-electrofishing techniques. We evaluated 55 candidate metrics based on attributes of fish assemblage structure and function to derive a multimetric index of river health. We examined the spatial (by river kilometer) and temporal variability of these metrics and assessed their responsiveness to anthropogenic disturbances, namely, effluents, turbidity, and highly embedded substrates. The resulting Ohio River Fish Index (ORFIn) comprises 13 metrics selected because they responded predictably to measures of human disturbance or reflected desirable features of the Ohio River. We retained two metrics (the number of intolerant species and the number of sucker species [family Catostomidae]) from Karr's original index of biotic integrity. Six metrics were modified from indices developed for the upper Ohio River (the number of native species; number of great-river species; number of centrarchid species; the number of deformities, eroded fins and barbels, lesions, and tumors; percent individuals as simple lithophils; and percent individuals as tolerant species). We also incorporated three trophic metrics (the percent of individuals as detritivores, invertivores, and piscivores), one metric based on catch per unit effort, and one metric based on the percent of individuals as nonindigenous fish species. The ORFIn declined significantly where anthropogenic effects on substrate and water quality were prevalent and was significantly lower in the first 500 m below point source discharges than at least-impacted sites nearby. Although additional research on the temporal stability of the metrics and index will likely enhance the reliability of the ORFIn, its incorporation into Ohio River assessments still represents an improvement over current physicochemical protocols.
Miller, Vonda H; Jansen, Ben H
2008-12-01
Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning using a trajectory based distance metric for category discrimination in an oscillatory neural network model. Classification is accomplished using a trajectory based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric is useful for simple pattern recognition tasks and that supervised learning improves classification results.
Revisiting the Procedures for the Vector Data Quality Assurance in Practice
NASA Astrophysics Data System (ADS)
Erdoğan, M.; Torun, A.; Boyacı, D.
2012-07-01
Immense use of topographical data in spatial data visualization, business GIS (Geographic Information Systems) solutions and applications, and mobile and location-based services has forced topographic data providers to create standard, up-to-date and complete data sets within a sustainable framework. Data quality has been studied and researched for more than two decades, and there have been countless references on its semantics, its conceptual and logical representations, and many applications on spatial databases and GIS. However, there is a gap between research and practice with respect to spatial data quality, which increases the costs and decreases the efficiency of data production. Spatial data quality is well known to both academia and industry, but usually in different contexts. Research on spatial data quality has identified several issues of practical use, such as descriptive information, metadata, fulfillment of spatial relationships among data, integrity measures, geometric constraints, etc. The industry and data producers realize them in three stages: pre-, co- and post-data capturing. The pre-data capturing stage covers semantic modelling, data definition, cataloguing, modelling, and data dictionary and schema creation processes. The co-data capturing stage covers general rules of spatial relationships, data- and model-specific rules such as topologic and model-building relationships, geometric thresholds, data extraction guidelines, and the object-object, object-belonging class, object-non-belonging class and class-class relationships to be taken into account during data capturing. The post-data capturing stage covers specified QC (quality check) benchmarks and checking compliance with general and specific rules. The vector data quality criteria differ between the views of producers and users, but these criteria are generally driven by the needs, expectations and feedback of the users. This paper presents a practical method which closes the gap between theory and practice. Turning spatial data quality concepts into development and application requires the conceptual, logical and, most importantly, physical existence of a data model, rules and knowledge of their realization in the form of geo-spatial data. The applicable metrics and thresholds are determined on this concrete base. This study discusses the application of geo-spatial data quality issues and QA (quality assurance) and QC procedures in topographic data production. First we introduce the MGCP (Multinational Geospatial Co-production Program) data profile of the NATO (North Atlantic Treaty Organization) DFDD (DGIWG Feature Data Dictionary), the requirements of the data owner, the view of data producers for both data capturing and QC, and finally QA to fulfil user needs. Then our practical new approach, which divides quality into three phases, is introduced. Finally, the implementation of our approach to accomplish the metrics, measures and thresholds of the quality definitions is discussed. In this paper, especially geometry and semantics quality and the quality control procedures that can be performed by producers are discussed. Some applicable best practices that we experienced concerning quality control techniques and regulations that define the objectives and data production procedures are given in the final remarks.
These quality control procedures should include visual checks of the source data, captured vector data and printouts, some automatic checks that can be performed by software, and some semi-automatic checks performed in interaction with quality control personnel. Finally, these quality control procedures should ensure the geometric, semantic, attribution and metadata quality of vector data.
Spectral sum rules and magneto-roton as emergent graviton in fractional quantum Hall effect
Golkar, Siavash; Nguyen, Dung X.; Son, Dam T.
2016-01-05
Here, we consider gapped fractional quantum Hall states on the lowest Landau level when the Coulomb energy is much smaller than the cyclotron energy. We introduce two spectral densities, ρ_T(ω) and ρ̄_T(ω), which are proportional to the probabilities of absorption of circularly polarized gravitons by the quantum Hall system. We prove three sum rules relating these spectral densities with the shift S, the q⁴ coefficient of the static structure factor S₄, and the high-frequency shear modulus of the ground state μ_∞, which is precisely defined. We confirm an inequality, first suggested by Haldane, that S₄ is bounded from below by |S−1|/8. The Laughlin wavefunction saturates this bound, which we argue to imply that systems with ground state wavefunctions close to Laughlin's absorb gravitons of predominantly one circular polarization. We consider a nonlinear model where the sum rules are saturated by a single magneto-roton mode. In this model, the magneto-roton arises from the mixing between oscillations of an internal metric and the hydrodynamic motion. Implications for experiments are briefly discussed.
A software quality model and metrics for risk assessment
NASA Technical Reports Server (NTRS)
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the model for the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
On Decision-Making Among Multiple Rule-Bases in Fuzzy Control Systems
NASA Technical Reports Server (NTRS)
Tunstel, Edward; Jamshidi, Mo
1997-01-01
Intelligent control of complex multi-variable systems can be a challenge for single fuzzy rule-based controllers. This class of problems can often be managed with less difficulty by distributing intelligent decision-making amongst a collection of rule-bases. Such an approach requires that a mechanism be chosen to ensure goal-oriented interaction between the multiple rule-bases. In this paper, a hierarchical rule-based approach is described. Decision-making mechanisms based on generalized concepts from single-rule-based fuzzy control are presented. Finally, the effects of different aggregation operators on multi-rule-base decision-making are examined in a navigation control problem for mobile robots.
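As a small illustration of how aggregation operators affect multi-rule-base decision-making, the sketch below combines recommendations from two rule-bases with three common operators (minimum, maximum, and product); the rule-bases and membership degrees are invented, not taken from the paper.

```python
# Each rule-base proposes a fuzzy degree of support for each candidate action.
# (Invented numbers: e.g., an obstacle-avoidance rule-base vs. a goal-seeking one.)
rule_base_a = {"turn_left": 0.7, "go_straight": 0.4, "turn_right": 0.2}
rule_base_b = {"turn_left": 0.3, "go_straight": 0.8, "turn_right": 0.5}

aggregators = {
    "min (conjunctive)": min,
    "max (disjunctive)": max,
    "product": lambda x, y: x * y,
}

for name, agg in aggregators.items():
    combined = {a: agg(rule_base_a[a], rule_base_b[a]) for a in rule_base_a}
    decision = max(combined, key=combined.get)   # pick the action with highest support
    print(f"{name:18s} -> {combined}  choose: {decision}")
```

Running this shows that the chosen action can change with the aggregation operator, which is exactly the effect the paper examines for mobile-robot navigation.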
Metrics for Offline Evaluation of Prognostic Performance
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2010-01-01
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.
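One of the metrics introduced in this line of work, the alpha-lambda accuracy check, can be sketched as follows. This is a simplified point-prediction version (the published metric also accommodates probabilistic predictions), with invented run-to-failure times.

```python
def alpha_lambda_pass(t, t_start, t_eol, rul_pred, alpha=0.2, lam=0.5):
    """Simplified alpha-lambda accuracy check for a point RUL prediction.

    At the evaluation time t_lambda = t_start + lam * (t_eol - t_start), the
    prediction passes if it lies within +/- alpha of the true remaining
    useful life at that time."""
    t_lambda = t_start + lam * (t_eol - t_start)
    if abs(t - t_lambda) > 1e-9:
        raise ValueError("prediction not made at the alpha-lambda evaluation time")
    true_rul = t_eol - t_lambda
    return (1 - alpha) * true_rul <= rul_pred <= (1 + alpha) * true_rul

# Run-to-failure from t=0 to t=100; prediction made halfway through (t=50)
print(alpha_lambda_pass(t=50, t_start=0, t_eol=100, rul_pred=45))   # True (within 20%)
print(alpha_lambda_pass(t=50, t_start=0, t_eol=100, rul_pred=70))   # False
```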
Modeling Mediterranean forest structure using airborne laser scanning data
NASA Astrophysics Data System (ADS)
Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide
2017-05-01
The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forests, conifer plantations and coppice oaks subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were less for the more complex study area in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than when using the CHM-based metrics, thus demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics, 99th percentile of the echo height distribution and interquantile distance. For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate, 15.5 m for GREG and 16.2 m for SRS. Further, the GREG estimator with standard error of 0.10 m was considerably more precise than the SRS estimator with standard error of 0.69 m.
NASA Astrophysics Data System (ADS)
McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.
2018-02-01
Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
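As a concrete example of how different robustness metrics can rank the same decision alternatives differently, the sketch below computes two common metrics (maximin performance and minimax regret) over an invented performance matrix; neither the data nor the metric choice comes from the case studies above.

```python
import numpy as np

# Invented performance of 3 decision alternatives (rows) under 4 scenarios (columns);
# higher is better (e.g., supply reliability).
perf = np.array([[0.90, 0.70, 0.60, 0.80],
                 [0.75, 0.74, 0.73, 0.72],
                 [0.95, 0.50, 0.85, 0.65]])

maximin = perf.min(axis=1)              # worst-case performance per alternative
regret = perf.max(axis=0) - perf        # regret relative to the best alternative per scenario
minimax_regret = regret.max(axis=1)     # worst-case regret per alternative

print("Ranking by maximin       :", np.argsort(-maximin))        # larger is better
print("Ranking by minimax regret:", np.argsort(minimax_regret))  # smaller is better
```

Even in this toy case the two rankings disagree beyond the top alternative, illustrating why the decision context and risk attitude matter when choosing a metric.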
Anderson, Donald D; Kilburg, Anthony T; Thomas, Thaddeus P; Marsh, J Lawrence
2016-01-01
Post-traumatic osteoarthritis (PTOA) is common after intra-articular fractures of the tibial plafond. An objective CT-based measure of fracture severity was previously found to reliably predict whether PTOA developed following surgical treatment of such fractures. However, the extended time required obtaining the fracture energy metric and its reliance upon an intact contralateral limb CT limited its clinical applicability. The objective of this study was to establish an expedited fracture severity metric that provided comparable PTOA predictive ability without the prior limitations. An expedited fracture severity metric was computed from the CT scans of 30 tibial plafond fractures using textural analysis to quantify disorder in CT images. The expedited method utilized an intact surrogate model to enable severity assessment without requiring a contralateral limb CT. Agreement between the expedited fracture severity metric and the Kellgren-Lawrence (KL) radiographic OA score at two-year follow-up was assessed using concordance. The ability of the metric to differentiate between patients that did or did not develop PTOA was assessed using the Wilcoxon Ranked Sum test. The expedited severity metric agreed well (75.2% concordance) with the KL scores. The initial fracture severity of cases that developed PTOA differed significantly (p = 0.004) from those that did not. Receiver operating characteristic analysis showed that the expedited severity metric could accurately predict PTOA outcome in 80% of the cases. The time required to obtain the expedited severity metric averaged 14.9 minutes/ case, and the metric was obtained without using an intact contralateral CT. The expedited CT-based methods for fracture severity assessment present a solution to issues limiting the utility of prior methods. In a relatively short amount of time, the expedited methodology provided a severity score capable of predicting PTOA risk, without needing to have the intact contralateral limb included in the CT scan. The described methods provide surgeons an objective, quantitative representation of the severity of a fracture. Obtained prior to the surgery, it provides a reasonable alternative to current subjective classification systems. The expedited severity metric offers surgeons an objective means for factoring severity of joint insult into treatment decision-making.
New non-naturally reductive Einstein metrics on exceptional simple Lie groups
NASA Astrophysics Data System (ADS)
Chen, Huibin; Chen, Zhiqi; Deng, Shaoqiang
2018-01-01
In this article, we construct several non-naturally reductive Einstein metrics on exceptional simple Lie groups, which are found through the decomposition arising from generalized Wallach spaces. Using the decomposition corresponding to the two involutions, we calculate the non-zero coefficients in the formulas of the components of Ricci tensor with respect to the given metrics. The Einstein metrics are obtained as solutions of a system of polynomial equations, which we manipulate by symbolic computations using Gröbner bases. In particular, we discuss the concrete numbers of non-naturally reductive Einstein metrics for each case up to isometry and homothety.
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
On Information Metrics for Spatial Coding.
Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L
2018-04-01
The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
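For concreteness, the widely used bits/spike measure from Skaggs et al. referenced above can be computed as follows from an occupancy distribution and a spatial firing-rate map; the numbers are simulated.

```python
import numpy as np

def skaggs_information(occupancy_p, rate_map):
    """Spatial information in bits/spike (Skaggs et al., 1993):
    I = sum_i p_i * (lambda_i / lambda_bar) * log2(lambda_i / lambda_bar),
    where p_i is the occupancy probability and lambda_i the firing rate in bin i."""
    p = np.asarray(occupancy_p, dtype=float)
    lam = np.asarray(rate_map, dtype=float)
    lam_bar = np.sum(p * lam)
    ratio = lam / lam_bar
    terms = np.where(lam > 0, p * ratio * np.log2(np.where(ratio > 0, ratio, 1.0)), 0.0)
    return terms.sum()

# Toy example: 4 spatial bins, equal occupancy, firing concentrated in one bin
p = [0.25, 0.25, 0.25, 0.25]
rates = [8.0, 0.5, 0.5, 0.5]          # Hz
print(f"{skaggs_information(p, rates):.2f} bits/spike")
```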
Developing a Metrics-Based Online Strategy for Libraries
ERIC Educational Resources Information Center
Pagano, Joe
2009-01-01
Purpose: The purpose of this paper is to provide an introduction to the various web metrics tools that are available, and to indicate how these might be used in libraries. Design/methodology/approach: The paper describes ways in which web metrics can be used to inform strategic decision making in libraries. Findings: A framework of possible web…
High resolution metric imaging payload
NASA Astrophysics Data System (ADS)
Delclaud, Y.
2017-11-01
Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, within the framework of earth observation systems able to provide government and military users with metric images from space. This leadership allowed ALCATEL to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendell, Mark J.; Lei, Quanhong; Cozen, Myrna O.
2003-10-01
Metrics of culturable airborne microorganisms for either total organisms or suspected harmful subgroups have generally not been associated with symptoms among building occupants. However, the visible presence of moisture damage or mold in residences and other buildings has consistently been associated with respiratory symptoms and other health effects. This relationship is presumably caused by adverse but uncharacterized exposures to moisture-related microbiological growth. In order to assess this hypothesis, we studied relationships in U.S. office buildings between the prevalence of respiratory and irritant symptoms, the concentrations of airborne microorganisms that require moist surfaces on which to grow, and the presence of visible water damage. For these analyses we used data on buildings, indoor environments, and occupants collected from a representative sample of 100 U.S. office buildings in the U.S. Environmental Protection Agency's Building Assessment Survey and Evaluation (EPA BASE) study. We created 19 alternate metrics, using scales ranging from 3-10 units, that summarized the concentrations of airborne moisture-indicating microorganisms (AMIMOs) as indicators of moisture in buildings. Two were constructed to resemble a metric previously reported to be associated with lung function changes in building occupants; the others were based on another metric from the same group of Finnish researchers, concentration cutpoints from other studies, and professional judgment. We assessed three types of associations: between AMIMO metrics and symptoms in office workers, between evidence of water damage and symptoms, and between water damage and AMIMO metrics. We estimated (as odds ratios (ORs) with 95% confidence intervals) the unadjusted and adjusted associations between the 19 metrics and two types of weekly, work-related symptoms--lower respiratory and mucous membrane--using logistic regression models. Analyses used the original AMIMO metrics and were repeated with simplified dichotomized metrics. The multivariate models adjusted for other potential confounding variables associated with respondents, occupied spaces, buildings, or ventilation systems. Models excluded covariates for moisture-related risks hypothesized to increase AMIMO levels. We also estimated the association of water damage (using variables for specific locations in the study space or building, or summary variables) with the two symptom outcomes. Finally, using selected AMIMO metrics as outcomes, we constructed logistic regression models with observations at the building level to estimate unadjusted and adjusted associations of evident water damage with AMIMO metrics. All original AMIMO metrics showed little overall pattern of unadjusted or adjusted association with either symptom outcome. The 3-category metric resembling that previously used by others, which of all constructed metrics had the largest number of buildings in its top category, was not associated with symptoms in these buildings. However, most metrics with few buildings in their highest category showed increased risk for both symptoms in that category, especially metrics using cutpoints of >100 but <500 colony-forming units (CFU)/m³ for concentration of total culturable fungi. With AMIMO metrics dichotomized to compare the highest category with all lower categories combined, four metrics had unadjusted ORs between 1.4 and 1.6 for both symptom outcomes. The same four metrics had adjusted ORs of 1.7-2.1 for both symptom outcomes. In models of water damage and symptoms, several specific locations of past water damage had significant associations with outcomes, with ORs ranging from 1.4-1.6. In bivariate models of water damage and selected AMIMO metrics, a number of specific types of water damage and several summary variables for water damage were very strongly associated with AMIMO metrics (significant ORs ranging above 15). Multivariate modeling with the dichotomous AMIMO metrics was not possible due to limited numbers of observations.
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metrics development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method, and should be more likely to produce high-quality metrics that will result in continuous process improvement.
ChemicalTagger: A tool for semantic text-mining in chemistry
2011-01-01
Background The primary method for scientific communication is in the form of published scientific articles and theses which use natural language combined with domain-specific terminology. As such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. Results We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts-of-speech. The ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). Conclusions It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision. PMID:21575201
A Proposal for IoT Dynamic Routes Selection Based on Contextual Information.
Araújo, Harilton da Silva; Filho, Raimir Holanda; Rodrigues, Joel J P C; Rabelo, Ricardo de A L; Sousa, Natanael de C; Filho, José C C L S; Sobral, José V V
2018-01-26
The Internet of Things (IoT) is based on interconnection of intelligent and addressable devices, allowing their autonomy and proactive behavior with Internet connectivity. Data dissemination in IoT usually depends on the application and requires context-aware routing protocols that must include auto-configuration features (which adapt the behavior of the network at runtime, based on context information). This paper proposes an approach for IoT route selection using fuzzy logic in order to attain the requirements of specific applications. In this case, fuzzy logic is used to translate into mathematical terms the imprecise information expressed by a set of linguistic rules. For this purpose, four Objective Functions (OFs) are proposed for the Routing Protocol for Low-Power and Lossy Networks (RPL); such OFs are dynamically selected based on context information. The aforementioned OFs are generated from the fusion of the following metrics: Expected Transmission Count (ETX), Number of Hops (NH) and Energy Consumed (EC). The experiments performed through simulation, associated with the statistical data analysis, conclude that this proposal provides high reliability by successfully delivering nearly 100% of data packets, low delay for data delivery and an increase in QoS. In addition, a 30% improvement is attained in the network lifetime when using one of the proposed objective functions, keeping the devices alive for a longer duration.
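A loose sketch of the kind of metric fusion described above: simple membership functions turn ETX, hop count, and energy consumption into degrees of "goodness" that are combined into one route score. The breakpoints, weights, and defuzzification here are invented for illustration; they are not the paper's rule base or its four objective functions.

```python
def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp_down(x, lo, hi):
    """Membership 'low': 1 at/below lo, 0 at/above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def route_score(etx, hops, energy_mj):
    good_etx = tri(etx, 0.5, 1.0, 4.0)          # ETX near 1 is ideal (assumed)
    good_hops = ramp_down(hops, 1, 10)          # fewer hops preferred
    good_energy = ramp_down(energy_mj, 5, 50)   # lower consumption preferred
    return 0.5 * good_etx + 0.2 * good_hops + 0.3 * good_energy  # assumed weights

candidate_routes = {
    "A": dict(etx=1.2, hops=3, energy_mj=12.0),
    "B": dict(etx=2.5, hops=2, energy_mj=30.0),
}
best = max(candidate_routes, key=lambda r: route_score(**candidate_routes[r]))
print("selected route:", best)
```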
Edeani, Francis; Malik, Adeel; Kaul, Ajay
2017-03-01
The Chicago classification was based on metrics derived from studies in asymptomatic adult subjects. Our objectives were to characterize esophageal motility disorders in children and to determine whether the spectrum of manometric findings is similar between the pediatric and adult populations. Studies have suggested that the metrics utilized in manometric diagnosis depend on age, size, and manometric assembly. This would imply that a different set of metrics should be used for the pediatric population. There are no standardized and generally accepted metrics for use in the pediatric population, though there have been attempts to establish metrics specific to this population. Overall, we found that the distribution of esophageal motility disorders in children was like that described in adults using the Chicago classification. This analysis will serve as a prequel to follow-up studies exploring the individual metrics for variability among patients, with the objective of establishing novel metrics for the pediatric population.
Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics
Cunha, Alexandre; Toga, A. W.; Parker, D. Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
Krieger, Jonathan D
2014-08-01
I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.
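For reference, the standard circularity shape factor mentioned above is commonly computed as 4πA/P² for a closed outline of area A and perimeter P. The sketch below evaluates it for a polygonal outline; the paper's geometric circularity metric (the circle-to-line vector) is not reproduced here.

```python
import numpy as np

def circularity_shape_factor(xy):
    """4*pi*A / P**2 for a closed polygon given as an (n, 2) array of vertices."""
    x, y = xy[:, 0], xy[:, 1]
    # Shoelace formula for the area; roll() closes the polygon implicitly.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
    return 4 * np.pi * area / perimeter**2   # 1.0 for a circle, toward 0 for elongated shapes

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(round(circularity_shape_factor(circle), 3))   # close to 1.0
```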
Geology of the Sklodowska Region, Lunar Farside. M.S. Thesis Final Report
NASA Technical Reports Server (NTRS)
Kauffman, J. D.
1974-01-01
Investigation of an area on the lunar farside has resulted in a geologic map, development of a regional stratigraphic sequence, and interpretation of surface materials. Apollo 15 metric photographs were used in conjunction with photogrammetric techniques to produce a base map to which geologic units were later added. Geologic units were first delineated on the metric photographs and then transferred to the base map. Materials were defined and described from selected Lunar Orbiter and Apollo 15 metric, panoramic, and Hasselblad photographs on the basis of distinctive morphologic characteristics.
Eckart frame vibration-rotation Hamiltonians: Contravariant metric tensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesonen, Janne, E-mail: janne.pesonen@helsinki.fi
2014-02-21
Eckart frame is a unique embedding in the theory of molecular vibrations and rotations. It is defined by the condition that the Coriolis coupling of the reference structure of the molecule is zero for every choice of the shape coordinates. It is far from trivial to set up Eckart kinetic energy operators (KEOs), when the shape of the molecule is described by curvilinear coordinates. In order to obtain the KEO, one needs to set up the corresponding contravariant metric tensor. Here, I derive explicitly the Eckart frame rotational measuring vectors. Their inner products with themselves give the rotational elements, and their inner products with the vibrational measuring vectors (which, in the absence of constraints, are the mass-weighted gradients of the shape coordinates) give the Coriolis elements of the contravariant metric tensor. The vibrational elements are given as the inner products of the vibrational measuring vectors with themselves, and these elements do not depend on the choice of the body-frame. The present approach has the advantage that it does not depend on any particular choice of the shape coordinates, but it can be used in conjunction with all shape coordinates. Furthermore, it does not involve evaluation of covariant metric tensors, chain rules of derivation, or numerical differentiation, and it can be easily modified if there are constraints on the shape of the molecule. Both the planar and non-planar reference structures are accounted for. The present method is particularly suitable for numerical work. Its computational implementation is outlined in an example, where I discuss how to evaluate vibration-rotation energies and eigenfunctions of a general N-atomic molecule, the shape of which is described by a set of local polyspherical coordinates.
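In the notation of the abstract, the blocks of the contravariant metric tensor are assembled from inner products of the rotational measuring vectors and the vibrational measuring vectors (the latter being the mass-weighted gradients of the shape coordinates when no constraints are present). A schematic summary, with the index convention (Greek indices for rotational components, Latin indices for shape coordinates) assumed here rather than taken from the paper:

```latex
% Schematic block structure of the contravariant metric tensor (index convention assumed)
g^{\alpha\beta}_{\mathrm{rot}} = \mathbf{e}_{\alpha}^{\mathrm{rot}} \cdot \mathbf{e}_{\beta}^{\mathrm{rot}}, \qquad
g^{\alpha i}_{\mathrm{Cor}} = \mathbf{e}_{\alpha}^{\mathrm{rot}} \cdot \mathbf{e}_{i}^{\mathrm{vib}}, \qquad
g^{ij}_{\mathrm{vib}} = \mathbf{e}_{i}^{\mathrm{vib}} \cdot \mathbf{e}_{j}^{\mathrm{vib}},
\quad \text{where, absent constraints, } \mathbf{e}_{i}^{\mathrm{vib}} \text{ is the mass-weighted gradient of shape coordinate } q_i.
```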
Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim
2012-01-01
Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
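A minimal sketch of the IsoSpace convex hull idea: the hull area of consumer values in δ13C-δ15N space, with measurement uncertainty propagated by resampling. The Gaussian resampling scheme and the synthetic data are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

# Synthetic consumer isotope values (d13C, d15N) and a per-point 1-sigma uncertainty.
iso = np.column_stack([rng.normal(-15, 1.5, 30), rng.normal(12, 1.0, 30)])
sd = 0.3

def hull_area(points):
    # For 2-D input, ConvexHull.volume is the enclosed area (its .area is the perimeter).
    return ConvexHull(points).volume

# Propagate measurement uncertainty by resampling each point from N(value, sd).
areas = [hull_area(iso + rng.normal(0, sd, iso.shape)) for _ in range(1000)]
print(f"hull area = {np.mean(areas):.2f} +/- {np.std(areas):.2f} (permil^2)")
```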
Testing, Requirements, and Metrics
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William
1998-01-01
The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
Clearing margin system in the futures markets—Applying the value-at-risk model to Taiwanese data
NASA Astrophysics Data System (ADS)
Chiu, Chien-Liang; Chiang, Shu-Mei; Hung, Jui-Cheng; Chen, Yu-Lung
2006-07-01
This article sets out to investigate whether the TAIFEX has an adequate clearing margin adjustment system via unconditional coverage tests, conditional coverage tests and the mean relative scaled bias to assess the performance of three value-at-risk (VaR) models (i.e., the TAIFEX, RiskMetrics and GARCH-t). For the same model, original and absolute returns are compared to explore which can accurately capture the true risk. For the same return, daily and tiered adjustment methods are examined to evaluate which corresponds to risk best. The results indicate that the clearing margin adjustment of the TAIFEX cannot reflect true risks. The adjustment rules, including the use of absolute return and tiered adjustment of the clearing margin, have distorted VaR-based margin requirements. Besides, the results suggest that the TAIFEX should use original return to compute VaR and a daily adjustment system to set the clearing margin. This approach would improve the efficiency of funds operation and the liquidity of the futures markets.
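The unconditional coverage check mentioned above is usually Kupiec's likelihood-ratio test on the number of VaR exceedances. The sketch below pairs it with a RiskMetrics-style EWMA VaR on synthetic returns; the lambda = 0.94 decay, the 99% level, and the Student-t return generator are conventional assumptions, not TAIFEX data or parameters.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_t(df=5, size=1500)      # synthetic daily returns

# RiskMetrics-style EWMA variance with the conventional lambda = 0.94.
lam = 0.94
var = np.var(returns[:50])
sigma = np.empty_like(returns)
for t, r in enumerate(returns):
    sigma[t] = np.sqrt(var)                 # forecast made before observing r
    var = lam * var + (1 - lam) * r ** 2

p = 0.01                                    # 99% one-day VaR
var99 = -norm.ppf(p) * sigma                # loss threshold (positive number)
violations = returns < -var99
N, T = int(violations.sum()), violations.size

# Kupiec unconditional coverage test: H0 says the violation rate equals p.
pi = N / T
lr_uc = -2 * ((T - N) * np.log(1 - p) + N * np.log(p)
              - (T - N) * np.log(1 - pi) - N * np.log(pi))
print(f"{N}/{T} violations, LR_uc = {lr_uc:.2f}, p-value = {1 - chi2.cdf(lr_uc, 1):.3f}")
```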
Jeagle: a JAVA Runtime Verification Tool
NASA Technical Reports Server (NTRS)
DAmorim, Marcelo; Havelund, Klaus
2005-01-01
We introduce the temporal logic Jeagle and its supporting tool for runtime verification of Java programs. A monitor for a Jeagle formula checks if a finite trace of program events satisfies the formula. Jeagle is a programming-oriented extension of the powerful rule-based Eagle logic that has been shown to be capable of defining and implementing a range of finite trace monitoring logics, including future and past time temporal logic, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace. Jeagle extends Eagle with constructs for capturing parameterized program events such as method calls and method returns. Parameters can be the objects that methods are called upon, arguments to methods, and return values. Jeagle allows one to refer to these in formulas. The tool performs automated program instrumentation using AspectJ. We show the transformational semantics of Jeagle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulties uncovering such variabilities when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation metric and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. The biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
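The building block of both metrics is the set of pairwise Jensen-Shannon distances among the source PDFs. The sketch below computes that distance matrix for histogram-estimated PDFs and, as a rough stand-in for the outlyingness idea, the distance of each source to the pooled distribution; the paper's simplex-projection construction of the two metrics is not reproduced.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(3)

# Three synthetic data sources, with a deliberate shift in the third one.
sources = [rng.normal(0, 1, 500), rng.normal(0, 1, 500), rng.normal(0.8, 1, 500)]
bins = np.linspace(-4, 5, 40)

def pdf(x):
    h, _ = np.histogram(x, bins=bins, density=True)
    return h / h.sum()                      # discrete PDF over common bins

pdfs = [pdf(s) for s in sources]

# Pairwise Jensen-Shannon distance matrix (base-2 logarithm, so values lie in [0, 1]).
D = np.array([[jensenshannon(p, q, base=2) for q in pdfs] for p in pdfs])
print(np.round(D, 3))

# Rough per-source "outlyingness": distance to the pooled distribution.
central = pdf(np.concatenate(sources))
print([round(jensenshannon(p, central, base=2), 3) for p in pdfs])
```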
NASA Astrophysics Data System (ADS)
Van Sundert, Kevin; Horemans, Joanna A.; Stendahl, Johan; Vicca, Sara
2018-06-01
The availability of nutrients is one of the factors that regulate terrestrial carbon cycling and modify ecosystem responses to environmental changes. Nonetheless, nutrient availability is often overlooked in climate-carbon cycle studies because it depends on the interplay of various soil factors that would ideally be comprised into metrics applicable at large spatial scales. Such metrics do not currently exist. Here, we use a Swedish forest inventory database that contains soil data and tree growth data for > 2500 forests across Sweden to (i) test which combination of soil factors best explains variation in tree growth, (ii) evaluate an existing metric of constraints on nutrient availability, and (iii) adjust this metric for boreal forest data. With (iii), we thus aimed to provide an adjustable nutrient metric, applicable for Sweden and with potential for elaboration to other regions. While taking into account confounding factors such as climate, N deposition, and soil oxygen availability, our analyses revealed that the soil organic carbon concentration (SOC) and the ratio of soil carbon to nitrogen (C : N) were the most important factors explaining variation in normalized (climate-independent) productivity (mean annual volume increment - m3 ha-1 yr-1) across Sweden. Normalized forest productivity was significantly negatively related to the soil C : N ratio (R2 = 0.02-0.13), while SOC exhibited an empirical optimum (R2 = 0.05-0.15). For the metric, we started from a (yet unvalidated) metric for constraints on nutrient availability that was previously developed by the International Institute for Applied Systems Analysis (IIASA - Laxenburg, Austria) for evaluating potential productivity of arable land. This IIASA metric requires information on soil properties that are indicative of nutrient availability (SOC, soil texture, total exchangeable bases - TEB, and pH) and is based on theoretical considerations that are also generally valid for nonagricultural ecosystems. However, the IIASA metric was unrelated to normalized forest productivity across Sweden (R2 = 0.00-0.01) because the soil factors under consideration were not optimally implemented according to the Swedish data, and because the soil C : N ratio was not included. Using two methods (each one based on a different way of normalizing productivity for climate), we adjusted this metric by incorporating soil C : N and modifying the relationship between SOC and nutrient availability in view of the observed relationships across our database. In contrast to the IIASA metric, the adjusted metrics explained some variation in normalized productivity in the database (R2 = 0.03-0.21; depending on the applied method). A test for five manually selected local fertility gradients in our database revealed a significant and stronger relationship between the adjusted metrics and productivity for each of the gradients (R2 = 0.09-0.38). This study thus shows for the first time how nutrient availability metrics can be evaluated and adjusted for a particular ecosystem type, using a large-scale database.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
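A minimal sketch of the comparison step: SSIM between two candidate renderings of the same middle viewpoint. scikit-image's structural_similarity is an assumed implementation choice, and random stand-in images replace the DIBR-warped views, which are outside the scope of this sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(4)

# Stand-ins for two virtual views of the same middle viewpoint, one warped from
# the left camera and one from the right (here: a shared scene plus noise).
scene = rng.random((240, 320))
view_from_left = np.clip(scene + rng.normal(0, 0.02, scene.shape), 0, 1)
view_from_right = np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1)

# Higher SSIM between the two renderings suggests fewer synthesis artifacts.
score = structural_similarity(view_from_left, view_from_right, data_range=1.0)
print(f"SVC-style SSIM score: {score:.3f}")
```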
Multidisciplinary life cycle metrics and tools for green buildings.
Helgeson, Jennifer F; Lippiatt, Barbara C
2009-07-01
Building sector stakeholders need compelling metrics, tools, data, and case studies to support major investments in sustainable technologies. Proponents of green building widely claim that buildings integrating sustainable technologies are cost effective, but often these claims are based on incomplete, anecdotal evidence that is difficult to reproduce and defend. The claims suffer from 2 main weaknesses: 1) buildings on which claims are based are not necessarily "green" in a science-based, life cycle assessment (LCA) sense and 2) measures of cost effectiveness often are not based on standard methods for measuring economic worth. Yet, the building industry demands compelling metrics to justify sustainable building designs. The problem is hard to solve because, until now, neither methods nor robust data supporting defensible business cases were available. The US National Institute of Standards and Technology (NIST) Building and Fire Research Laboratory is beginning to address these needs by developing metrics and tools for assessing the life cycle economic and environmental performance of buildings. Economic performance is measured with the use of standard life cycle costing methods. Environmental performance is measured by LCA methods that assess the "carbon footprint" of buildings, as well as 11 other sustainability metrics, including fossil fuel depletion, smog formation, water use, habitat alteration, indoor air quality, and effects on human health. Carbon efficiency ratios and other eco-efficiency metrics are established to yield science-based measures of the relative worth, or "business cases," for green buildings. Here, the approach is illustrated through a realistic building case study focused on different heating, ventilation, air conditioning technology energy efficiency. Additionally, the evolution of the Building for Environmental and Economic Sustainability multidisciplinary team and future plans in this area are described.
Feature Extraction of High-Dimensional Structures for Exploratory Analytics
2013-04-01
[Figure: comparison of Euclidean vs. geodesic distance. LDRs use a metric based on the Euclidean distance between two points, while NLDRs are based on geodesic distance; an NLDR successfully unrolls the curved manifold, whereas an LDR fails.]
PQSM-based RR and NR video quality metrics
NASA Astrophysics Data System (ADS)
Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu
2003-06-01
This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring the visual distortion. It makes use of the selectivity characteristic of HVS (Human Visual System) that it pays more attention to certain area/regions of visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding area/regions for images or video. Due to its generality, PQSM can be incorporated into any visual distortion metrics: to improve effectiveness or/and efficiency of perceptual metrics; or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchfield, Adam; Schweitzer, Eran; Athari, Mir
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.
In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Finally, our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
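A sketch of the underlying quantities only: contact maps thresholded from Cα coordinates and a naive count of shared contacts under a fixed identity alignment. The max-CMO metric itself maximizes this overlap over alignments, which requires the optimization machinery described in the paper and is deliberately omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def contact_map(coords, cutoff=8.0):
    """Binary contact map from (n, 3) C-alpha coordinates (cutoff in angstroms)."""
    d = squareform(pdist(coords))
    return (d < cutoff) & ~np.eye(len(coords), dtype=bool)

def overlap_identity_alignment(cm_a, cm_b):
    """Shared contacts under the identity alignment (not the max-CMO optimum)."""
    n = min(len(cm_a), len(cm_b))
    return int(np.sum(cm_a[:n, :n] & cm_b[:n, :n]) // 2)   # symmetric map, count pairs once

rng = np.random.default_rng(5)
protein_a = np.cumsum(rng.normal(0, 2, (60, 3)), axis=0)    # toy C-alpha trace
protein_b = protein_a + rng.normal(0, 0.5, protein_a.shape)
print(overlap_identity_alignment(contact_map(protein_a), contact_map(protein_b)))
```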
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
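A sketch of the two simple-deviation statistics favored above, computed on log10-transformed chlorophyll-a so that the results read as multiplicative factors. The log transform and the toy matchup values are assumptions of this sketch, not numbers from the validation data set.

```python
import numpy as np

def log_bias_and_mae(satellite, in_situ):
    """Multiplicative bias and MAE from log10 residuals (dimensionless factors)."""
    resid = np.log10(satellite) - np.log10(in_situ)
    bias = 10 ** np.mean(resid)          # >1 means the satellite overestimates
    mae = 10 ** np.mean(np.abs(resid))   # typical multiplicative error factor
    return bias, mae

sat = np.array([0.12, 0.35, 1.10, 2.40, 0.08])     # chlorophyll-a, mg m^-3 (toy matchups)
obs = np.array([0.10, 0.40, 0.90, 2.00, 0.10])
bias, mae = log_bias_and_mae(sat, obs)
print(f"bias = {bias:.2f}x, MAE = {mae:.2f}x")
```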
Sigma Routing Metric for RPL Protocol.
Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian
2018-04-21
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures a better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a high margin, in terms of network latency, packet delivery ratio, lifetime, and power consumption.
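A minimal sketch of the selection rule described above: score each candidate route by the standard deviation of its per-hop ETX values and prefer the smallest spread. The tie-breaking and the integration into MRHOF are left out, and the ETX values are invented.

```python
import statistics

# Candidate routes as lists of per-hop ETX values (illustrative numbers).
routes = {
    "R1": [1.1, 1.2, 1.1, 1.3],   # uniform links, low sigma
    "R2": [1.0, 1.0, 3.5],        # one poor link inflates the spread
}

def sigma_etx(etx_values):
    return statistics.pstdev(etx_values)

best = min(routes, key=lambda r: sigma_etx(routes[r]))
for name, etx in routes.items():
    print(name, "sigma =", round(sigma_etx(etx), 3))
print("selected:", best)
```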
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets while the passive monitoring checks both application-layer metrics (i.e., user traffic condition by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
Can segmentation evaluation metric be used as an indicator of land cover classification accuracy?
NASA Astrophysics Data System (ADS)
Švab Lenarčič, Andreja; Đurić, Nataša; Čotar, Klemen; Ritlop, Klemen; Oštir, Krištof
2016-10-01
It is a broadly established belief that the segmentation result significantly affects subsequent image classification accuracy. However, the actual correlation between the two has never been evaluated. Such an evaluation would be of considerable importance for any attempts to automate the object-based classification process, as it would reduce the amount of user intervention required to fine-tune the segmentation parameters. We conducted an assessment of segmentation and classification by analyzing 100 different segmentation parameter combinations, 3 classifiers, 5 land cover classes, 20 segmentation evaluation metrics, and 7 classification accuracy measures. The reliability definition of segmentation evaluation metrics as indicators of land cover classification accuracy was based on the linear correlation between the two. All unsupervised metrics that are not based on the number of segments have a very strong correlation with all classification measures and are therefore reliable as indicators of land cover classification accuracy. On the other hand, the correlation for supervised metrics depends on so many factors that it cannot be trusted as a reliable classification quality indicator. Algorithms for land cover classification studied in this paper are widely used; therefore, the presented results are applicable to a wider area.
Problem formulation, metrics, open government, and on-line collaboration
NASA Astrophysics Data System (ADS)
Ziegler, C. R.; Schofield, K.; Young, S.; Shaw, D.
2010-12-01
Problem formulation leading to effective environmental management, including synthesis and application of science by government agencies, may benefit from collaborative on-line environments. This is illustrated by two interconnected projects: 1) literature-based evidence tools that support causal assessment and problem formulation, and 2) development of output, outcome, and sustainability metrics for tracking environmental conditions. Specifically, peer-production mechanisms allow for global contribution to science-based causal evidence databases, and subsequent crowd-sourced development of causal networks supported by that evidence. In turn, science-based causal networks may inform problem formulation and selection of metrics or indicators to track environmental condition (or problem status). Selecting and developing metrics in a collaborative on-line environment may improve stakeholder buy-in, the explicit relevance of metrics to planning, and the ability to approach problem apportionment or accountability, and to define success or sustainability. Challenges include contribution governance, data-sharing incentives, linking on-line interfaces to data service providers, and the intersection of environmental science and social science. Degree of framework access and confidentiality may vary by group and/or individual, but may ultimately be geared at demonstrating connections between science and decision making and supporting a culture of open government, by fostering transparency, public engagement, and collaboration.
Sacchet, Matthew D.; Prasad, Gautam; Foland-Ross, Lara C.; Thompson, Paul M.; Gotlib, Ian H.
2015-01-01
Recently, there has been considerable interest in understanding brain networks in major depressive disorder (MDD). Neural pathways can be tracked in the living brain using diffusion-weighted imaging (DWI); graph theory can then be used to study properties of the resulting fiber networks. To date, global abnormalities have not been reported in tractography-based graph metrics in MDD, so we used a machine learning approach based on “support vector machines” to differentiate depressed from healthy individuals based on multiple brain network properties. We also assessed how important specific graph metrics were for this differentiation. Finally, we conducted a local graph analysis to identify abnormal connectivity at specific nodes of the network. We were able to classify depression using whole-brain graph metrics. Small-worldness was the most useful graph metric for classification. The right pars orbitalis, right inferior parietal cortex, and left rostral anterior cingulate all showed abnormal network connectivity in MDD. This is the first use of structural global graph metrics to classify depressed individuals. These findings highlight the importance of future research to understand network properties in depression across imaging modalities, improve classification results, and relate network alterations to psychiatric symptoms, medication, and comorbidities. PMID:25762941
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
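A minimal sketch of the metric-learning step: fit multiclass LDA on labeled training spectra, then measure distances in the transformed space so that class-relevant spectral differences dominate. scikit-learn is an assumed tool choice and the "spectra" are synthetic, not CRISM data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.spatial.distance import euclidean

rng = np.random.default_rng(6)

# Synthetic "spectra": 3 mineralogical classes, 50 bands each.
n_per, bands = 100, 50
class_means = rng.normal(0, 1, (3, bands))
X = np.vstack([rng.normal(m, 0.5, (n_per, bands)) for m in class_means])
y = np.repeat([0, 1, 2], n_per)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

def learned_distance(spec_a, spec_b):
    """Distance in the LDA-transformed space (the learned task-specific metric)."""
    a, b = lda.transform([spec_a, spec_b])
    return euclidean(a, b)

print(round(learned_distance(X[0], X[1]), 3))      # same class: small
print(round(learned_distance(X[0], X[150]), 3))    # different class: larger
```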
Decision-relevant evaluation of climate models: A case study of chill hours in California
NASA Astrophysics Data System (ADS)
Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.
2017-12-01
The past decade has seen a proliferation of different climate datasets with over 60 climate models currently in use. Comparative evaluation and validation of models can assist practitioners in choosing the most appropriate models for adaptation planning. However, such assessments are usually conducted for `climate metrics' such as seasonal temperature, while sectoral decisions are often based on `decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for `outcome-based' metrics. Using chill hours (number of hours in winter months where the temperature is less than 45 deg F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region, to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting chill hour slope is dependent on its skill in predicting mean chill hours, which results from the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large ensemble mean results may not always be most appropriate for this outcome metric. Our assessment sheds light on key differences between global versus local skill, and broad versus specific skill of climate models, highlighting that decision-relevant model evaluation may be crucial for providing practitioners with the best available climate information for their specific needs.
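The outcome metric itself is straightforward to compute from hourly temperatures. A sketch, assuming hourly data in a pandas series and a November-February winter window (the study's exact window is not restated in the abstract):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic hourly temperatures (deg F) for one winter, indexed by timestamp.
idx = pd.date_range("2016-11-01", periods=24 * 120, freq="h")
temps = pd.Series(50 + 12 * np.sin(2 * np.pi * idx.hour / 24)
                  + rng.normal(0, 4, len(idx)), index=idx)

def chill_hours(hourly_f, months=(11, 12, 1, 2), threshold_f=45.0):
    """Count winter hours with temperature below the 45 deg F threshold."""
    winter = hourly_f[hourly_f.index.month.isin(months)]
    return int((winter < threshold_f).sum())

print("chill hours:", chill_hours(temps))
```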
RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
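For context, a minimal Gillespie-style (direct method) stochastic simulation of a single association reaction A + B -> C. This shows only the generic SSA loop that network-free simulators build on; it is not RuleMonkey's network-free algorithm, which avoids enumerating the reaction network in the first place.

```python
import numpy as np

rng = np.random.default_rng(8)

def gillespie_association(a0=300, b0=200, c0=0, k=0.001, t_end=10.0):
    """Direct-method SSA for the single reaction A + B -> C with rate constant k."""
    t, a, b, c = 0.0, a0, b0, c0
    trajectory = [(t, a, b, c)]
    while t < t_end:
        propensity = k * a * b
        if propensity == 0:
            break                                 # no reactants left
        t += rng.exponential(1.0 / propensity)    # time to next reaction event
        a, b, c = a - 1, b - 1, c + 1             # fire the only reaction
        trajectory.append((t, a, b, c))
    return trajectory

traj = gillespie_association()
print("final state (t, A, B, C):", traj[-1])
```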
Metrics for measuring performance of market transformation initiatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, F.; Schlegel, J.; Grabner, K.
1998-07-01
Regulators have traditionally rewarded utility efficiency programs based on energy and demand savings. Now, many regulators are encouraging utilities and other program administrators to save energy by transforming markets. Prior to achieving sustainable market transformation, the program administrators often must take actions to understand the markets, establish baselines for success, reduce market barriers, build alliances, and build market momentum. Because these activities often precede savings, year-by-year measurement of savings can be an inappropriate measure of near-term success. Because ultimate success in transforming markets is defined in terms of sustainable changes in market structure and practice, traditional measures of success can also be misleading as initiatives reach maturity. This paper reviews early efforts in Massachusetts to develop metrics, or yardsticks, to gauge regulatory rewards for utility market transformation initiatives. From experience in multiparty negotiations, the authors review options for metrics based alternatively on market effects, outcomes, and good faith implementation. Additionally, alternative approaches are explored, based on end-results, interim results, and initial results. The political and practical constraints are described which have thus far led to a preference for one-year metrics, based primarily on good faith implementation. Strategies are offered for developing useful metrics which might be acceptable to regulators, advocates, and program administrators. Finally, they emphasize that the use of market transformation performance metrics is in its infancy. Both regulators and program administrators are encouraged to advance into this area with an experimental mind-set; don't put all the money on one horse until there's more of a track record.
Voxel-based statistical analysis of uncertainties associated with deformable image registration
NASA Astrophysics Data System (ADS)
Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang
2013-09-01
Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI-model. The results show that the Pearson correlation coefficient between UE and DIR error is r_UE-error = 0.50. This is higher than r_IC-error = 0.29 for IC and DIR error and r_ID-error = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is r_UE-error×DVF = 0.62 for the six patients and r_UE-error×DVF = 0.73 for the POPI-model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.
Michael E. Goerndt; Vincente J. Monleon; Hailemariam. Temesgen
2010-01-01
Three sets of linear models were developed to predict several forest attributes, using stand-level and single-tree remote sensing (STRS) light detection and ranging (LiDAR) metrics as predictor variables. The first used only area-level metrics (ALM) associated with first-return height distribution, percentage of cover, and canopy transparency. The second alternative...
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: First, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric in a non-traditional way indirectly, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and a visually and PSNR optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT based metric.
Correlation between centrality metrics and their application to the opinion model
NASA Astrophysics Data System (ADS)
Li, Cong; Li, Qian; Van Mieghem, Piet; Stanley, H. Eugene; Wang, Huijuan
2015-03-01
In recent decades, a number of centrality metrics describing network properties of nodes have been proposed to rank the importance of nodes. In order to understand the correlations between centrality metrics and to approximate a high-complexity centrality metric by a strongly correlated low-complexity metric, we first study the correlation between centrality metrics in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. In addition to considering the widely used centrality metrics, we introduce a new centrality measure, the degree mass. The mth-order degree mass of a node is the sum of the weighted degree of the node and its neighbors no further than m hops away. We find that the betweenness, the closeness, and the components of the principal eigenvector of the adjacency matrix are strongly correlated with the degree, the 1st-order degree mass and the 2nd-order degree mass, respectively, in both network models and real-world networks. We then theoretically prove that the Pearson correlation coefficient between the principal eigenvector and the 2nd-order degree mass is larger than that between the principal eigenvector and a lower order degree mass. Finally, we investigate the effect of the inflexible contrarians selected based on different centrality metrics in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model. Interestingly, we find that selecting the inflexible contrarians based on the leverage, the betweenness, or the degree is more effective in opinion-competition than using other centrality metrics in all types of networks. This observation is supported by our previous observations, i.e., that there is a strong linear correlation between the degree and the betweenness, as well as a high centrality similarity between the leverage and the degree.
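The m-th-order degree mass is defined explicitly in the abstract, so a short sketch is easy to give; this minimal Python/networkx version assumes an unweighted graph, where the weighted degree reduces to the ordinary degree (an assumption, since the paper's networks may carry weights).

    import networkx as nx

    def degree_mass(G, node, m):
        """m-th-order degree mass: the sum of the (weighted) degrees of `node`
        and of all neighbours no further than m hops away."""
        within_m = nx.single_source_shortest_path_length(G, node, cutoff=m)
        return sum(G.degree(v, weight="weight") for v in within_m)

    G = nx.karate_club_graph()
    print(degree_mass(G, 0, 1))  # 1st-order degree mass of node 0
    print(degree_mass(G, 0, 2))  # 2nd-order degree mass of node 0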
2016-01-01
Background: The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. Objective: We assessed which price metric is used by low-income individuals in deciding what to eat. Methods: With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. Results: The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Conclusions: Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. PMID:27655757
Beheshti, Rahmatollah; Igusa, Takeru; Jones-Smith, Jessica
2016-11-01
The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. We assessed which price metric is used by low-income individuals in deciding what to eat. With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. © 2016 American Society for Nutrition.
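The three price metrics compared in this abstract are straightforward to compute; the short sketch below uses made-up label values for a single food item purely to show how the metrics differ.

    # Illustrative (made-up) numbers for one packaged food item.
    price_usd = 3.49
    total_grams = 450.0
    total_kcal = 1800.0
    servings = 4.0

    energy_cost = price_usd / total_kcal      # price per calorie
    unit_price = price_usd / total_grams      # price per gram
    serving_price = price_usd / servings      # price per serving

    print(f"price per calorie: ${energy_cost:.4f}")
    print(f"price per gram:    ${unit_price:.4f}")
    print(f"price per serving: ${serving_price:.2f}")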
Carroll, Carlos; Roberts, David R; Michalak, Julia L; Lawler, Joshua J; Nielsen, Scott E; Stralberg, Diana; Hamann, Andreas; Mcrae, Brad H; Wang, Tongli
2017-11-01
As most regions of the earth transition to altered climatic conditions, new methods are needed to identify refugia and other areas whose conservation would facilitate persistence of biodiversity under climate change. We compared several common approaches to conservation planning focused on climate resilience over a broad range of ecological settings across North America and evaluated how commonalities in the priority areas identified by different methods varied with regional context and spatial scale. Our results indicate that priority areas based on different environmental diversity metrics differed substantially from each other and from priorities based on spatiotemporal metrics such as climatic velocity. Refugia identified by diversity or velocity metrics were not strongly associated with the current protected area system, suggesting the need for additional conservation measures including protection of refugia. Despite the inherent uncertainties in predicting future climate, we found that variation among climatic velocities derived from different general circulation models and emissions pathways was less than the variation among the suite of environmental diversity metrics. To address uncertainty created by this variation, planners can combine priorities identified by alternative metrics at a single resolution and downweight areas of high variation between metrics. Alternately, coarse-resolution velocity metrics can be combined with fine-resolution diversity metrics in order to leverage the respective strengths of the two groups of metrics as tools for identification of potential macro- and microrefugia that in combination maximize both transient and long-term resilience to climate change. Planners should compare and integrate approaches that span a range of model complexity and spatial scale to match the range of ecological and physical processes influencing persistence of biodiversity and identify a conservation network resilient to threats operating at multiple scales. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate
NASA Astrophysics Data System (ADS)
Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.
2017-12-01
Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year were performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics showed only minor differences between the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), whereas model, meteorology, and emissions inventory each played larger roles in the variance. Surface metrics related to O3 were consistently high-biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results is a valuable tool in predicting PM2.5-related impacts. Oftentimes, the configuration that best captured the change of a metric over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge in skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on the application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU), and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
Garcia Castro, Leyla Jael; Berlanga, Rafael; Garcia, Alexander
2015-10-01
Although full-text articles are provided by the publishers in electronic formats, it remains a challenge to find related work beyond the title and abstract context. Identifying related articles based on their abstract is indeed a good starting point; this process is straightforward and does not consume as many resources as full-text based similarity would require. However, further analyses may require in-depth understanding of the full content. Two articles with highly related abstracts can be substantially different regarding the full content. How similarity differs when considering title-and-abstract versus full-text, and which semantic similarity metric provides better results when dealing with full-text articles, are the main issues addressed in this manuscript. We benchmarked three similarity metrics, BM25, PMRA, and Cosine, in order to determine which one performs best when using concept-based annotations on full-text documents. We also evaluated variations in similarity values based on title-and-abstract against those relying on full-text. Our test dataset comprises the Genomics track article collection from the 2005 Text Retrieval Conference. Initially, we used entity recognition software to semantically annotate titles and abstracts as well as full-text with concepts defined in the Unified Medical Language System (UMLS®). For each article, we created a document profile, i.e., a set of identified concepts, term frequency, and inverse document frequency; we then applied various similarity metrics to those document profiles. We considered correlation, precision, recall, and F1 in order to determine which similarity metric performs best with concept-based annotations. For those full-text articles available in PubMed Central Open Access (PMC-OA), we also performed dispersion analyses in order to understand how similarity varies when considering full-text articles. We have found that the PubMed Related Articles similarity metric is the most suitable for full-text articles annotated with UMLS concepts. For similarity values above 0.8, all metrics exhibited an F1 around 0.2 and a recall around 0.1; BM25 showed the highest precision close to 1; in all cases the concept-based metrics performed better than the word-stem-based one. Our experiments show that similarity values vary when considering only title-and-abstract versus full-text similarity. Therefore, analyses based on full-text become useful when a given research effort requires going beyond title and abstract, particularly regarding connectivity across articles. Visualization available at ljgarcia.github.io/semsim.benchmark/, data available at http://dx.doi.org/10.5281/zenodo.13323. Copyright © 2015 Elsevier Inc. All rights reserved.
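Of the three metrics benchmarked above, the cosine metric over concept-based document profiles is the simplest to sketch. The snippet below assumes each profile is a dictionary mapping a concept identifier (for example a UMLS CUI) to a tf-idf weight; the identifiers and weights shown are hypothetical.

    import math

    def cosine_similarity(profile_a, profile_b):
        """Cosine similarity between two concept profiles
        (dicts mapping concept ID -> tf-idf weight)."""
        shared = set(profile_a) & set(profile_b)
        dot = sum(profile_a[c] * profile_b[c] for c in shared)
        norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
        norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    doc1 = {"C0004096": 0.8, "C0011849": 0.3, "C0027651": 0.5}  # hypothetical CUIs/weights
    doc2 = {"C0004096": 0.6, "C0027651": 0.7, "C0023890": 0.2}
    print(f"cosine similarity: {cosine_similarity(doc1, doc2):.3f}")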
NASA Astrophysics Data System (ADS)
Carver, Gary P.
1994-05-01
The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.
Calculation and use of an environment's characteristic software metric set
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Selby, Richard W., Jr.
1985-01-01
Since both cost/quality and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six (51,000 to 112,000 lines) projects. For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.
Initial Ada components evaluation
NASA Technical Reports Server (NTRS)
Moebes, Travis
1989-01-01
The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
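The Halstead and McCabe measures named in this abstract have standard textbook formulas; the sketch below computes a Halstead length and difficulty and a cyclomatic complexity from illustrative counts. The quality thresholds are the ones quoted above, but the counts themselves are invented, not taken from the SSE library.

    def halstead_metrics(n1, n2, N1, N2):
        """Halstead length and difficulty from operator/operand counts:
        n1/n2 = distinct operators/operands, N1/N2 = total occurrences."""
        length = N1 + N2
        difficulty = (n1 / 2.0) * (N2 / n2)
        return length, difficulty

    def cyclomatic_complexity(edges, nodes, components=1):
        """McCabe cyclomatic complexity of a control-flow graph: E - N + 2P."""
        return edges - nodes + 2 * components

    # Illustrative counts for a single routine (not from the SSE library itself).
    length, difficulty = halstead_metrics(n1=18, n2=30, N1=120, N2=95)
    poor_quality = length > 260 or difficulty > 190   # thresholds quoted above
    print(length, round(difficulty, 1), poor_quality)
    print(cyclomatic_complexity(edges=14, nodes=12))  # 4 linearly independent paths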
Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F
2015-07-10
Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25 million+ edges connecting the 60,000+ KOG sequences in half a minute using less than half a gigabyte of memory.
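For orientation, the four edge weights compared above can be sketched from the fields of a single BLAST hit; the exact formulations in the study (for example, how the anchored length for BAL is defined) may differ, so treat these as plausible readings rather than the paper's code.

    import math

    def edge_weights(bit_score, evalue, self_score_a, self_score_b, aln_length):
        """Candidate BLAST-based edge weights for an MCL input graph.
        Formulations here are illustrative, not necessarily those benchmarked above."""
        bsr = bit_score / min(self_score_a, self_score_b)   # bit score ratio
        bal = bit_score / aln_length                        # bit score over aligned length
        # Cap tiny E-values so log10 stays finite (BLAST reports 0.0 below ~1e-180).
        nle = -math.log10(max(evalue, 1e-180))              # negative common log of E-value
        return {"bit": bit_score, "BSR": bsr, "BAL": bal, "NLE": nle}

    print(edge_weights(bit_score=350.0, evalue=1e-95,
                       self_score_a=610.0, self_score_b=580.0, aln_length=180))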
Yan, Chao-Gan; Cheung, Brian; Kelly, Clare; Colcombe, Stan; Craddock, R. Cameron; Di Martino, Adriana; Li, Qingyang; Zuo, Xi-Nian; Castellanos, F. Xavier; Milham, Michael P.
2014-01-01
Functional connectomics is one of the most rapidly expanding areas of neuroimaging research. Yet, concerns remain regarding the use of resting-state fMRI (R-fMRI) to characterize inter-individual variation in the functional connectome. In particular, recent findings that “micro” head movements can introduce artifactual inter-individual and group-related differences in R-fMRI metrics have raised concerns. Here, we first build on prior demonstrations of regional variation in the magnitude of framewise displacements associated with a given head movement, by providing a comprehensive voxel-based examination of the impact of motion on the BOLD signal (i.e., motion-BOLD relationships). Positive motion-BOLD relationships were detected in primary and supplementary motor areas, particularly in low motion datasets. Negative motion-BOLD relationships were most prominent in prefrontal regions, and expanded throughout the brain in high motion datasets (e.g., children). Scrubbing of volumes with FD > 0.2 effectively removed negative but not positive correlations; these findings suggest that positive relationships may reflect neural origins of motion while negative relationships are likely to originate from motion artifact. We also examined the ability of motion correction strategies to eliminate artifactual differences related to motion among individuals and between groups for a broad array of voxel-wise R-fMRI metrics. Residual relationships between motion and the examined R-fMRI metrics remained for all correction approaches, underscoring the need to covary motion effects at the group-level. Notably, global signal regression reduced relationships between motion and inter-individual differences in correlation-based R-fMRI metrics; Z-standardization (mean-centering and variance normalization) of subject-level maps for R-fMRI metrics prior to group-level analyses demonstrated similar advantages. Finally, our test-retest (TRT) analyses revealed significant motion effects on TRT reliability for R-fMRI metrics. Generally, motion compromised reliability of R-fMRI metrics, with the exception of those based on frequency characteristics – particularly, amplitude of low frequency fluctuations (ALFF). The implications of our findings for decision-making regarding the assessment and correction of motion are discussed, as are insights into potential differences among volume-based metrics of motion. PMID:23499792
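One of the correction strategies mentioned above, Z-standardization (mean-centering and variance normalization) of subject-level metric maps before group-level analysis, is easy to sketch; the toy map and mask below are synthetic.

    import numpy as np

    def z_standardize(metric_map, mask):
        """Z-standardize a subject-level R-fMRI metric map within a brain mask
        (mean-centre and variance-normalise) before group-level analysis."""
        vals = metric_map[mask]
        return np.where(mask, (metric_map - vals.mean()) / vals.std(), 0.0)

    # Toy 3D "metric map" (e.g., ALFF or a correlation-based metric) and brain mask.
    rng = np.random.default_rng(0)
    metric_map = rng.normal(1.0, 0.2, size=(4, 4, 4))
    mask = np.ones_like(metric_map, dtype=bool)
    z_map = z_standardize(metric_map, mask)
    print(z_map[mask].mean(), z_map[mask].std())  # approximately 0 and 1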
Selecting a Free Web-Hosted Survey Tool for Student Use
ERIC Educational Resources Information Center
Elbeck, Matt
2014-01-01
This study provides marketing educators a review of free web-based survey services and guidance for student use. A mixed methods approach started with online searches and metrics identifying 13 free web-hosted survey services, described as demonstration or project tools, and ranked using popularity and importance web-based metrics. For each…
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring the agreement with subjective human evaluation.
Shaikh, Faiq; Hendrata, Kenneth; Kolowitz, Brian; Awan, Omer; Shrestha, Rasu; Deible, Christopher
2017-06-01
In the era of value-based healthcare, many aspects of medical care are being measured and assessed to improve quality and reduce costs. Radiology adds enormously to health care costs and is under pressure to adopt a more efficient system that incorporates essential metrics to assess its value and impact on outcomes. Most current systems tie radiologists' incentives and evaluations to RVU-based productivity metrics and peer-review-based quality metrics. In a new potential model, a radiologist's performance will have to increasingly depend on a number of parameters that define "value," beginning with peer review metrics that include referrer satisfaction and feedback from radiologists to the referring physician that evaluates the potency and validity of clinical information provided for a given study. These new dimensions of value measurement will directly impact the cascade of further medical management. We share our continued experience with this project that had two components: RESP (Referrer Evaluation System Pilot) and FRACI (Feedback from Radiologist Addressing Confounding Issues), which were introduced to the clinical radiology workflow in order to capture referrer-based and radiologist-based feedback on radiology reporting. We also share our insight into the principles of design thinking as applied in its planning and execution.
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics for decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
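A minimal sketch of the two families of robustness metrics discussed in this abstract, applied to a made-up ensemble of plan outcomes (lower is better); the particular signal-to-noise variant and the minimax-regret formulation are common choices, not necessarily the exact ones used in the study.

    import numpy as np

    # outcomes[i, j]: performance of candidate plan i in plausible future j
    # (here: lower is better, e.g., expected damages).
    outcomes = np.array([[3.0, 4.0, 9.0],
                         [5.0, 5.0, 6.0],
                         [4.0, 6.0, 7.0]])

    # Satisficing-style metric inspired by the signal-to-noise ratio:
    # combine average performance with dispersion across futures.
    sn_score = outcomes.mean(axis=1) * outcomes.std(axis=1)

    # Regret-based metric: worst-case regret relative to the best plan per future.
    regret = outcomes - outcomes.min(axis=0)
    max_regret = regret.max(axis=1)

    print("signal-to-noise style score per plan:", sn_score.round(2))
    print("maximum regret per plan:             ", max_regret.round(2))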
40 CFR Table N-1 to Subpart N of... - CO2 Emission Factors for Carbonate-Based Raw Materials
Code of Federal Regulations, 2011 CFR
2011-07-01
...-Based Raw Materials N Table N-1 to Subpart N of Part 98 Protection of Environment ENVIRONMENTAL... Raw Materials Carbonate-based raw material—mineral CO2 emission factor a Limestone—CaCO3 0.440 Dolomite... in units of metric tons of CO2 emitted per metric ton of carbonate-based raw material charged to the...
40 CFR Table Mm-2 to Subpart Mm of... - Default Factors for Biomass-Based Fuels and Biomass
Code of Federal Regulations, 2011 CFR
2011-07-01
... Fuels and Biomass MM Table MM-2 to Subpart MM of Part 98 Protection of Environment ENVIRONMENTAL... Biomass-Based Fuels and Biomass Biomass-based fuel and biomass Column A: Density (metric tons/bbl) Column B: Carbon share (% of mass) Column C: Emission factor (metric tons CO2/bbl) Ethanol (100%) 0.1267 52.14 0.2422...
40 CFR Table Mm-2 to Subpart Mm of... - Default Factors for Biomass-Based Fuels and Biomass
Code of Federal Regulations, 2013 CFR
2013-07-01
... Fuels and Biomass MM Table MM-2 to Subpart MM of Part 98 Protection of Environment ENVIRONMENTAL... Biomass-Based Fuels and Biomass Biomass-based fuel and biomass Column A: Density (metric tons/bbl) Column B: Carbon share (% of mass) Column C: Emission factor (metric tons CO2/bbl) Ethanol (100%) 0.1267 52.14 0.2422...
40 CFR Table Mm-2 to Subpart Mm of... - Default Factors for Biomass-Based Fuels and Biomass
Code of Federal Regulations, 2014 CFR
2014-07-01
... Fuels and Biomass MM Table MM-2 to Subpart MM of Part 98 Protection of Environment ENVIRONMENTAL... Biomass-Based Fuels and Biomass Biomass-based fuel and biomass Column A: Density (metric tons/bbl) Column B: Carbon share (% of mass) Column C: Emission factor (metric tons CO2/bbl) Ethanol (100%) 0.1267 52.14 0.2422...
40 CFR Table Mm-2 to Subpart Mm of... - Default Factors for Biomass-Based Fuels and Biomass
Code of Federal Regulations, 2012 CFR
2012-07-01
... Fuels and Biomass MM Table MM-2 to Subpart MM of Part 98 Protection of Environment ENVIRONMENTAL... Biomass-Based Fuels and Biomass Biomass-based fuel and biomass Column A: Density (metric tons/bbl) Column B: Carbon share (% of mass) Column C: Emission factor (metric tons CO2/bbl) Ethanol (100%) 0.1267 52.14 0.2422...
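The ethanol row that survives in these truncated table entries is internally consistent: the Column C emission factor appears to follow from Column A times Column B times the CO2-to-carbon mass ratio (44/12). The short check below makes that explicit; the derivation is an inference from the listed values, not stated in the excerpt.

    # Check of the ethanol row from Table MM-2: density (metric tons/bbl) times
    # carbon share times 44/12 (mass of CO2 per mass of carbon).
    density_t_per_bbl = 0.1267
    carbon_share = 52.14 / 100.0
    co2_per_carbon = 44.0 / 12.0

    emission_factor = density_t_per_bbl * carbon_share * co2_per_carbon
    print(round(emission_factor, 4))  # 0.2422 metric tons CO2/bbl, matching Column C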
Rule groupings: An approach towards verification of expert systems
NASA Technical Reports Server (NTRS)
Mehrotra, Mala
1991-01-01
Knowledge-based expert systems are playing an increasingly important role in NASA space and aircraft systems. However, many of NASA's software applications are life- or mission-critical and knowledge-based systems do not lend themselves to the traditional verification and validation techniques for highly reliable software. Rule-based systems lack the control abstractions found in procedural languages. Hence, it is difficult to verify or maintain such systems. Our goal is to automatically structure a rule-based system into a set of rule-groups having a well-defined interface to other rule-groups. Once a rule base is decomposed into such 'firewalled' units, studying the interactions between rules would become more tractable. Verification-aid tools can then be developed to test the behavior of each such rule-group. Furthermore, the interactions between rule-groups can be studied in a manner similar to integration testing. Such efforts will go a long way towards increasing our confidence in the expert-system software. Our research efforts address the feasibility of automating the identification of rule groups, in order to decompose the rule base into a number of meaningful units.
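One simple way to approximate the rule-grouping idea described here is to link rules that reference common fact templates and take connected components as candidate groups; the sketch below uses hypothetical CLIPS-style rule and template names, and the paper's actual decomposition criteria may be more sophisticated.

    import networkx as nx

    # Hypothetical CLIPS-style rules mapped to the fact templates they reference
    # (in conditions or actions). Rules sharing a template are linked.
    rules = {
        "check-pressure":  {"sensor", "pressure-limit"},
        "raise-alarm":     {"pressure-limit", "alarm"},
        "log-telemetry":   {"sensor", "log-entry"},
        "schedule-upload": {"log-entry", "ground-station"},
        "rotate-antenna":  {"ground-station", "attitude"},
        "update-display":  {"display"},
    }

    G = nx.Graph()
    G.add_nodes_from(rules)
    for r1 in rules:
        for r2 in rules:
            if r1 < r2 and rules[r1] & rules[r2]:
                G.add_edge(r1, r2)

    # Each connected component is a candidate rule group with a narrow interface.
    for group in nx.connected_components(G):
        print(sorted(group))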
Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.
Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C
2015-06-01
Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess whether our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric, which would calculate the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III® simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective blinded-observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.
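The in-zone/out-zone scores described above amount to the share of tracked samples falling inside pre-defined zones; the sketch below assumes evenly spaced position samples (so the sample fraction approximates the time fraction) and a single spherical zone, both of which are simplifications.

    def zone_scores(samples, in_zone):
        """Percentage of tracked instrument-tip samples that fall inside
        pre-defined 'zones'. `samples` is a list of (x, y, z) positions and
        `in_zone` a predicate returning True when a position lies in any zone."""
        inside = sum(1 for p in samples if in_zone(p))
        in_pct = 100.0 * inside / len(samples)
        return in_pct, 100.0 - in_pct

    # Toy example: a single spherical zone of radius 10 mm around the suture site.
    def in_sphere(p, centre=(0.0, 0.0, 0.0), radius=10.0):
        return sum((a - b) ** 2 for a, b in zip(p, centre)) <= radius ** 2

    track = [(1, 2, 0), (12, 0, 0), (3, 3, 3), (0, 9, 1), (20, 5, 2)]
    print(zone_scores(track, in_sphere))  # (60.0, 40.0)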
Towards the XML schema measurement based on mapping between XML and OO domain
NASA Astrophysics Data System (ADS)
Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja
2017-07-01
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still an emerging area. One of the research questions in the overall research guided by the ideas described in this paper is whether already defined object-oriented design metrics can be applied to XML schemas based on predefined mappings. In this paper, basic ideas for the mentioned mapping are presented. This mapping is a prerequisite for setting up the future approach to XML schema quality measurement with object-oriented metrics.
Left-invariant Einstein metrics on S3 × S3
NASA Astrophysics Data System (ADS)
Belgun, Florin; Cortés, Vicente; Haupt, Alexander S.; Lindemann, David
2018-06-01
The classification of homogeneous compact Einstein manifolds in dimension six is an open problem. We consider the remaining open case, namely left-invariant Einstein metrics g on G = SU(2) × SU(2) = S3 × S3. Einstein metrics are critical points of the total scalar curvature functional for fixed volume. The scalar curvature S of a left-invariant metric g is constant and can be expressed as a rational function in the parameters determining the metric. The critical points of S, subject to the volume constraint, are given by the zero locus of a system of polynomials in the parameters. In general, however, the determination of the zero locus is apparently out of reach. Instead, we consider the case where the isotropy group K of g in the group of motions is non-trivial. When K ≇ Z2 we prove that the Einstein metrics on G are given by (up to homothety) either the standard metric or the nearly Kähler metric, based on representation-theoretic arguments and computer algebra. For the remaining case K ≅ Z2 we present partial results.
Measures of model performance based on the log accuracy ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
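A small sketch of the metrics discussed in this abstract, following a common reading of their definitions (MAPE from relative errors; the median symmetric accuracy and symmetric signed percentage bias from the log accuracy ratio); the observation and prediction values are made up.

    import numpy as np

    def accuracy_metrics(pred, obs):
        """MAPE plus two log-accuracy-ratio metrics: median symmetric accuracy
        and symmetric signed percentage bias (all in percent)."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        mape = 100.0 * np.mean(np.abs((pred - obs) / obs))
        log_q = np.log(pred / obs)                       # log accuracy ratio
        msa = 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)
        sspb = 100.0 * np.sign(np.median(log_q)) * (np.exp(np.abs(np.median(log_q))) - 1.0)
        return mape, msa, sspb

    obs = [120.0, 80.0, 200.0, 150.0]
    pred = [150.0, 60.0, 260.0, 160.0]
    print([round(v, 1) for v in accuracy_metrics(pred, obs)])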
NASA Technical Reports Server (NTRS)
Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.;
2011-01-01
In this paper, the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological, or physics-based (e.g., MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
Xue, Xiaobo; Schoen, Mary E; Ma, Xin Cissy; Hawkins, Troy R; Ashbolt, Nicholas J; Cashdollar, Jennifer; Garland, Jay
2015-06-15
Planning for sustainable community water systems requires a comprehensive understanding and assessment of the integrated source-drinking-wastewater systems over their life-cycles. Although traditional life cycle assessment and similar tools (e.g. footprints and emergy) have been applied to elements of these water services (i.e. water resources, drinking water, stormwater or wastewater treatment alone), we argue for the importance of developing and combining the system-based tools and metrics in order to holistically evaluate the complete water service system based on the concept of integrated resource management. We analyzed the strengths and weaknesses of key system-based tools and metrics, and discuss future directions to identify more sustainable municipal water services. Such efforts may include the need for novel metrics that address system adaptability to future changes and infrastructure robustness. Caution is also necessary when coupling fundamentally different tools so as to avoid misunderstanding and consequently misleading decision-making. Published by Elsevier Ltd.
Measures of model performance based on the log accuracy ratio
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
2018-01-03
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B
2014-01-01
Although much attention has recently been focused on single-subject functional networks, using methods such as resting-state functional MRI, methods for constructing single-subject structural networks are in their infancy. Single-subject cortical networks aim to describe the self-similarity across the cortical structure, possibly signifying convergent developmental pathways. Previous methods for constructing single-subject cortical networks have used patch-based correlations and distance metrics based on curvature and thickness. We present here a method for constructing similarity-based cortical structural networks that utilizes a rotation-invariant representation of structure. The resulting graph metrics are closely linked to age and indicate an increasing degree of closeness throughout development in nearly all brain regions, perhaps corresponding to a more regular structure as the brain matures. The derived graph metrics demonstrate a four-fold increase in power for detecting age as compared to cortical thickness. This proof of concept study indicates that the proposed metric may be useful in identifying biologically relevant cortical patterns.
Adaptive data-driven models for estimating carbon fluxes in the Northern Great Plains
Wylie, B.K.; Fosnight, E.A.; Gilmanov, T.G.; Frank, A.B.; Morgan, J.A.; Haferkamp, Marshall R.; Meyers, T.P.
2007-01-01
Rangeland carbon fluxes are highly variable in both space and time. Given the expansive areas of rangelands, how rangelands respond to climatic variation, management, and soil potential is important to understanding carbon dynamics. Rangeland carbon fluxes associated with Net Ecosystem Exchange (NEE) were measured from multiple year data sets at five flux tower locations in the Northern Great Plains. These flux tower measurements were combined with 1-km2 spatial data sets of Photosynthetically Active Radiation (PAR), Normalized Difference Vegetation Index (NDVI), temperature, precipitation, seasonal NDVI metrics, and soil characteristics. Flux tower measurements were used to train and select variables for a rule-based piece-wise regression model. The accuracy and stability of the model were assessed through random cross-validation and cross-validation by site and year. Estimates of NEE were produced for each 10-day period during each growing season from 1998 to 2001. Growing season carbon flux estimates were combined with winter flux estimates to derive and map annual estimates of NEE. The rule-based piece-wise regression model is a dynamic, adaptive model that captures the relationships of the spatial data to NEE as conditions evolve throughout the growing season. The carbon dynamics in the Northern Great Plains proved to be in near equilibrium, serving as a small carbon sink in 1999 and as a small carbon source in 1998, 2000, and 2001. Patterns of carbon sinks and sources are very complex, with the carbon dynamics tilting toward sources in the drier west and toward sinks in the east and near the mountains in the extreme west. Significant local variability exists, which initial investigations suggest are likely related to local climate variability, soil properties, and management.
Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C
To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by the area under the receiver operating characteristic curve (AUC). Of the total 377 included patients, 66% were male, the median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.90), respectively. In vessel-based analyses the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading for detecting hemodynamically significant CAD. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
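The combination step described above, logistic regression over several continuous CTP metrics followed by an AUC, can be sketched with scikit-learn on synthetic stand-in data; this is an in-sample illustration only, not the CORE320 analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in data: rows are patients, columns are quantitative CTP
    # metrics; y is the reference standard (hemodynamically significant CAD).
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

    # Combine the individual metrics into one score, as described above,
    # and assess diagnostic accuracy with the area under the ROC curve.
    model = LogisticRegression().fit(X, y)
    score = model.predict_proba(X)[:, 1]
    print(f"AUC: {roc_auc_score(y, score):.2f}")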
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation than MSD with an eCNR of 0.10, with a mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), compared to a mean DSC of 0.84 and first and third quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.
Impact of region contouring variability on image-based focal therapy evaluation
NASA Astrophysics Data System (ADS)
Gibson, Eli; Donaldson, Ian A.; Shah, Taimur T.; Hu, Yipeng; Ahmed, Hashim U.; Barratt, Dean C.
2016-03-01
Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics. Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast- enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics. Conclusions: 14-23% of evaluation metric variance was attributable to contouring variability (including 6-12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
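A small sketch of the two evaluation metrics named above, computed from binary masks of the target and ablated regions on a common voxel grid; the masks here are synthetic, whereas a real evaluation would use the registered MPMRI contours.

    import numpy as np

    def ablation_metrics(target_mask, ablated_mask, voxel_volume_mm3=1.0):
        """Proportion of the target region that was ablated, and the
        target:ablated volume ratio, from boolean voxel masks."""
        target_v = target_mask.sum() * voxel_volume_mm3
        ablated_v = ablated_mask.sum() * voxel_volume_mm3
        covered_v = np.logical_and(target_mask, ablated_mask).sum() * voxel_volume_mm3
        return covered_v / target_v, target_v / ablated_v

    # Synthetic example: a target sphere partially covered by a shifted ablation sphere.
    z, y, x = np.ogrid[:40, :40, :40]
    target = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 <= 8 ** 2
    ablated = (z - 20) ** 2 + (y - 20) ** 2 + (x - 24) ** 2 <= 10 ** 2
    proportion, ratio = ablation_metrics(target, ablated)
    print(f"proportion of target ablated: {proportion:.2f}, target:ablated ratio: {ratio:.2f}")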
Jarc, Anthony M; Curet, Myriam J
2017-03-01
Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as well as efficiency metrics. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
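The camera metrics named above (movement frequency, duration, and interval) can be sketched from a log of camera-adjustment start/stop times; the exact definitions and units used in the study may differ, so the function below is only a plausible reading.

    def camera_metrics(events, task_time):
        """Toy camera-adjustment metrics from a list of (start, stop) times in
        seconds during one exercise; exact definitions in the study may differ."""
        durations = [stop - start for start, stop in events]
        intervals = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
        return {
            "movement_frequency_per_min": 60.0 * len(events) / task_time,
            "mean_movement_duration_s": sum(durations) / len(durations),
            "mean_movement_interval_s": sum(intervals) / len(intervals),
        }

    events = [(5.0, 6.5), (20.0, 21.0), (48.0, 50.0), (90.0, 91.5)]  # hypothetical log
    print(camera_metrics(events, task_time=120.0))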
The metrics and correlates of physician migration from Africa.
Arah, Onyebuchi A
2007-05-17
Physician migration from poor to rich countries is considered an important contributor to the growing health workforce crisis in the developing world. This is particularly true for Africa. The perceived magnitude of such migration for each source country might, however, depend on the choice of metrics used in the analysis. This study examined the influence of the choice of migration metrics on the rankings of African countries that suffered the most physician migration, and investigated the correlates of physician migration. Ranking and correlational analyses were conducted on African physician migration data adjusted for bilateral net flows, and supplemented with developmental, economic and health system data. The setting was the 53 African birth countries of African-born physicians working in nine wealthier destination countries. Three metrics of physician migration were used: total number of physician émigrés; emigration fraction, defined as the proportion of the potential physician pool working in destination countries; and physician migration density, defined as the number of physician émigrés per 1000 population of the African source country. Rankings based on any of the migration metrics differed substantially from those based on the other two metrics. Although the emigration fraction and physician migration density metrics gave proportionality to the migration crisis, only the latter was consistently associated with source countries' workforce capacity, health, health spending, economic and development characteristics. As such, higher physician migration density was seen among African countries with relatively higher health workforce capacity (0.401 ≤ r ≤ 0.694, p ≤ 0.011), health status, health spending, and development. The perceived magnitude of physician migration is sensitive to the choice of metrics. Complementing the emigration fraction, the physician migration density is a metric which gives a different but proportionate picture of which African countries stand to lose relatively more of their physicians with unchecked migration. The nature of health policies geared at health-worker migration can be expected to depend on the choice of migration metrics.
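The two relative metrics defined in this abstract reduce to simple arithmetic; the numbers below are invented for a single hypothetical source country.

    # Illustrative (made-up) figures for one source country.
    emigre_physicians = 1200        # physicians born in the country but working abroad
    home_physicians = 4800          # physicians still working in the source country
    population = 20_000_000

    emigration_fraction = emigre_physicians / (emigre_physicians + home_physicians)
    migration_density = 1000.0 * emigre_physicians / population   # émigrés per 1000 population

    print(f"emigration fraction: {emigration_fraction:.1%}")
    print(f"migration density:   {migration_density:.3f} per 1000 population")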
NASA Astrophysics Data System (ADS)
von Schneidemesser, E.; Schmale, J.; Van Aardenne, J.
2013-12-01
Air pollution and climate change are often treated at national and international level as separate problems under different regulatory or thematic frameworks and different policy departments. With air pollution and climate change being strongly linked with regard to their causes, effects and mitigation options, the integration of policies that steer air pollutant and greenhouse gas emission reductions might result in cost-efficient, more effective and thus more sustainable tackling of the two problems. To support informed decision making and to work towards an integrated air quality and climate change mitigation policy requires the identification, quantification and communication of present-day and potential future co-benefits and trade-offs. The identification of co-benefits and trade-offs requires the application of appropriate metrics that are well rooted in science, easy to understand and reflect the needs of policy, industry and the public for informed decision making. For the purpose of this workshop, metrics were loosely defined as a quantified measure of effect or impact used to inform decision-making and to evaluate mitigation measures. The workshop held on October 9 and 10 and co-organized between the European Environment Agency and the Institute for Advanced Sustainability Studies brought together representatives from science, policy, NGOs, and industry to discuss whether current available metrics are 'fit for purpose' or whether there is a need to develop alternative metrics or reassess the way current metrics are used and communicated. Based on the workshop outcome the presentation will (a) summarize the informational needs and current application of metrics by the end-users, who, depending on their field and area of operation might require health, policy, and/or economically relevant parameters at different scales, (b) provide an overview of the state of the science of currently used and newly developed metrics, and the scientific validity of these metrics, (c) identify gaps in the current information base, whether from the scientific development of metrics or their application by different users.
GRC GSFC TDRSS Waveform Metrics Report
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.
2013-01-01
The report presents software metrics and porting metrics for the GGT Waveform. The porting was from a ground-based COTS SDR, the SDR-3000, to the CoNNeCT JPL SDR. The report does not address any of the Operating Environment (OE) software development, nor the original TDRSS waveform development at GSFC for the COTS SDR. With regard to STRS, the report presents compliance data and lessons learned.
Improving Department of Defense Global Distribution Performance Through Network Analysis
2016-06-01
Report documentation fragment: subject terms include supply chain metrics, distribution networks, requisition shipping time, and strategic distribution database. USTRANSCOM's Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes-based supply chain metrics. The Joint Staff defines a TDD standard as the maximum number of days the supply chain can take to deliver requisitioned materiel.
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish comparisons. However, an accurate assessment of visual quality is hard to scale, and it requires the validation of these metrics for different types of applications. In order to do this, human perception based validation methods have been developed, particularly dealing with the use of receiver operating characteristics (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
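The spatial frequency (SF) measure mentioned in the results can be written down compactly. The sketch below uses the common RMS-of-first-differences formulation of spatial frequency, which is an assumption and may differ in detail from the authors' implementation.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a grayscale image: sqrt(RF^2 + CF^2), where RF and CF
    are the RMS first differences along the horizontal and vertical directions."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency (horizontal differences)
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency (vertical differences)
    return np.sqrt(rf ** 2 + cf ** 2)

# Illustrative check: blurring an image lowers its spatial frequency,
# so a better-focused fusion result tends to score higher on SF.
rng = np.random.default_rng(0)
detail = rng.normal(size=(64, 64))
smooth = (detail[:-1, :-1] + detail[1:, :-1] + detail[:-1, 1:] + detail[1:, 1:]) / 4.0  # crude 2x2 box blur
print(spatial_frequency(smooth) < spatial_frequency(detail))   # expected: True
```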
SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smyth, G; Bamber, JC; Bedford, JL
2015-06-15
Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
NASA Astrophysics Data System (ADS)
Shao, G.; Gallion, J.; Fei, S.
2016-12-01
Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainties in biomass estimates have been reported in temperate hardwood and hardwood-dominated mixed forests. Varied site productivities in temperate hardwood forests largely diversify height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to utilize height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model regardless of the variation in site productivity. In this study, we established a multi-dimensional nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics were discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best-fit model. The effects of site productivity on the lidar-based biomass model were addressed.
Discrete tyre model application for evaluation of vehicle limit handling performance
NASA Astrophysics Data System (ADS)
Siramdasu, Y.; Taheri, S.
2016-11-01
The goal of this study is twofold: first, to understand the transient and nonlinear effects of anti-lock braking systems (ABS), road undulations and driving dynamics on the lateral performance of the tyre and, second, to develop objective handling manoeuvres and respective metrics to characterise these effects on vehicle behaviour. For studying the transient and nonlinear handling performance of the vehicle, the variations of the relaxation length of the tyre and tyre inertial properties play significant roles [Pacejka HB. Tire and vehicle dynamics. 3rd ed. Butterworth-Heinemann; 2012]. Accurately simulating these nonlinear effects during high-frequency vehicle dynamic manoeuvres requires a high-frequency dynamic tyre model (? Hz). A 6 DOF dynamic tyre model integrated with an enveloping model is developed and validated using fixed-axle high-speed oblique cleat experimental data. The commercially available vehicle dynamics software CarSim® is used for vehicle simulation. The vehicle model was validated by comparing simulation results with experimental sinusoidal steering tests. The validated tyre model is then integrated with the vehicle model and a commercial grade rule-based ABS model to perform various objective simulations. Two test scenarios of ABS braking in a turn on a smooth road and accelerating in a turn on uneven and smooth roads are considered. Both test cases reiterated that while the tyre is operating in the nonlinear region of slip or slip angle, any road disturbance or high-frequency brake torque input variation can excite the inertial belt vibrations of the tyre. It is shown that these inertial vibrations can directly affect the developed performance metrics and potentially degrade the handling performance of the vehicle.
Design, Simulation and Fabrication of Triaxial MEMS High Shock Accelerometer.
Zhang, Zhenhai; Shi, Zhiguo; Yang, Zhan; Xie, Zhihong; Zhang, Donghong; Cai, De; Li, Kejie; Shen, Yajing
2015-04-01
Based on an analysis of the disadvantages of other structural accelerometers, a three-axis high-g MEMS piezoresistive accelerometer was proposed for application in high-shock testing. The accelerometer's structure and working principle are discussed in detail. The simulation results show that the three-axis high-shock MEMS accelerometer can withstand high shock. After bearing high-shock impact in a high-shock shooting test, the three-axis high-shock MEMS accelerometer obtained intact measurement information on the penetration process and maintained measurement accuracy over the high shock load range, so that not only the laws of stress wave propagation and penetration during penetration of the missile body can be analyzed, but also testing technology for burst-point control can be furnished. The accelerometer has wide-ranging application in recording the typical data of a projectile penetrating a hard target, and it provides technological guarantees for both penetration studies and defence engineering.
Spatial modelling of landscape aesthetic potential in urban-rural fringes.
Sahraoui, Yohan; Clauzel, Céline; Foltête, Jean-Christophe
2016-10-01
The aesthetic potential of landscape has to be modelled to provide tools for land-use planning. This involves identifying landscape attributes and revealing individuals' landscape preferences. Landscape aesthetic judgments of individuals (n = 1420) were studied by means of a photo-based survey. A set of landscape visibility metrics was created to measure landscape composition and configuration in each photograph using spatial data. These metrics were used as explanatory variables in multiple linear regressions to explain aesthetic judgments. We demonstrate that landscape aesthetic judgments may be synthesized in three consensus groups. The statistical results obtained show that landscape visibility metrics have good explanatory power. Ultimately, we propose a spatial modelling of landscape aesthetic potential based on these results combined with systematic computation of visibility metrics.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed and several critical image quality issues identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method is applied to the detection of relatively static targets (where the speckle jitter frequency is less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, under some conditions the metric can estimate the spot size itself. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system to optimize its focusing performance.
Resilience Metrics for the Electric Power System: A Performance-Based Approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto
Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ‖b − a‖₂. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
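As a rough illustration of the functional being minimized, the hedged sketch below evaluates the entropy path integral along the straight-line path between two probability vectors. This only upper-bounds the minimal entropy distance and is not the authors' Newton/boundary-value method; the choice of natural logarithm and the example vectors are assumptions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i) (natural log assumed here)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def straight_line_entropy_length(a, b, n=2000):
    """Path integral of H along the straight line from a to b inside the simplex.

    This upper-bounds the minimal-entropy path distance described above, since the
    optimal path generally differs from the straight line."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    ts = np.linspace(0.0, 1.0, n)
    hs = np.array([entropy((1.0 - t) * a + t * b) for t in ts])
    speed = np.linalg.norm(b - a)      # |p'(t)| is constant along a straight line
    return float(hs.mean() * speed)    # simple Riemann approximation of the integral

# Illustrative DNA frequency profiles over (A, C, G, T).
a = np.array([0.30, 0.20, 0.20, 0.30])
b = np.array([0.10, 0.40, 0.40, 0.10])
print(straight_line_entropy_length(a, b), np.linalg.norm(b - a))
```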
Large margin nearest neighbor classifiers.
Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing
2005-07-01
The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
Darrow, Lyndsey A; Klein, Mitchel; Sarnat, Jeremy A; Mulholland, James A; Strickland, Matthew J; Sarnat, Stefanie E; Russell, Armistead G; Tolbert, Paige E
2011-01-01
Various temporal metrics of daily pollution levels have been used to examine the relationships between air pollutants and acute health outcomes. However, daily metrics of the same pollutant have rarely been systematically compared within a study. In this analysis, we describe the variability of effect estimates attributable to the use of different temporal metrics of daily pollution levels. We obtained hourly measurements of ambient particulate matter (PM₂.₅), carbon monoxide (CO), nitrogen dioxide (NO₂), and ozone (O₃) from air monitoring networks in 20-county Atlanta for the time period 1993-2004. For each pollutant, we created (1) a daily 1-h maximum; (2) a 24-h average; (3) a commute average; (4) a daytime average; (5) a nighttime average; and (6) a daily 8-h maximum (only for O₃). Using Poisson generalized linear models, we examined associations between daily counts of respiratory emergency department visits and the previous day's pollutant metrics. Variability was greatest across O₃ metrics, with the 8-h maximum, 1-h maximum, and daytime metrics yielding strong positive associations and the nighttime O₃ metric yielding a negative association (likely reflecting confounding by air pollutants oxidized by O₃). With the exception of daytime metric, all of the CO and NO₂ metrics were positively associated with respiratory emergency department visits. Differences in observed associations with respiratory emergency room visits among temporal metrics of the same pollutant were influenced by the diurnal patterns of the pollutant, spatial representativeness of the metrics, and correlation between each metric and copollutant concentrations. Overall, the use of metrics based on the US National Ambient Air Quality Standards (for example, the use of a daily 8-h maximum O₃ as opposed to a 24-h average metric) was supported by this analysis. Comparative analysis of temporal metrics also provided insight into underlying relationships between specific air pollutants and respiratory health.
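A hedged sketch of how the six temporal metrics described above could be derived from an hourly series with pandas; the commute, daytime and nighttime hour windows used here are assumptions and may not match the study's definitions, and the synthetic ozone series is purely illustrative.

```python
import numpy as np
import pandas as pd

# Synthetic hourly ozone series (illustrative only).
idx = pd.date_range("2000-06-01", periods=31 * 24, freq="h")
rng = np.random.default_rng(1)
o3 = pd.Series(30 + 20 * np.sin(2 * np.pi * (idx.hour - 6) / 24) + rng.normal(0, 5, len(idx)),
               index=idx, name="o3_ppb")

daily = pd.DataFrame({
    "max_1h":  o3.resample("D").max(),                     # daily 1-h maximum
    "avg_24h": o3.resample("D").mean(),                    # 24-h average
    # assumed windows; the study's exact commute/daytime/nighttime hours may differ
    "commute": o3[o3.index.hour.isin([7, 8, 9, 16, 17, 18])].resample("D").mean(),
    "daytime": o3[(o3.index.hour >= 8) & (o3.index.hour <= 19)].resample("D").mean(),
    "night":   o3[(o3.index.hour <= 6) | (o3.index.hour >= 22)].resample("D").mean(),
    # trailing 8-h windows; regulatory 8-h maximum definitions differ in detail
    "max_8h":  o3.rolling(8).mean().resample("D").max(),
})
print(daily.head())
```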
Concussion Incidence in Professional Football
Nathanson, John T.; Connolly, James G.; Yuk, Frank; Gometz, Alex; Rasouli, Jonathan; Lovell, Mark; Choudhri, Tanvir
2016-01-01
Background: In the United States alone, millions of athletes participate in sports with potential for head injury each year. Although poorly understood, possible long-term neurological consequences of repetitive sports-related concussions have received increased recognition and attention in recent years. A better understanding of the risk factors for concussion remains a public health priority. Despite the attention focused on mild traumatic brain injury (mTBI) in football, gaps remain in the understanding of the optimal methodology to determine concussion incidence and position-specific risk factors. Purpose: To calculate the rates of concussion in professional football players using established and novel metrics on a group and position-specific basis. Study Design: Case-control study; Level of evidence, 3. Methods: Athletes from the 2012-2013 and 2013-2014 National Football League (NFL) seasons were included in this analysis of publicly available data. Concussion incidence rates were analyzed using established (athlete exposure [AE], game position [GP]) and novel (position play [PP]) metrics cumulatively, by game unit and position type (offensive skill players and linemen, defensive skill players and linemen), and by position. Results: In 480 games, there were 292 concussions, resulting in 0.61 concussions per game (95% CI, 0.54-0.68), 6.61 concussions per 1000 AEs (95% CI, 5.85-7.37), 1.38 concussions per 100 GPs (95% CI, 1.22-1.54), and 0.17 concussions per 1000 PPs (95% CI, 0.15-0.19). Depending on the method of calculation, the relative order of at-risk positions changed. In addition, using the PP metric, offensive skill players had a significantly greater rate of concussion than offensive linemen, defensive skill players, and defensive linemen (P < .05). Conclusion: For this study period, concussion incidence by position and unit varied depending on which metric was used. Compared with AE and GP, the PP metric found that the relative risk of concussion for offensive skill players was significantly greater than other position types. The strengths and limitations of various concussion incidence metrics need further evaluation. Clinical Relevance: A better understanding of the relative risks of the different positions/units is needed to help athletes, team personnel, and medical staff make optimal player safety decisions and enhance rules and equipment. PMID:26848481
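A small sketch of the incidence-rate arithmetic behind metrics such as concussions per game or per 1000 athlete-exposures, with a normal-approximation Poisson confidence interval. The 292 concussions and 480 games come from the abstract; the athlete-exposure denominator below is a hypothetical placeholder, not the paper's value.

```python
from math import sqrt

def poisson_rate_ci(events, exposure, scale=1.0, z=1.96):
    """Rate per `scale` units of exposure with a normal-approximation 95% CI,
    rate +/- z * sqrt(events) / exposure (adequate for large counts)."""
    rate = events / exposure * scale
    half = z * sqrt(events) / exposure * scale
    return rate, rate - half, rate + half

# 292 concussions over 480 games (from the abstract); the 44,000 athlete-exposures
# below are hypothetical, used only to show the per-1000-AE calculation.
print(poisson_rate_ci(292, 480, scale=1))        # concussions per game
print(poisson_rate_ci(292, 44_000, scale=1000))  # concussions per 1000 athlete-exposures
```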
Resistance and Security Index of Networks: Structural Information Perspective of Network Security
NASA Astrophysics Data System (ADS)
Li, Angsheng; Hu, Qifu; Liu, Jun; Pan, Yicheng
2016-06-01
Recently, Li and Pan defined the metric of the K-dimensional structure entropy of a structured noisy dataset G to be the information that controls the formation of the K-dimensional structure of G that is evolved by the rules, order and laws of G, excluding the random variations that occur in G. Here, we propose the notion of resistance of networks based on the one- and two-dimensional structural information of graphs. Given a graph G, we define the resistance of G, written R(G), as the greatest overall number of bits required to determine the code of the module that is accessible via random walks with stationary distribution in G, from which the random walks cannot escape. We show that the resistance of networks follows the resistance law of networks, that is, for a network G, the resistance of G is R(G) = H¹(G) − H²(G), where H¹(G) and H²(G) are the one- and two-dimensional structure entropies of G, respectively. Based on the resistance law, we define the security index of a network G to be the normalised resistance of G. We show that the resistance and security index are both well-defined measures for the security of networks.
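The hedged sketch below computes the one-dimensional structure entropy and a partition-conditional two-dimensional structure entropy of a small graph, and takes their difference as an approximation of the resistance. The formulas follow one reading of Li and Pan's structural information, the partition is fixed rather than optimized (the true two-dimensional entropy minimizes over all partitions), and the example graph and module labels are illustrative.

```python
import math
from collections import defaultdict

def structure_entropies(edges, partition):
    """One-dimensional structure entropy H1(G) of an undirected graph, and the
    two-dimensional structure entropy H_P(G) for a *given* partition P
    (dict: node -> module). The true H2(G) minimises H_P over all partitions;
    this sketch evaluates only one candidate partition."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    two_m = sum(deg.values())                      # twice the number of edges

    h1 = -sum((d / two_m) * math.log2(d / two_m) for d in deg.values())

    vol = defaultdict(int)                         # module volume = sum of member degrees
    cut = defaultdict(int)                         # edges with exactly one endpoint in module
    for u, d in deg.items():
        vol[partition[u]] += d
    for u, v in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1

    h_p = 0.0
    for u, d in deg.items():                       # uncertainty of locating a node within its module
        vj = vol[partition[u]]
        h_p -= (vj / two_m) * (d / vj) * math.log2(d / vj)
    for j, vj in vol.items():                      # cost of determining which module a walk enters
        h_p -= (cut[j] / two_m) * math.log2(vj / two_m)
    return h1, h_p

# Two triangles joined by one bridge edge; the modules are the two triangles.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
h1, h2_p = structure_entropies(edges, partition)
print("H1:", round(h1, 3), "H_P:", round(h2_p, 3), "resistance (approx.):", round(h1 - h2_p, 3))
```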
Comparison of physically- and economically-based CO2-equivalences for methane
NASA Astrophysics Data System (ADS)
Boucher, O.
2012-05-01
There is a controversy on the role methane (and other short-lived species) should play in climate mitigation policies, and there is no consensus on what an optimal methane CO2-equivalence should be. We revisit this question by discussing some aspects of physically-based (i.e. global- warming potential or GWP and global temperature change potential or GTP) and socio-economically-based climate metrics. To this effect we use a simplified global damage potential (GDP) that was introduced by earlier authors and investigate the uncertainties in the methane CO2-equivalence that arise from physical and socio-economic factors. The median value of the methane GDP comes out very close to the widely used methane 100-yr GWP because of various compensating effects. However, there is a large spread in possible methane CO2-equivalences from this metric (1-99% interval: 10.0-42.5; 5-95% interval: 12.5-38.0) that is essentially due to the choice in some socio-economic parameters (i.e. the damage cost function and the discount rate). The main factor differentiating the methane 100-yr GTP from the methane 100-yr GWP and the GDP is the fact that the former metric is an end-point metric, whereas the latter are cumulative metrics. There is some rationale for an increase in the methane CO2-equivalence in the future as global warming unfolds, as implied by a convex damage function in the case of the GDP metric. We also show that a methane CO2-equivalence based on a pulse emission is sufficient to inform multi-year climate policies and emissions reductions, as long as there is enough visibility on CO2 prices and CO2-equivalences for the stakeholders.
Nichols, John W.; Hubbart, Jason A.; Poulton, Barry C.
2016-01-01
Characterizing the impacts of hydrologic alterations, pollutants, and habitat degradation on macroinvertebrate species assemblages is of critical value for managers wishing to categorize stream ecosystem condition. A combination of approaches including trait-based metrics and traditional bioassessments provides greater information, particularly in anthropogenic stream ecosystems where traditional approaches can be confounded by variously interacting land use impacts. Macroinvertebrates were collected from two rural and three urban nested study sites in central Missouri, USA during the spring and fall seasons of 2011. Land use responses of conventional taxonomic and trait-based metrics were compared to streamflow indices, physical habitat metrics, and water quality indices. Results show that biotic index was significantly different (p < 0.05) between sites with differences detected in 54 % of trait-based metrics. The most consistent response to urbanization was observed in size metrics, with significantly (p < 0.05) fewer small bodied organisms. Increases in fine streambed sediment, decreased submerged woody rootmats, significantly higher winter Chloride concentrations, and decreased mean suspended sediment particle size in lower urban stream reaches also influenced macroinvertebrate assemblages. Riffle habitats in urban reaches contained 21 % more (p = 0.03) multivoltine organisms, which was positively correlated to the magnitude of peak flows (r2 = 0.91, p = 0.012) suggesting that high flow events may serve as a disturbance in those areas. Results support the use of macroinvertebrate assemblages and multiple stressors to characterize urban stream system condition and highlight the need to better understand the complex interactions of trait-based metrics and anthropogenic aquatic ecosystem stressors.
Categorization of hyperspectral information (HSI) based on the distribution of spectra in hyperspace
NASA Astrophysics Data System (ADS)
Resmini, Ronald G.
2003-09-01
Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, often the design of HSI collection experiments is based on the acquisition of data of the various backgrounds or of objects of interest within the various terrain types. These data are for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of the backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively, in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together; e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five are comprised of two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed as are recommendations for future work.
A reservoir morphology database for the conterminous United States
Rodgers, Kirk D.
2017-09-13
The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
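A short sketch of standard limnological morphology indices of the kind listed above (shoreline development index, development of volume, index of basin permanence). The exact formulations and unit conventions used in the Reservoir Morphology Database may differ; in particular the unit choice for the index of basin permanence below is an assumption.

```python
import math

def shoreline_development_index(shoreline_length_m, surface_area_m2):
    """SDI = L / (2 * sqrt(pi * A)): shoreline length relative to the circumference
    of a circle of equal area; 1 for a circular basin, much larger for dendritic ones."""
    return shoreline_length_m / (2.0 * math.sqrt(math.pi * surface_area_m2))

def development_of_volume(mean_depth_m, max_depth_m):
    """Volume development = 3 * (mean depth / maximum depth); compares the basin
    shape to a cone with the same surface area and maximum depth."""
    return 3.0 * mean_depth_m / max_depth_m

def index_of_basin_permanence(volume_m3, shoreline_length_m):
    """Index of basin permanence, taken here as volume per unit shoreline length
    (m^3 per km of shoreline); unit conventions vary across sources."""
    return volume_m3 / (shoreline_length_m / 1000.0)

# Illustrative reservoir (hypothetical dimensions).
print(round(shoreline_development_index(45_000, 6_000_000), 2))
print(round(development_of_volume(4.2, 15.0), 2))
print(round(index_of_basin_permanence(2.5e7, 45_000), 1))
```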
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Griffin, John Clark
2015-01-01
The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology of measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
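A hedged sketch of the kind of MTF calculation the report advocates: the modulation transfer function as the normalized Fourier magnitude of a sampled line spread function. The Gaussian LSF and sampling pitch are illustrative assumptions, not the report's measurement procedure.

```python
import numpy as np

def mtf_from_lsf(lsf, dx_mm):
    """Modulation transfer function from a sampled line spread function:
    MTF(f) = |FFT(LSF)| normalised to 1 at zero frequency; frequencies in cycles/mm."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                          # normalise area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)     # spatial frequencies (cycles/mm)
    return freqs, mtf

# Illustrative Gaussian LSF sampled at 0.01 mm pitch; a wider LSF rolls off faster.
x = np.arange(-2.0, 2.0, 0.01)
lsf = np.exp(-0.5 * (x / 0.05) ** 2)
freqs, mtf = mtf_from_lsf(lsf, dx_mm=0.01)
print(freqs[mtf < 0.5][0])   # approximate MTF50 frequency in cycles/mm
```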
The Metric System of Measurement (SI). Federal Register Notice of December 10, 1976.
ERIC Educational Resources Information Center
National Bureau of Standards (DOC), Washington, DC.
This document provides a diagram illustrating the relationships between base units in the metric system and derived units with special names. Twenty-one derived units are included. The base units used are: measures of mass, length, time, amount of substance, electric current, thermodynamic temperature, luminous intensity, and plane and solid…
Qian, Hong; Chen, Shengbin; Zhang, Jin-Long
2017-07-17
Niche-based and neutrality-based theories are two major classes of theories explaining the assembly mechanisms of local communities. Both theories have been frequently used to explain species diversity and composition in local communities but their relative importance remains unclear. Here, we analyzed 57 assemblages of angiosperm trees in 0.1-ha forest plots across China to examine the effects of environmental heterogeneity (relevant to niche-based processes) and spatial contingency (relevant to neutrality-based processes) on phylogenetic structure of angiosperm tree assemblages distributed across a wide range of environment and space. Phylogenetic structure was quantified with six phylogenetic metrics (i.e., phylogenetic diversity, mean pairwise distance, mean nearest taxon distance, and the standardized effect sizes of these three metrics), which emphasize on different depths of evolutionary histories and account for different degrees of species richness effects. Our results showed that the variation in phylogenetic metrics explained independently by environmental variables was on average much greater than that explained independently by spatial structure, and the vast majority of the variation in phylogenetic metrics was explained by spatially structured environmental variables. We conclude that niche-based processes have played a more important role than neutrality-based processes in driving phylogenetic structure of angiosperm tree species in forest communities in China.
Improving clinical models based on knowledge extracted from current datasets: a new approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Morais, J
2016-08-01
Cardiovascular diseases (CVD) are the leading cause of death in the world, and prevention is recognized as a key intervention able to counter this reality. In this context, although there are several models and scores currently used in clinical practice to assess the risk of a new cardiovascular event, they present some limitations. The goal of this paper is to improve CVD risk prediction by taking into account the current models as well as information extracted from real and recent datasets. The approach is based on a decision tree scheme in order to assure the clinical interpretability of the model. An innovative optimization strategy is developed in order to adjust the decision tree thresholds (the rule structure is fixed) based on recent clinical datasets. A real dataset collected in the ambit of the National Registry on Acute Coronary Syndromes, Portuguese Society of Cardiology, is applied to validate this work. In order to assess the performance of the new approach, the metrics sensitivity, specificity and accuracy are used. The new approach achieves sensitivity, specificity and accuracy values of 80.52%, 74.19% and 77.27%, respectively, which represents an improvement of about 26% in relation to the accuracy of the original score.
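A minimal sketch of the evaluation metrics and of a brute-force adjustment of a single decision-tree cut-off. It stands in only loosely for the paper's optimization strategy; the blood-pressure variable, the threshold grid, and the data are hypothetical.

```python
import numpy as np

def sens_spec_acc(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = event)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

def tune_threshold(feature, y_true, candidates):
    """Pick the cut-off on one risk variable that maximises accuracy, mimicking
    (very loosely) the adjustment of a single decision-tree threshold."""
    best = max(candidates, key=lambda t: sens_spec_acc(y_true, feature >= t)[2])
    return best, sens_spec_acc(y_true, feature >= best)

# Hypothetical data: systolic blood pressure vs. event outcome.
rng = np.random.default_rng(2)
sbp = rng.normal(140, 20, 500)
y = (sbp + rng.normal(0, 25, 500) > 155).astype(int)
thr, (sens, spec, acc) = tune_threshold(sbp, y, candidates=np.arange(120, 181, 1))
print(thr, round(sens, 3), round(spec, 3), round(acc, 3))
```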
2004-06-01
Table-of-contents fragment: EBO cognitive or memetic input types; unanticipated EBO-generated effects; memetic effects-based COA; policy; belief systems or memetic content metrics.
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
Advanced Life Support Research and Technology Development Metric
NASA Technical Reports Server (NTRS)
Hanford, A. J.
2004-01-01
The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.
Double metric, generalized metric, and α' -deformed double field theory
NASA Astrophysics Data System (ADS)
Hohm, Olaf; Zwiebach, Barton
2016-03-01
We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.
Cognitive context detection in UAS operators using eye-gaze patterns on computer screens
NASA Astrophysics Data System (ADS)
Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph
2016-05-01
In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
An objective method for a video quality evaluation in a 3DTV service
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2015-09-01
The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric. The metric is based on an advanced approach to stereoscopic videostream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of its performance under simulated environmental conditions, are discussed herein. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.
A meta-analysis of asbestos-related cancer risk that addresses fiber size and mineral type.
Berman, D Wayne; Crump, Kenny S
2008-01-01
Quantitative estimates of the risk of lung cancer or mesothelioma in humans from asbestos exposure made by the U.S. Environmental Protection Agency (EPA) make use of estimates of potency factors based on phase-contrast microscopy (PCM) and obtained from cohorts exposed to asbestos in different occupational environments. These potency factors exhibit substantial variability. The most likely reasons for this variability appear to be differences among environments in fiber size and mineralogy not accounted for by PCM. In this article, the U.S. Environmental Protection Agency (EPA) models for asbestos-related lung cancer and mesothelioma are expanded to allow the potency of fibers to depend upon their mineralogical types and sizes. This is accomplished by positing exposure metrics composed of nonoverlapping fiber categories and assigning each category its own unique potency. These category-specific potencies are estimated in a meta-analysis that fits the expanded models to potencies for lung cancer (KL's) or mesothelioma (KM's) based on PCM that were calculated for multiple epidemiological studies in our previous paper (Berman and Crump, 2008). Epidemiological study-specific estimates of exposures to fibers in the different fiber size categories of an exposure metric are estimated using distributions for fiber size based on transmission electron microscopy (TEM) obtained from the literature and matched to the individual epidemiological studies. The fraction of total asbestos exposure in a given environment respectively represented by chrysotile and amphibole asbestos is also estimated from information in the literature for that environment. Adequate information was found to allow KL's from 15 epidemiological studies and KM's from 11 studies to be included in the meta-analysis. Since the range of exposure metrics that could be considered was severely restricted by limitations in the published TEM fiber size distributions, it was decided to focus attention on four exposure metrics distinguished by fiber width: "all widths," widths > 0.2 µm, widths < 0.4 µm, and widths < 0.2 µm, each of which has historical relevance. Each such metric defined by width was composed of four categories of fibers: chrysotile or amphibole asbestos with lengths between 5 µm and 10 µm or longer than 10 µm. Using these metrics, three parameters were estimated for lung cancer and, separately, for mesothelioma: KLA, the potency of longer (length > 10 µm) amphibole fibers; rpc, the potency of pure chrysotile (uncontaminated by amphibole) relative to amphibole asbestos; and rps, the potency of shorter fibers (5 µm < length < 10 µm) relative to longer fibers. For mesothelioma, the hypothesis that chrysotile and amphibole asbestos are equally potent (rpc = 1) was strongly rejected by every metric and the hypothesis that (pure) chrysotile is nonpotent for mesothelioma was not rejected by any metric. Best estimates for the relative potency of chrysotile ranged from zero to about 1/200th that of amphibole asbestos (depending on metric). For lung cancer, the hypothesis that chrysotile and amphibole asbestos are equally potent (rpc = 1) was rejected (p ≤ .05) by the two metrics based on thin fibers (widths < 0.4 µm and < 0.2 µm) but not by the metrics based on thicker fibers. The "all widths" and widths < 0.4 µm metrics provide the best fits to both the lung cancer and mesothelioma data over the other metrics evaluated, although the improvements are only marginal for lung cancer.
That these two metrics provide equivalent (for mesothelioma) and nearly equivalent (for lung cancer) fits to the data suggests that the available data sets may not be sufficiently rich (in variation of exposure characteristics) to fully evaluate the effects of fiber width on potency. Compared to the metric with widths > 0.2 µm with both rps and rpc fixed at 1 (which is nominally equivalent to the traditional PCM metric), the "all widths" and widths < 0.4 µm metrics provide substantially better fits for both lung cancer and, especially, mesothelioma. Although the best estimates of the potency of shorter fibers (5 µm < length < 10 µm) are zero for the "all widths" and widths < 0.4 µm metrics (or a small fraction of that of longer fibers for the widths > 0.2 µm metric for mesothelioma), the hypothesis that these shorter fibers were nonpotent could not be rejected for any of these metrics. Expansion of these metrics to include a category for fibers with lengths < 5 µm did not find any consistent evidence for any potency of these shortest fibers for either lung cancer or mesothelioma. Despite the substantial improvements in fit over that provided by the traditional use of PCM, neither the "all widths" nor the widths < 0.4 µm metrics (or any of the other metrics evaluated) completely resolve the differences in potency factors estimated in different occupational studies. Unresolved in particular is the discrepancy in potency factors for lung cancer from Quebec chrysotile miners and workers at the Charleston, SC, textile mill, which mainly processed chrysotile from Quebec. A leading hypothesis for this discrepancy is limitations in the fiber size distributions available for this analysis. Dement et al. (2007) recently analyzed by TEM archived air samples from the South Carolina plant to determine a detailed distribution of fiber lengths up to lengths of 40 µm and greater. If similar data become available for Quebec, perhaps these two size distributions can be used to eliminate the discrepancy between these two studies.
ERIC Educational Resources Information Center
Ribeiro, M. Gabriela T. C.; Yunes, Santiago F.; Machado, Adelio A. S. C.
2014-01-01
Two graphic holistic metrics for assessing the greenness of synthesis, the "green star" and the "green circle", have been presented previously. These metrics assess the greenness by the degree of accomplishment of each of the 12 principles of green chemistry that apply to the case under evaluation. The criteria for assessment…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummel, K.E.
1987-12-01
Expert systems are artificial intelligence programs that solve problems requiring large amounts of heuristic knowledge, based on years of experience and tradition. Production systems are domain-independent tools that support the development of rule-based expert systems. This document describes a general purpose production system known as HERB. This system was developed to support the programming of expert systems using hierarchically structured rule bases. HERB encourages the partitioning of rules into multiple rule bases and supports the use of multiple conflict resolution strategies. Multiple rule bases can also be placed on a system stack and simultaneously searched during each interpreter cycle. Both backward and forward chaining rules are supported by HERB. The condition portion of each rule can contain both patterns, which are matched with facts in a data base, and LISP expressions, which are explicitly evaluated in the LISP environment. Properties of objects can also be stored in the HERB data base and referenced within the scope of each rule. This document serves both as an introduction to the principles of LISP-based production systems and as a user's manual for the HERB system. 6 refs., 17 figs.
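For readers unfamiliar with production systems, the sketch below shows naive forward chaining over a small rule base. It is a generic Python illustration, not HERB's LISP implementation, and the example rules and facts are invented.

```python
def forward_chain(rules, facts):
    """Repeatedly fire rules whose conditions are all satisfied by the fact base,
    adding their conclusions, until no new facts appear (naive forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base grouped by topic, in the spirit of multiple rule bases.
diagnosis_rules = [
    (["fever", "cough"], "flu-suspected"),
    (["flu-suspected", "short-of-breath"], "refer-to-clinic"),
]
print(forward_chain(diagnosis_rules, {"fever", "cough", "short-of-breath"}))
```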
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
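A hedged sketch of the proposed metric as described: the standard deviation of the discriminator's phase-error estimate, compared against a lock threshold. The pi/4 threshold (half of an assumed +/- pi/2 arctangent pull-in region) and the simple correlator model below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def tracking_error_metric(i_samples, q_samples, threshold=np.pi / 4):
    """Standard deviation of the arctangent discriminator output (the loop's own
    estimate of phase error), flagged against a threshold. The pi/4 threshold
    assumes an atan(Q/I) discriminator with a +/- pi/2 pull-in region."""
    phase_err = np.arctan(q_samples / i_samples)   # discriminator output, radians
    sigma = np.std(phase_err)
    return sigma, sigma < threshold                # (metric value, "likely in lock")

# Illustrative prompt correlator outputs with small phase noise and AWGN.
rng = np.random.default_rng(3)
true_err = rng.normal(0.0, 0.1, 2000)              # true phase error, radians
i = np.cos(true_err) + rng.normal(0, 0.05, 2000)
q = np.sin(true_err) + rng.normal(0, 0.05, 2000)
print(tracking_error_metric(i, q))
```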
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. This presents the particularity that it can also be used for defining a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the metric space associated with arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
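For concreteness, the sketch below implements the Wootters statistical distance between two discrete distributions (the Bhattacharyya angle) and applies it to histograms of logistic-map iterates. This is only a crude stand-in for the authors' construction of d̄; the histogram parameters and initial conditions are assumptions.

```python
import numpy as np

def wootters_distance(p, q):
    """Wootters statistical distance between two discrete probability distributions:
    d_W(p, q) = arccos( sum_i sqrt(p_i * q_i) ), the Bhattacharyya angle."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    fidelity = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)
    return float(np.arccos(fidelity))

def logistic_histogram(x0, r=4.0, n_iter=20_000, bins=50, burn_in=1000):
    """Empirical distribution of logistic-map iterates started at x0 (a crude
    stand-in for the densities that enter the metric for discrete maps)."""
    x, xs = x0, []
    for k in range(n_iter + burn_in):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            xs.append(x)
    hist, _ = np.histogram(xs, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Two different initial conditions on the chaotic logistic map yield nearly the
# same invariant density, hence a small Wootters distance.
print(wootters_distance(logistic_histogram(0.123), logistic_histogram(0.456)))
```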
Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery
Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack
2015-01-01
Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
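As a sketch of how such motion metrics can be computed from sampled instrument-tip positions, the following Python example uses common definitions from the motion-analysis literature (path length as summed displacement, depth perception as total travel along an assumed depth axis, and smoothness as integrated squared jerk); the paper's exact formulations may differ.

```python
import numpy as np

def motion_metrics(positions, dt, depth_axis=2):
    """Common motion metrics from a sampled 3-D instrument-tip trajectory.

    positions : (N, 3) array of tip coordinates; dt : sample period (s).
    These are widely used definitions from surgical motion analysis and
    may not match the paper's exact formulas.
    """
    pos = np.asarray(positions, dtype=float)
    steps = np.diff(pos, axis=0)

    path_length = np.sum(np.linalg.norm(steps, axis=1))       # total distance travelled
    depth_perception = np.sum(np.abs(steps[:, depth_axis]))   # travel along the depth axis

    # Motion smoothness as integrated squared jerk (lower = smoother).
    vel = steps / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    smoothness = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt

    return path_length, depth_perception, smoothness

# Synthetic example: a slightly noisy approach along z sampled at 100 Hz.
t = np.arange(0, 5, 0.01)
traj = np.column_stack([0.001 * np.sin(2 * np.pi * t),
                        0.001 * np.cos(2 * np.pi * t),
                        0.02 * t])
traj += 1e-4 * np.random.default_rng(1).standard_normal(traj.shape)
print(motion_metrics(traj, dt=0.01))
```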
A New Metric for Land-Atmosphere Coupling Strength: Applications on Observations and Modeling
NASA Astrophysics Data System (ADS)
Tang, Q.; Xie, S.; Zhang, Y.; Phillips, T. J.; Santanello, J. A., Jr.; Cook, D. R.; Riihimaki, L.; Gaustad, K.
2017-12-01
A new metric is proposed to quantify land-atmosphere (LA) coupling strength; it is constructed by relating the surface evaporative fraction to the land and atmosphere variables that drive it (e.g., soil moisture, vegetation, and radiation). Based upon multiple linear regression, this approach simultaneously considers multiple factors and thus represents complex LA coupling mechanisms better than existing single-variable metrics. The standardized regression coefficients quantify the relative contributions from individual drivers in a consistent manner, avoiding the potential inconsistency in relative influence of conventional metrics. Moreover, the expandable form of the new method allows us to verify and explore potentially important coupling mechanisms. Our observation-based application of the new metric shows moderate coupling with large spatial variations over the U.S. Southern Great Plains. The relative importance of soil moisture vs. vegetation varies by location. We also show that LA coupling strength is generally underestimated by single-variable methods due to their incompleteness. We also apply this new metric to evaluate the representation of LA coupling in the Accelerated Climate Modeling for Energy (ACME) V1 Contiguous United States (CONUS) regionally refined model (RRM). This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734201
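A minimal Python sketch of the regression-based construction described above, assuming standardized drivers and taking the multiple correlation R as the overall coupling strength; the variable names and synthetic data are illustrative only.

```python
import numpy as np

def coupling_strength(ef, drivers):
    """Multiple-linear-regression coupling metric (illustrative sketch).

    ef      : (N,) surface evaporative fraction time series
    drivers : (N, k) columns of land/atmosphere drivers, e.g. soil
              moisture, a vegetation index, downwelling radiation
    Returns the multiple correlation R (overall coupling strength) and
    the standardized regression coefficients (relative contributions).
    """
    y = (ef - ef.mean()) / ef.std()
    X = (drivers - drivers.mean(axis=0)) / drivers.std(axis=0)
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    y_hat = X1 @ beta
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return np.sqrt(max(r2, 0.0)), beta[1:]   # drop the intercept term

# Synthetic example: EF driven mostly by soil moisture, weakly by radiation.
rng = np.random.default_rng(2)
n = 500
sm, veg, rad = rng.standard_normal((3, n))
ef = 0.7 * sm + 0.1 * veg + 0.2 * rad + 0.3 * rng.standard_normal(n)
R, betas = coupling_strength(ef, np.column_stack([sm, veg, rad]))
print(f"coupling strength R = {R:.2f}, standardized coefficients = {np.round(betas, 2)}")
```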
Top Altmetric Scores in the Parkinson’s Disease Literature
Araújo, Rui; Sorensen, Aaron A.; Konkiel, Stacy; Bloem, Bastiaan R.
2017-01-01
A new class of social web-based metrics for scholarly publications (altmetrics) has surfaced as a complement to traditional citation-based metrics. Our aim was to study and characterize those recent papers in the field of Parkinson’s disease that had received the highest Altmetric Attention Scores and to compare this attention measure to the traditional metrics. The top 20 papers in our analysis covered a variety of topics, mainly new disease mechanisms, treatment options, and risk factors for the development of PD. The main media sources for these high-attention papers were news items and Twitter. The papers were published predominantly in high-impact journals, suggesting a correlation between altmetrics and conventional metrics. One paper published in a relatively modest journal received a significant amount of attention, reflecting that public attention does not always parallel the traditional metrics. None of the most influential papers in PD, as reviewed by Ponce and Lozano (2011), made it to our list, suggesting that recent publications receive higher attention scores and that altmetrics may omit older, seminal work in the field. PMID:28222540
Tilsen, Sam; Arvaniti, Amalia
2013-07-01
This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
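For a flavor of envelope-based rhythm metrics, the sketch below extracts the amplitude envelope with a Hilbert transform and separates faster (syllabic-scale) from slower (supra-syllabic-scale) components by band-pass filtering, which stands in here for the paper's empirical mode decomposition; the frequency bands and the coefficient-of-variation measure are assumptions, not the published method.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample_poly

def envelope_rhythm_metrics(signal, fs, env_fs=100):
    """Rhythm metrics from the speech amplitude envelope (illustrative sketch).

    Band-pass filtering of a downsampled envelope stands in for empirical
    mode decomposition; bands and metric are assumptions for illustration.
    """
    envelope = np.abs(hilbert(signal))                   # amplitude envelope
    envelope = resample_poly(envelope, env_fs, fs)       # downsample before slow-band filtering

    def bandpass(x, lo, hi):
        b, a = butter(2, [lo / (env_fs / 2), hi / (env_fs / 2)], btype="band")
        return filtfilt(b, a, x)

    syllabic = bandpass(envelope, 2.0, 8.0)              # roughly syllable-rate band
    supra = bandpass(envelope, 0.5, 2.0)                 # slower, supra-syllabic band

    cv = lambda x: np.std(x) / (np.mean(np.abs(x)) + 1e-12)
    return cv(syllabic), cv(supra)

# Synthetic test signal: noise carrier with 5 Hz syllabic and 1 Hz slower modulation.
fs = 16_000
t = np.arange(0, 4, 1 / fs)
carrier = np.random.default_rng(3).standard_normal(len(t))
env = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * (1 + 0.3 * np.sin(2 * np.pi * 1 * t))
print(envelope_rhythm_metrics(env * carrier, fs))
```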
Turbulence Hazard Metric Based on Peak Accelerations for Jetliner Passengers
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
2005-01-01
Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid-body response to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.
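The sketch below generates the family of 1-cosine discrete gusts swept over a matrix of amplitudes and lengths, with a placeholder standing in for the report's three-degree-of-freedom airplane/passenger response model; the gust parameterization shown is one common convention and may not match the report exactly.

```python
import numpy as np

def one_minus_cosine_gust(amplitude, gust_length, x):
    """Vertical velocity of a '1-cosine' discrete gust over distance x.

    Convention assumed here: w(x) = (A/2)*(1 - cos(2*pi*x/L)) for 0 <= x <= L,
    zero elsewhere; other documents parameterize the gust slightly differently.
    """
    w = 0.5 * amplitude * (1.0 - np.cos(2.0 * np.pi * x / gust_length))
    return np.where((x >= 0.0) & (x <= gust_length), w, 0.0)

def peak_response_placeholder(amplitude, gust_length):
    """Placeholder for the airplane/passenger response model.

    The report drives a 3-DOF longitudinal rigid-body model (with fuselage
    flexibility, downwash lag, gradual lift, autopilot, and an unrestrained
    passenger); here we return only the peak gust velocity as a stand-in.
    """
    x = np.linspace(0.0, gust_length, 200)
    return one_minus_cosine_gust(amplitude, gust_length, x).max()

# Sweep a matrix of gust amplitudes (ft/s) and gust lengths (ft), mirroring
# how the hazard-metric contours are built from a grid of discrete gusts.
amplitudes = np.arange(10.0, 60.0, 10.0)
lengths = np.arange(100.0, 1100.0, 200.0)
response = np.array([[peak_response_placeholder(a, L) for L in lengths]
                     for a in amplitudes])
print(response.shape, response.max())
```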
Carr, Andrew R; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S; Mather, Michelle; Jimenez, Elvira E; Thompson, Paul; Mendez, Mario F
2015-11-01
Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer's disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Shrigley, Robert L.
This study was based on Hovland's four-part statement, Who says what to whom with what effect, the rationale for persuasive communication, a theoretical model for modifying attitudes. Part I was a survey of 139 preservice elementary teachers from which the more credible characteristics of metric instructors, a central element in the who component of Hovland's model, were generated. They were: (1) background in mathematics and science, (2) fluency in metrics, (3) capability of thinking metrically, (4) a record of excellent teaching, (5) previous teaching of metric measurement to children, (6) responsibility for teaching metric content in methods courses, and (7) an open enthusiasm for metric conversion. Part II was a survey of 45 mathematics educators in which belief statements were synthesized for the what component of Hovland's model. It found that math educators support metric measurement because: (1) it is consistent with our monetary system; (2) converting units is easier in metric than in English measurement; (3) it is easier to teach and easier to learn than English measurement, and there is less need for common fractions; (4) most nations use metric measurement, and scientists have used it for decades; (5) American industry has begun to use it; (6) metric measurement will facilitate world trade and communication; and (7) American children will need it as adults, and educational agencies are mandating it. With the who and what of Hovland's four-part statement defined, educational researchers now have baseline data to use in testing experimentally the effect of persuasive communication on the attitude of preservice teachers toward metrication.
Metric analysis of basal sphenoid angle in adult human skulls
Netto, Dante Simionato; Nascimento, Sergio Ricardo Rios; Ruiz, Cristiane Regina
2014-01-01
Objective To analyze variations in the basal sphenoid angle in adult human skulls and their relationship to sex, age, ethnicity, and cranial index. Methods The angles were measured in 160 skulls belonging to the Museum of the Universidade Federal de São Paulo Department of Morphology. We used two flexible rulers and a goniometer, taking as reference points for the first ruler the posterior end of the ethmoidal crest and the dorsum of the sella turcica, and for the second ruler the anterior margin of the foramen magnum and the clivus, measuring the angle at the intersection of the two. Results The average angle was 115.41°, with no statistical correlation between the value of the angle and sex or age. A statistical correlation was noted between the value of the angle and ethnicity, and between the angle and the horizontal cranial index. Conclusions The distribution of the basal sphenoid angle was the same for both sexes, and there was a correlation between the angle and ethnicity, with the proportion of non-white individuals having an angle >125° significantly higher than that of whites. There was also a correlation between the angle and the cranial index, because skulls with a higher cranial index tend to have a larger basal sphenoid angle as well. PMID:25295452
A guide to phylogenetic metrics for conservation, community ecology and macroecology.
Tucker, Caroline M; Cadotte, Marc W; Carvalho, Silvia B; Davies, T Jonathan; Ferrier, Simon; Fritz, Susanne A; Grenyer, Rich; Helmus, Matthew R; Jin, Lanna S; Mooers, Arne O; Pavoine, Sandrine; Purschke, Oliver; Redding, David W; Rosauer, Dan F; Winter, Marten; Mazel, Florent
2017-05-01
The use of phylogenies in ecology is increasingly common and has broadened our understanding of biological diversity. Ecological sub-disciplines, particularly conservation, community ecology and macroecology, all recognize the value of evolutionary relationships but the resulting development of phylogenetic approaches has led to a proliferation of phylogenetic diversity metrics. The use of many metrics across the sub-disciplines hampers potential meta-analyses, syntheses, and generalizations of existing results. Further, there is no guide for selecting the appropriate metric for a given question, and different metrics are frequently used to address similar questions. To improve the choice, application, and interpretation of phylo-diversity metrics, we organize existing metrics by expanding on a unifying framework for phylogenetic information. Generally, questions about phylogenetic relationships within or between assemblages tend to ask three types of question: how much; how different; or how regular? We show that these questions reflect three dimensions of a phylogenetic tree: richness, divergence, and regularity. We classify 70 existing phylo-diversity metrics based on their mathematical form within these three dimensions and identify 'anchor' representatives: for α-diversity metrics these are PD (Faith's phylogenetic diversity), MPD (mean pairwise distance), and VPD (variation of pairwise distances). By analysing mathematical formulae and using simulations, we use this framework to identify metrics that mix dimensions, and we provide a guide to choosing and using the most appropriate metrics. We show that metric choice requires connecting the research question with the correct dimension of the framework and that there are logical approaches to selecting and interpreting metrics. The guide outlined herein will help researchers navigate the current jungle of indices. © 2016 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.
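Of the three 'anchor' metrics, PD requires the tree itself, while MPD (divergence) and VPD (regularity) can be computed directly from a matrix of pairwise phylogenetic distances; below is a minimal Python sketch with invented example distances.

```python
import numpy as np

def mpd_vpd(distance_matrix, assemblage):
    """Mean and variance of pairwise phylogenetic distances (MPD, VPD).

    distance_matrix : symmetric (S, S) matrix of pairwise phylogenetic
                      distances between all species in the pool
    assemblage      : indices of the species present in the community
    (Faith's PD, the richness anchor, needs the tree itself rather than
    a distance matrix, so it is omitted from this sketch.)
    """
    d = np.asarray(distance_matrix, dtype=float)[np.ix_(assemblage, assemblage)]
    iu = np.triu_indices_from(d, k=1)          # each unordered pair counted once
    pairwise = d[iu]
    return pairwise.mean(), pairwise.var()

# Toy example: a pool of five species with made-up pairwise distances.
D = np.array([[0, 2, 6, 6, 8],
              [2, 0, 6, 6, 8],
              [6, 6, 0, 2, 8],
              [6, 6, 2, 0, 8],
              [8, 8, 8, 8, 0]], dtype=float)
print(mpd_vpd(D, assemblage=[0, 1, 2]))   # a phylogenetically 'clustered' community
print(mpd_vpd(D, assemblage=[0, 2, 4]))   # a more 'dispersed' community
```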