Sample records for test coverage metrics

  1. Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness

    NASA Technical Reports Server (NTRS)

    Staats, Matt; Whalen, Michael W.; Heimdahl, Mats P. E.; Rajan, Ajitha

    2010-01-01

    In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties. In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size?, and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous requirements coverage metric? Our results indicate (1) only one coverage metric proposed -- Unique First Cause (UFC) coverage -- is sufficiently rigorous to ensure test suites satisfying the metric outperform randomly generated test suites of similar size and (2) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous coverage metrics.
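
    The study's central comparison, a coverage-satisfying suite versus size-matched random suites, can be phrased as a small mutation-style harness. The sketch below is illustrative only and assumes hypothetical callbacks (run_test, a mutant list, a test pool); it is not the authors' tool.

```python
import random

def fault_finding(suite, mutants, run_test):
    """Fraction of seeded faults (mutants) detected by at least one test in the suite."""
    killed = sum(
        1 for m in mutants
        if any(run_test(t, m) != run_test(t, None) for t in suite)  # None = original program
    )
    return killed / len(mutants)

def compare_to_random(coverage_suite, all_tests, mutants, run_test, trials=100, seed=0):
    """Compare a coverage-satisfying suite against random suites of the same size."""
    rng = random.Random(seed)
    size = len(coverage_suite)
    cov = fault_finding(coverage_suite, mutants, run_test)
    rand = [fault_finding(rng.sample(all_tests, size), mutants, run_test) for _ in range(trials)]
    return cov, sum(rand) / trials
```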

  2. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  3. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  4. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some of the higher metrics and the elimination of faults in the code. Back-to-back testing continued to serve as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  5. Analyzing the test process using structural coverage

    NASA Technical Reports Server (NTRS)

    Ramsey, James; Basili, Victor R.

    1985-01-01

    A large, commercially developed FORTRAN program was modified to produce structural coverage metrics. The modified program was executed on a set of functionally generated acceptance tests and a large sample of operational usage cases. The resulting structural coverage metrics are combined with fault and error data to evaluate structural coverage. It was shown that, in this software environment, the functionally generated tests seem to be a good approximation of operational use. The relative proportions of the exercised statement subclasses change as the structural coverage of the program increases. A method was also proposed for evaluating whether two sets of input data exercise a program in a similar manner. Evidence was provided that implies that in this environment, faults revealed in a procedure are independent of the number of times the procedure is executed and that it may be reasonable to use procedure coverage in software models that use statement coverage. Finally, the evidence suggests that it may be possible to use structural coverage to aid in the management of the acceptance test process.

  6. A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics

    NASA Technical Reports Server (NTRS)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
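
    For context, MC/DC requires that each condition in a decision be shown to independently affect the decision's outcome. The sketch below checks unique-cause MC/DC for a Boolean decision by searching for independence pairs in a test set; it illustrates the coverage obligation being tracked, not the paper's partial-evaluation technique.

```python
def mcdc_satisfied(decision, n_conditions, tests):
    """Unique-cause MC/DC: for every condition there must be a pair of test vectors
    that differ only in that condition and produce different decision outcomes."""
    covered = []
    for i in range(n_conditions):
        pair_found = any(
            decision(a) != decision(b)
            for a in tests for b in tests
            if a[i] != b[i] and all(a[j] == b[j] for j in range(n_conditions) if j != i)
        )
        covered.append(pair_found)
    return all(covered), covered

# Example decision: (A and B) or C, exercised by an n+1 sized test set.
decision = lambda t: (t[0] and t[1]) or t[2]
tests = [(True, True, False), (False, True, False),
         (True, False, False), (True, False, True)]
print(mcdc_satisfied(decision, 3, tests))  # (True, [True, True, True])
```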

  7. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification must be addressed at an early development stage; any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) ESA standards, testing this kind of critical software must achieve 100% statement and decision coverage of the source code. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the demanding code coverage requirements on the boot software.
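
    Statement coverage, the weaker of the two coverage measures required by the standard, simply records which source lines execute. A minimal Python illustration using the standard sys.settrace hook is shown below; the flight software itself is verified with dedicated cross-compilation and fault-injection tooling, and decision coverage would additionally require tracking branch outcomes.

```python
import sys

def statement_coverage(func, *args, **kwargs):
    """Run func and return the set of (filename, line number) pairs executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    sys.settrace(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return executed

def boot_check(x):          # toy stand-in for a unit under test
    if x > 0:
        return "nominal"
    return "safe_mode"

print(len(statement_coverage(boot_check, 1)), "lines executed")
```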

  8. Exploring the relationship between population density and maternal health coverage.

    PubMed

    Hanlon, Michael; Burstein, Roy; Masters, Samuel H; Zhang, Raymond

    2012-11-21

    Delivering health services to dense populations is more practical than to dispersed populations, other factors constant. This engenders the hypothesis that population density positively affects coverage rates of health services. This hypothesis has been tested indirectly for some services at a local level, but not at a national level. We use cross-sectional data to conduct cross-country, OLS regressions at the national level to estimate the relationship between population density and maternal health coverage. We separately estimate the effect of two measures of density on three population-level coverage rates (6 tests in total). Our coverage indicators are the fraction of the maternal population completing four antenatal care visits and the utilization rates of both skilled birth attendants and in-facility delivery. The first density metric we use is the percentage of a population living in an urban area. The second metric, which we denote as a density score, is a relative ranking of countries by population density. The score's calculation discounts a nation's uninhabited territory under the assumption those areas are irrelevant to service delivery. We find significantly positive relationships between our maternal health indicators and density measures. On average, a one-unit increase in our density score is equivalent to a 0.2% increase in coverage rates. Countries with dispersed populations face higher burdens to achieve multinational coverage targets such as the United Nations' Millennium Development Goals.
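
    The cross-country analysis described here is ordinary least squares of a coverage rate on a density measure. A minimal numpy sketch follows; the numbers are made-up placeholders, not the study's data, and the study's actual models may include additional controls.

```python
import numpy as np

# Hypothetical country-level values: percentage of population living in urban areas
# and percentage of mothers completing four antenatal care visits.
urban    = np.array([15.0, 32.0, 48.0, 55.0, 63.0, 78.0, 85.0])
coverage = np.array([38.0, 45.0, 51.0, 60.0, 58.0, 72.0, 80.0])

# OLS fit: coverage = b0 + b1 * urban
X = np.column_stack([np.ones_like(urban), urban])
(b0, b1), *_ = np.linalg.lstsq(X, coverage, rcond=None)
print(f"intercept={b0:.2f}, slope={b1:.3f}  (slope: coverage change per unit of density)")
```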

  9. Exploring the relationship between population density and maternal health coverage

    PubMed Central

    2012-01-01

    Background Delivering health services to dense populations is more practical than to dispersed populations, other factors constant. This engenders the hypothesis that population density positively affects coverage rates of health services. This hypothesis has been tested indirectly for some services at a local level, but not at a national level. Methods We use cross-sectional data to conduct cross-country, OLS regressions at the national level to estimate the relationship between population density and maternal health coverage. We separately estimate the effect of two measures of density on three population-level coverage rates (6 tests in total). Our coverage indicators are the fraction of the maternal population completing four antenatal care visits and the utilization rates of both skilled birth attendants and in-facility delivery. The first density metric we use is the percentage of a population living in an urban area. The second metric, which we denote as a density score, is a relative ranking of countries by population density. The score’s calculation discounts a nation’s uninhabited territory under the assumption those areas are irrelevant to service delivery. Results We find significantly positive relationships between our maternal health indicators and density measures. On average, a one-unit increase in our density score is equivalent to a 0.2% increase in coverage rates. Conclusions Countries with dispersed populations face higher burdens to achieve multinational coverage targets such as the United Nations’ Millennium Development Goals. PMID:23170895

  10. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system, the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.

  11. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues concerning the verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved during the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes; or, creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it's likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage. For example, generate cases for all possible fault monitors, and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predicts individually is impractical, and generating predicts with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
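
    The generation step described above, covering every fault monitor across every state-change boundary, is essentially a Cartesian product over test dimensions. A toy sketch is shown below; the monitor and transition names are invented placeholders, not Dawn's.

```python
from itertools import product

fault_monitors = ["over_temp", "under_voltage", "thruster_stuck"]    # assumed names
state_changes = [("cruise", "safe_mode"), ("safe_mode", "cruise")]   # assumed transitions
timings = ["before", "during", "after"]                              # injection timing

test_cases = [
    {"monitor": m, "transition": s, "inject": t}
    for m, s, t in product(fault_monitors, state_changes, timings)
]
print(len(test_cases), "generated cases")   # 3 * 2 * 3 = 18
```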

  12. Technical Note: Using k-means clustering to determine the number and position of isocenters in MLC-based multiple target intracranial radiosurgery.

    PubMed

    Yock, Adam D; Kim, Gwe-Ya

    2017-09-01

    To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used these data as well as weighted and unweighted versions of the k-means clustering algorithm to group the targets to be treated with a single isocenter, and to position each isocenter. The algorithm results were evaluated using within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
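
    A rough sketch of the weighted versus unweighted clustering idea, using scikit-learn's KMeans, is given below. The target coordinates and radii are invented, and weighting each target by its volume is an assumption made for illustration; the study used in-house software and its own coverage metric.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical target centroids (cm) and radii (cm) for multiple brain metastases.
centroids = np.array([[1.2, 0.5, 3.1], [1.5, 0.9, 2.8], [-2.0, 1.1, 0.4], [-2.3, 0.7, 0.2]])
radii = np.array([0.4, 0.9, 0.6, 0.3])

k = 2  # candidate number of isocenters
unweighted = KMeans(n_clusters=k, n_init=10, random_state=0).fit(centroids)

# One plausible weighting (an assumption): weight each target by its volume.
weights = (4.0 / 3.0) * np.pi * radii ** 3
weighted = KMeans(n_clusters=k, n_init=10, random_state=0).fit(centroids, sample_weight=weights)

print("unweighted isocenters:\n", unweighted.cluster_centers_)
print("volume-weighted isocenters:\n", weighted.cluster_centers_)
print("within-cluster sum of squares (unweighted):", unweighted.inertia_)
```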

  13. New Quality Metrics for Web Search Results

    NASA Astrophysics Data System (ADS)

    Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni

    Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.

  14. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    PubMed

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review on published plan quality metrics in the categories of coverage, homogeneity, conformity and gradient was performed. For each patient, using dose-volume histogram data, plan quality metric values were quantified and analysed. For the study cohort, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07) and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001). Gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for prescribed dose advocated by the RTOG lung SBRT protocol were met in all categories in ≥94% of cases. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were a mean of 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our study analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI Paddick) and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
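
    Common closed-form definitions of the recommended metrics are sketched below; they are stated as a hedged reference, since the record does not reproduce the paper's exact formulations: RTOG-style coverage as minimum target dose over prescription dose, the Paddick conformation number CN, and the R50% dose-gradient ratio.

```python
def rtog_coverage(d_min_target, d_prescription):
    """RTOG-style coverage: minimum dose in the target relative to the prescription dose."""
    return d_min_target / d_prescription

def paddick_cn(tv, piv, tv_piv):
    """Paddick conformation number CN = TV_PIV^2 / (TV * PIV).
    tv: target volume; piv: prescription isodose volume;
    tv_piv: target volume covered by the prescription isodose."""
    return tv_piv ** 2 / (tv * piv)

def gradient_r50(v50, ptv):
    """R50%: volume enclosed by the 50% prescription isodose divided by the PTV volume."""
    return v50 / ptv

# Toy values (volumes in cm^3, doses in Gy) -- illustrative only.
print(rtog_coverage(48.0, 50.0))      # 0.96
print(paddick_cn(20.0, 24.0, 19.0))   # ~0.75
print(gradient_r50(90.0, 20.0))       # 4.5
```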

  15. A parametric study of rate of advance and area coverage rate performance of synthetic aperture radar.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raynal, Ann Marie; Hensley, William H., Jr.; Burns, Bryan L.

    2014-11-01

    The linear ground distance per unit time and ground area covered per unit time of producing synthetic aperture radar (SAR) imagery, termed rate of advance (ROA) and area coverage rate (ACR), are important metrics for platform and radar performance in surveillance applications. These metrics depend on many parameters of a SAR system such as wavelength, aircraft velocity, resolution, antenna beamwidth, imaging mode, and geometry. Often the effects of these parameters on rate of advance and area coverage rate are non-linear. This report addresses the impact of different parameter spaces as they relate to rate of advance and area coverage rate performance.

  16. Sub-national variation in measles vaccine coverage and outbreak risk: a case study from a 2010 outbreak in Malawi.

    PubMed

    Kundrick, Avery; Huang, Zhuojie; Carran, Spencer; Kagoli, Matthew; Grais, Rebecca Freeman; Hurtado, Northan; Ferrari, Matthew

    2018-06-15

    Despite progress towards increasing global vaccination coverage, measles continues to be one of the leading, preventable causes of death among children worldwide. Whether and how to target sub-national areas for vaccination campaigns continues to remain a question. We analyzed three metrics for prioritizing target areas: vaccination coverage, susceptible birth cohort, and the effective reproductive ratio (RE) in the context of the 2010 measles epidemic in Malawi. Using case-based surveillance data from the 2010 measles outbreak in Malawi, we estimated vaccination coverage from the proportion of cases reporting with a history of prior vaccination at the district and health facility catchment scale. Health facility catchments were defined as the set of locations closer to a given health facility than to any other. We combined these estimates with regional birth rates to estimate the size of the annual susceptible birth cohort. We also estimated the effective reproductive ratio, RE, at the health facility polygon scale based on the observed rate of exponential increase of the epidemic. We combined these estimates to identify spatial regions that would be of high priority for supplemental vaccination activities. The estimated vaccination coverage across all districts was 84%, but ranged from 61 to 99%. We found that 8 districts and 354 health facility catchments had estimated vaccination coverage below 80%. Areas that had the highest birth cohort size were frequently large urban centers that had high vaccination coverage. The estimated RE ranged between 1 and 2.56. The ranking of districts and health facility catchments as priority areas varied depending on the measure used. Each metric for prioritization may result in discrete target areas for vaccination campaigns; thus, there are tradeoffs to choosing one metric over another. However, in some cases, certain areas may be prioritized by all three metrics. These areas should be treated with particular concern. Furthermore, the spatial scale at which each metric is calculated impacts the resulting prioritization and should also be considered when prioritizing areas for vaccination campaigns. These methods may be used to allocate effort for prophylactic campaigns or to prioritize response for outbreak response vaccination.
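
    Two of the ingredients above can be written down compactly. The sketch below uses the classic screening-method relationship to back out coverage from the proportion of cases vaccinated, and a simple linear approximation linking the exponential growth rate to RE; both are common textbook forms assumed here for illustration, since the record does not state the exact estimators used.

```python
import math

def coverage_from_cases(pcv, ve):
    """Screening-method estimate of vaccination coverage from the proportion of
    cases vaccinated (pcv), given vaccine effectiveness ve (both in [0, 1]).
    Rearranged from pcv = ppv*(1 - ve) / (1 - ppv*ve)."""
    return pcv / (1.0 - ve + pcv * ve)

def effective_reproduction_number(growth_rate, generation_time):
    """Linear approximation R_E ~ 1 + r * Tg from the observed exponential growth
    rate r (per day) and mean generation time Tg (days)."""
    return 1.0 + growth_rate * generation_time

# Toy values: 30% of cases vaccinated, 90% vaccine effectiveness,
# epidemic doubling every 20 days, ~12-day measles generation time.
r = math.log(2) / 20.0
print(coverage_from_cases(0.30, 0.90))           # ~0.81
print(effective_reproduction_number(r, 12.0))    # ~1.42
```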

  17. COMPOSITIONAL LANDSCAPE METRICS AND LANDCOVER CONNECTIVITY MEASURES FOR THE SUB-WATERSHEDS OF THE UPPER SAN PEDRO RIVER 1997

    EPA Science Inventory

    Various compositional landscape metrics and landcover connectivity measures for the sub-watersheds of the Upper San Pedro River. Metrics were computed using the ATtILA v.3.03 ArcView extension. Inputs included the sub-watershed coverage obtained from the USDA-ARS-SWRC in Tucson,...

  18. COMPOSITIONAL LANDSCAPE METRICS AND LANDCOVER CONNECTIVITY MEASURES FOR THE SUB-WATERSHEDS OF THE UPPER SAN PEDRO RIVER 1973

    EPA Science Inventory

    Various compositional landscape metrics and landcover connectivity measures for the sub-watersheds of the Upper San Pedro River. Metrics were computed using the ATtILA v3.03 ArcView extension. Inputs included the sub-watershed coverage obtained from the USDA-ARS-SWRC in Tucson, A...

  19. Nonpareil 3: Fast Estimation of Metagenomic Coverage and Sequence Diversity.

    PubMed

    Rodriguez-R, Luis M; Gunturu, Santosh; Tiedje, James M; Cole, James R; Konstantinidis, Konstantinos T

    2018-01-01

    Estimations of microbial community diversity based on metagenomic data sets are affected, often to an unknown degree, by biases derived from insufficient coverage and reference database-dependent estimations of diversity. For instance, the completeness of reference databases cannot be generally estimated since it depends on the extant diversity sampled to date, which, with the exception of a few habitats such as the human gut, remains severely undersampled. Further, estimation of the degree of coverage of a microbial community by a metagenomic data set is prohibitively time-consuming for large data sets, and coverage values may not be directly comparable between data sets obtained with different sequencing technologies. Here, we extend Nonpareil, a database-independent tool for the estimation of coverage in metagenomic data sets, to a high-performance computing implementation that scales up to hundreds of cores and includes, in addition, a k-mer-based estimation as sensitive as the original alignment-based version but about three hundred times as fast. Further, we propose a metric of sequence diversity (Nd) derived directly from Nonpareil curves that correlates well with alpha diversity assessed by traditional metrics. We use this metric in different experiments demonstrating the correlation with the Shannon index estimated on 16S rRNA gene profiles and show that Nd additionally reveals seasonal patterns in marine samples that are not captured by the Shannon index and more precise rankings of the magnitude of diversity of microbial communities in different habitats. Therefore, the new version of Nonpareil, called Nonpareil 3, advances the toolbox for metagenomic analyses of microbiomes. IMPORTANCE Estimation of the coverage provided by a metagenomic data set, i.e., what fraction of the microbial community was sampled by DNA sequencing, represents an essential first step of every culture-independent genomic study that aims to robustly assess the sequence diversity present in a sample. However, estimation of coverage remains elusive because of several technical limitations associated with high computational requirements and limiting statistical approaches to quantify diversity. Here we described Nonpareil 3, a new bioinformatics algorithm that circumvents several of these limitations and thus can facilitate culture-independent studies in clinical or environmental settings, independent of the sequencing platform employed. In addition, we present a new metric of sequence diversity based on rarefied coverage and demonstrate its use in communities from diverse ecosystems.

  20. Estimating Landscape Pattern Metrics from a Sample of Land Cover

    EPA Science Inventory

    Although landscape pattern metrics can be computed directly from wall-to-wall land-cover maps, statistical sampling offers a practical alternative when complete coverage land-cover information is unavailable. Partitioning a region into spatial units (“blocks”) to create a samplin...

  1. Comparing de novo genome assembly: the long and short of it.

    PubMed

    Narzisi, Giuseppe; Mishra, Bud

    2011-04-29

    Recent advances in DNA sequencing technology and their focal role in Genome Wide Association Studies (GWAS) have rekindled a growing interest in the whole-genome sequence assembly (WGSA) problem, thereby, inundating the field with a plethora of new formalizations, algorithms, heuristics and implementations. And yet, scant attention has been paid to comparative assessments of these assemblers' quality and accuracy. No commonly accepted and standardized method for comparison exists yet. Even worse, widely used metrics to compare the assembled sequences emphasize only size, poorly capturing the contig quality and accuracy. This paper addresses these concerns: it highlights common anomalies in assembly accuracy through a rigorous study of several assemblers, compared under both standard metrics (N50, coverage, contig sizes, etc.) as well as a more comprehensive metric (Feature-Response Curves, FRC) that is introduced here; FRC transparently captures the trade-offs between contigs' quality against their sizes. For this purpose, most of the publicly available major sequence assemblers--both for low-coverage long (Sanger) and high-coverage short (Illumina) reads technologies--are compared. These assemblers are applied to microbial (Escherichia coli, Brucella, Wolbachia, Staphylococcus, Helicobacter) and partial human genome sequences (Chr. Y), using sequence reads of various read-lengths, coverages, accuracies, and with and without mate-pairs. It is hoped that, based on these evaluations, computational biologists will identify innovative sequence assembly paradigms, bioinformaticists will determine promising approaches for developing "next-generation" assemblers, and biotechnologists will formulate more meaningful design desiderata for sequencing technology platforms. A new software tool for computing the FRC metric has been developed and is available through the AMOS open-source consortium.
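
    The paper argues that size-only metrics such as N50 poorly capture contig quality. For reference, N50 itself takes only a few lines to compute, which makes the critique concrete:

```python
def n50(contig_lengths):
    """N50: the largest length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

print(n50([100, 80, 60, 40, 20]))  # total = 300; 100 + 80 = 180 >= 150, so N50 = 80
```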

  2. Simulation environment based on the Universal Verification Methodology

    NASA Astrophysics Data System (ADS)

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan that sets the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). The progress is measured by coverage monitors added to the simulation environment. In this way, the non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.

  3. Estimation of the fraction of absorbed photosynthetically active radiation (fPAR) in maize canopies using LiDAR data and hyperspectral imagery.

    PubMed

    Qin, Haiming; Wang, Cheng; Zhao, Kaiguang; Xi, Xiaohuan

    2018-01-01

    Accurate estimation of the fraction of absorbed photosynthetically active radiation (fPAR) for maize canopies is important for maize growth monitoring and yield estimation. The goal of this study is to explore the potential of using airborne LiDAR and hyperspectral data to better estimate maize fPAR. This study focuses on estimating maize fPAR from (1) height and coverage metrics derived from airborne LiDAR point cloud data; (2) vegetation indices derived from hyperspectral imagery; and (3) a combination of these metrics. Pearson correlation analyses were conducted to evaluate the relationships among LiDAR metrics, hyperspectral metrics, and field-measured fPAR values. Then, multiple linear regression (MLR) models were developed using these metrics. Results showed that (1) LiDAR height and coverage metrics provided good explanatory power (i.e., R2 = 0.81); (2) hyperspectral vegetation indices provided moderate interpretability (i.e., R2 = 0.50); and (3) the combination of LiDAR metrics and hyperspectral metrics improved the LiDAR model (i.e., R2 = 0.88). These results indicate that the LiDAR model seems to offer a reliable method for estimating maize fPAR at a high spatial resolution, and it can be used for farmland management. Combining LiDAR and hyperspectral metrics led to better performance of maize fPAR estimation than LiDAR or hyperspectral metrics alone, which means that maize fPAR retrieval can benefit from the complementary nature of LiDAR-detected canopy structure characteristics and hyperspectral-captured vegetation spectral information.
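
    A bare-bones version of the MLR step, fitting fPAR on combined LiDAR and hyperspectral predictors with numpy and reporting R2, is sketched below; the predictor names and values are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical per-plot predictors: LiDAR canopy height (m), LiDAR canopy cover (0-1),
# and a hyperspectral vegetation index, with field-measured fPAR as the response.
X_raw = np.array([[1.8, 0.55, 0.62],
                  [2.1, 0.68, 0.70],
                  [1.2, 0.40, 0.51],
                  [2.6, 0.80, 0.78],
                  [1.5, 0.47, 0.58],
                  [2.3, 0.75, 0.74]])
fpar = np.array([0.58, 0.69, 0.42, 0.81, 0.50, 0.76])

X = np.column_stack([np.ones(len(fpar)), X_raw])    # add an intercept column
beta, *_ = np.linalg.lstsq(X, fpar, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((fpar - pred) ** 2) / np.sum((fpar - fpar.mean()) ** 2)
print("coefficients:", beta)
print("R2:", r2)
```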

  4. Role of quality of service metrics in visual target acquisition and tracking in resource constrained environments

    NASA Astrophysics Data System (ADS)

    Anderson, Monica; David, Phillip

    2007-04-01

    Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. This system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.

  5. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests showing that even the primitive rule-based test data generation prototype is significantly better than random data generation are performed. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  6. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
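
    A small sketch of the metrics being compared, computed on the classic karate-club graph with networkx and scikit-learn, is shown below; the Louvain partition and two-club ground truth are only for illustration, not the study's benchmarks.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def coverage_metric(G, communities):
    """Coverage: fraction of edges that fall entirely inside one community."""
    member = {n: i for i, c in enumerate(communities) for n in c}
    intra = sum(1 for u, v in G.edges() if member[u] == member[v])
    return intra / G.number_of_edges()

def conductance_metric(G, community):
    """Conductance of one community: cut edges over the smaller side's volume."""
    S = set(community)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_s = sum(d for _, d in G.degree(S))
    return cut / min(vol_s, 2 * G.number_of_edges() - vol_s)

G = nx.karate_club_graph()
found = louvain_communities(G, seed=1)
truth = [0 if G.nodes[n]["club"] == "Mr. Hi" else 1 for n in G.nodes()]
labels = [next(i for i, c in enumerate(found) if n in c) for n in G.nodes()]

print("modularity       :", modularity(G, found))
print("coverage         :", coverage_metric(G, found))
print("worst conductance:", max(conductance_metric(G, c) for c in found))
print("ARI vs. clubs    :", adjusted_rand_score(truth, labels))
print("NMI vs. clubs    :", normalized_mutual_info_score(truth, labels))
```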

  7. Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report

    NASA Technical Reports Server (NTRS)

    Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar

    2013-01-01

    This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.

  8. Urbanization reduces and homogenizes trait diversity in stream macroinvertebrate communities.

    PubMed

    Barnum, Thomas R; Weller, Donald E; Williams, Meghan

    2017-12-01

    More than one-half of the world's population lives in urban areas, so quantifying the effects of urbanization on ecological communities is important for understanding whether anthropogenic stressors homogenize communities across environmental and climatic gradients. We examined the relationship of impervious surface coverage (a marker of urbanization) and the structure of stream macroinvertebrate communities across the state of Maryland and within each of Maryland's three ecoregions: Coastal Plain, Piedmont, and Appalachian, which differ in stream geomorphology and community composition. We considered three levels of trait organization: individual traits, unique combinations of traits, and community metrics (functional richness, functional evenness, and functional divergence) and three levels of impervious surface coverage (low [<2.5%], medium [2.5% to 10%], and high [>10%]). The prevalence of an individual trait differed very little between low impervious surface and high impervious surface sites. The arrangement of trait combinations in community trait space for each ecoregion differed when impervious surface coverage was low, but the arrangement became more similar among ecoregions as impervious surface coverage increased. Furthermore, trait combinations that occurred only at low or medium impervious surface coverage were clustered in a subset of the community trait space, indicating that impervious surface affected the presence of only a subset of trait combinations. Functional richness declined with increasing impervious surface, providing evidence for environmental filtering. Community metrics that include abundance were also sensitive to increasing impervious surface coverage: functional divergence decreased while functional evenness increased. These changes demonstrate that increasing impervious surface coverage homogenizes the trait diversity of macroinvertebrate communities in streams, despite differences in initial community composition and stream geomorphology among ecoregions. Community metrics were also more sensitive to changes in the abundance rather than the gain or loss of trait combinations, showing the potential for trait-based approaches to serve as early warning indicators of environmental stress for monitoring and biological assessment programs. © 2017 by the Ecological Society of America.

  9. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms-Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  10. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations, optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, reproducibility as well as the clinical sensitivity of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400
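
    For reference, the headline performance figures are standard confusion-matrix quantities; a minimal sketch with made-up counts (not the study's) is:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity and specificity for variant detection against a reference call set."""
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy counts: 95 true variants found, 5 missed, 2 false calls, 9000 true negatives.
print(sensitivity_specificity(tp=95, fp=2, tn=9000, fn=5))  # (0.95, ~0.9998)
```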

  11. A New Way to Measure the World's Protected Area Coverage

    PubMed Central

    Barr, Lissa M.; Pressey, Robert L.; Fuller, Richard A.; Segan, Daniel B.; McDonald-Madden, Eve; Possingham, Hugh P.

    2011-01-01

    Protected areas are effective at stopping biodiversity loss, but their placement is constrained by the needs of people. Consequently protected areas are often biased toward areas that are unattractive for other human uses. Current reporting metrics that emphasise the total area protected do not account for this bias. To address this problem we propose that the distribution of protected areas be evaluated with an economic metric used to quantify inequality in income, the Gini coefficient. Using a modified version of this measure we discover that 73% of countries have inequitably protected their biodiversity and that common measures of protected area coverage do not adequately reveal this bias. Used in combination with total percentage protection, the Gini coefficient will improve the effectiveness of reporting on the growth of protected area coverage, paving the way for better representation of the world's biodiversity. PMID:21957458
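
    A minimal sketch of the underlying calculation, a Gini coefficient over per-unit protection levels, follows; the spatial units and the authors' modification are not detailed in the record, so the input here is a hypothetical example.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x) / (n * x.sum())) - (n + 1.0) / n

# Hypothetical fractions of each ecoregion's area under protection in one country.
print(gini([0.02, 0.05, 0.08, 0.40, 0.75]))
```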

  12. Performance of rapid test kits to assess household coverage of iodized salt.

    PubMed

    Gorstein, Jonathan; van der Haar, Frits; Codling, Karen; Houston, Robin; Knowles, Jacky; Timmer, Arnold

    2016-10-01

    The main indicator adopted to track universal salt iodization has been the coverage of adequately iodized salt in households. Rapid test kits (RTK) have been included in household surveys to test the iodine content in salt. However, laboratory studies of their performance have concluded that RTK are reliable only to distinguish between the presence and absence of iodine in salt, but not to determine whether salt is adequately iodized. The aim of the current paper was to examine the performance of RTK under field conditions and to recommend their most appropriate use in household surveys. Standard performance characteristics of the ability of RTK to detect the iodine content in salt at 0 mg/kg (salt with no iodine), 5 mg/kg (salt with any added iodine) and 15 mg/kg ('adequately' iodized salt) were calculated. Our analysis employed the agreement rate (AR) as a preferred metric of RTK performance. Setting/Subjects Twenty-five data sets from eighteen population surveys which assessed household iodized salt by both the RTK and a quantitative method (i.e. titration or WYD Checker) were obtained from Asian (nineteen data sets), African (five) and European (one) countries. In detecting iodine in salt at 0 mg/kg, the RTK had an AR>90 % in eight of twenty-three surveys, while eight surveys had an AR below 90 %. The RTK is not suited for assessment of adequately iodized salt coverage. Quantitative assessment, such as by titration or WYD Checker, is necessary for estimates of adequately iodized salt coverage.

  13. Universal health coverage in Rwanda: dream or reality.

    PubMed

    Nyandekwe, Médard; Nzayirambaho, Manassé; Baptiste Kakoma, Jean

    2014-01-01

    Universal Health Coverage (UHC) has been a global concern for a long time and even more so nowadays. While a number of publications are almost unanimous that Rwanda is not far from UHC, very few have focused on its financial sustainability and on its extreme external financial dependency. The objectives of this study are: (i) to assess Rwanda's UHC based mainly on Community-Based Health Insurance (CBHI) from 2000 to 2012; (ii) to inform policy makers about observed gaps for a better way forward. A retrospective (2000-2012) SWOT analysis was applied to six metrics as key indicators of UHC achievement related to the WHO definition, i.e. (i) health insurance and access to care, (ii) equity, (iii) package of services, (iv) rights-based approach, (v) quality of health care, and (vi) financial-risk protection; (vii) CBHI self-financing capacity (SFC) was added by the authors. Considering the first metric (96.15% overall health insurance coverage and 1.07 visits per capita per year versus the 1 visit recommended by WHO), the second (24.8% of indigent people subsidized versus 24.1% living in extreme poverty), the excellent performance of the third, fourth and fifth metrics, the sixth (10.80% versus the ≤40% acceptable limit of catastrophic health spending) and, lastly, the CBHI SFC (proper cost recovery estimated at 82.55% in 2011/2012), Rwanda's UHC achievements are objectively convincing. Rwanda's UHC is not a dream but a reality if we consider the convincing results of all seven metrics.

  14. Universal health coverage in Rwanda: dream or reality

    PubMed Central

    Nyandekwe, Médard; Nzayirambaho, Manassé; Baptiste Kakoma, Jean

    2014-01-01

    Introduction Universal Health Coverage (UHC) has been a global concern for a long time and even more so nowadays. While a number of publications are almost unanimous that Rwanda is not far from UHC, very few have focused on its financial sustainability and on its extreme external financial dependency. The objectives of this study are: (i) to assess Rwanda's UHC based mainly on Community-Based Health Insurance (CBHI) from 2000 to 2012; (ii) to inform policy makers about observed gaps for a better way forward. Methods A retrospective (2000-2012) SWOT analysis was applied to six metrics as key indicators of UHC achievement related to the WHO definition, i.e. (i) health insurance and access to care, (ii) equity, (iii) package of services, (iv) rights-based approach, (v) quality of health care, and (vi) financial-risk protection; (vii) CBHI self-financing capacity (SFC) was added by the authors. Results Considering the first metric (96.15% overall health insurance coverage and 1.07 visits per capita per year versus the 1 visit recommended by WHO), the second (24.8% of indigent people subsidized versus 24.1% living in extreme poverty), the excellent performance of the third, fourth and fifth metrics, the sixth (10.80% versus the ≤40% acceptable limit of catastrophic health spending) and, lastly, the CBHI SFC (proper cost recovery estimated at 82.55% in 2011/2012), Rwanda's UHC achievements are objectively convincing. Conclusion Rwanda's UHC is not a dream but a reality if we consider the convincing results of all seven metrics. PMID:25170376

  15. An enhanced TIMESAT algorithm for estimating vegetation phenology metrics from MODIS data

    USGS Publications Warehouse

    Tan, B.; Morisette, J.T.; Wolfe, R.E.; Gao, F.; Ederer, G.A.; Nightingale, J.; Pedelty, J.A.

    2011-01-01

    An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates. © 2010 IEEE.

  16. An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.

    2012-01-01

    An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
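
    The third-derivative test above can be illustrated with a small numerical sketch: smooth a vegetation-index time series and flag composite periods where the third derivative of the smoothed curve changes sign as candidate phenology transition dates. This is only a schematic illustration in Python; the smoothing choices, window size, and synthetic EVI profile are assumptions, not the TIMESAT implementation.

```python
# Minimal sketch (not the TIMESAT code): smooth a vegetation-index time
# series and flag composite periods where the third derivative of the
# smoothed curve changes sign, as candidate phenology transition dates.
import numpy as np
from scipy.signal import savgol_filter

def phenology_candidates(vi, window=11, poly=4):
    """Return the smoothed series and indices where its 3rd derivative changes sign."""
    smoothed = savgol_filter(vi, window_length=window, polyorder=poly)
    d3 = np.gradient(np.gradient(np.gradient(smoothed)))
    sign_changes = np.where(np.diff(np.sign(d3)) != 0)[0]
    return smoothed, sign_changes

# Synthetic one-year EVI profile sampled every 8 days (46 composites), for illustration.
np.random.seed(0)
t = np.arange(46)
evi = 0.2 + 0.5 * np.exp(-((t - 23) / 8.0) ** 2) + np.random.normal(0, 0.02, t.size)
smoothed, candidates = phenology_candidates(evi)
print("candidate transition composites:", candidates)
```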

  17. Global discrimination of land cover types from metrics derived from AVHRR pathfinder data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeFries, R.; Hansen, M.; Townshend, J.

    1995-12-01

    Global data sets of land cover are a significant requirement for global biogeochemical and climate models. Remotely sensed satellite data is an increasingly attractive source for deriving these data sets due to the resulting internal consistency, reproducibility, and coverage in locations where ground knowledge is sparse. Seasonal changes in the greenness of vegetation, described in remotely sensed data as changes in the normalized difference vegetation index (NDVI) throughout the year, have been the basis for discriminating between cover types in previous attempts to derive land cover from AVHRR data at global and continental scales. This study examines the use of metrics derived from the NDVI temporal profile, as well as metrics derived from observations in red, infrared, and thermal bands, to improve discrimination between 12 cover types on a global scale. According to separability measures calculated from Bhattacharya distances, average separabilities improved by using 12 of the 16 metrics tested (1.97) compared to separabilities using 12 monthly NDVI values alone (1.88). Overall, the most robust metrics for discriminating between cover types were: mean NDVI, maximum NDVI, NDVI amplitude, AVHRR Band 2 (near-infrared reflectance) and Band 1 (red reflectance) corresponding to the time of maximum NDVI, and maximum land surface temperature. Deciduous and evergreen vegetation can be distinguished by mean NDVI, maximum NDVI, NDVI amplitude, and maximum land surface temperature. Needleleaf and broadleaf vegetation can be distinguished by either mean NDVI and NDVI amplitude or maximum NDVI and NDVI amplitude.
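
    As a rough illustration of the metrics and separability measure named above, the sketch below computes mean NDVI, maximum NDVI, and NDVI amplitude for a pixel, and a univariate Gaussian Bhattacharyya distance between two cover-type samples. The class means, variances, and sample sizes are invented; this is not the study's AVHRR processing chain.

```python
# Sketch: simple annual NDVI metrics for one pixel, plus a univariate Gaussian
# Bhattacharyya distance between two cover-type samples of a metric.
# Illustrative values only.
import numpy as np

def ndvi_metrics(monthly_ndvi):
    """monthly_ndvi: 12 monthly NDVI values for one pixel."""
    return {
        "mean_ndvi": float(np.mean(monthly_ndvi)),
        "max_ndvi": float(np.max(monthly_ndvi)),
        "ndvi_amplitude": float(np.max(monthly_ndvi) - np.min(monthly_ndvi)),
    }

def bhattacharyya_1d(a, b):
    """Bhattacharyya distance between two samples, assuming Gaussian classes."""
    ma, mb = np.mean(a), np.mean(b)
    va, vb = np.var(a), np.var(b)
    return 0.25 * np.log(0.25 * (va / vb + vb / va + 2)) + 0.25 * (ma - mb) ** 2 / (va + vb)

rng = np.random.default_rng(0)
print(ndvi_metrics(rng.uniform(0.2, 0.8, 12)))
deciduous = rng.normal(0.55, 0.05, 200)   # mean-NDVI samples for class 1
evergreen = rng.normal(0.70, 0.05, 200)   # mean-NDVI samples for class 2
print("separability:", round(bhattacharyya_1d(deciduous, evergreen), 3))
```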

  18. Radiology 24/7 In-House Attending Coverage: Do Benefits Outweigh Cost?

    PubMed

    Coleman, Stephanie; Holalkere, Nagaraj Setty; O'Malley, Julie; Doherty, Gemma; Norbash, Alexander; Kadom, Nadja

    2016-01-01

    Many radiology practices, including academic centers, are moving to in-house 24/7 attending coverage. This could be costly and may not be easily accepted by radiology trainees and attending radiologists. In this article, we evaluated the effects of 24/7 in-house attending coverage on patient care, costs, and qualitative aspects such as trainee education. We retrospectively collected report turnaround times (TAT) and work relative value units (wRVU). We compared these parameters between the years before and after the implementation of 24/7 in-house attending coverage. The cost to provide additional attending coverage was estimated from departmental financial reports. A qualitative survey of radiology residents and faculty was performed to study perceived effects on trainee education. There were decreases in report TAT following 24/7 attending implementation: 69% reduction in computed tomography, 43% reduction in diagnostic radiography, 7% reduction in magnetic resonance imaging, and 43% reduction in ultrasound. There was an average daytime wRVU decrease of 9%, although this was compounded by a decrease in total RVUs in the 2013 calendar year. The financial investment by the institution was estimated at $850,000. Qualitative data demonstrated overall positive feedback from trainees and faculty in radiology, although loss of independence was reported as a negative effect. TAT and wRVU metrics changed with implementation of 24/7 attending coverage, although these metrics do not directly relate to patient outcomes. Additional clinical benefits may include fewer discrepancies between preliminary and final reports, which may improve emergency and inpatient department workflows and reduce liability exposure. Radiologists reported the impression that clinicians appreciated 24/7 in-house attending coverage, particularly surgical specialists. Loss of trainee independence on call was a perceived disadvantage of 24/7 attending coverage and raised a concern that residency education outcomes could be adversely affected. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Mapping fractional woody cover in semi-arid savannahs using multi-seasonal composites from Landsat data

    NASA Astrophysics Data System (ADS)

    Higginbottom, Thomas P.; Symeonakis, Elias; Meyer, Hanna; van der Linden, Sebastian

    2018-05-01

    Increasing attention is being directed at mapping the fractional woody cover of savannahs using Earth-observation data. In this study, we test the utility of Landsat TM/ ETM-based spectral-temporal variability metrics for mapping regional-scale woody cover in the Limpopo Province of South Africa, for 2010. We employ a machine learning framework to compare the accuracies of Random Forest models derived using metrics calculated from different seasons. We compare these results to those from fused Landsat-PALSAR data to establish if seasonal metrics can compensate for structural information from the PALSAR signal. Furthermore, we test the applicability of a statistical variable selection method, the recursive feature elimination (RFE), in the automation of the model building process in order to reduce model complexity and processing time. All of our tests were repeated at four scales (30, 60, 90, and 120 m-pixels) to investigate the role of spatial resolution on modelled accuracies. Our results show that multi-seasonal composites combining imagery from both the dry and wet seasons produced the highest accuracies (R2 = 0.77, RMSE = 9.4, at the 120 m scale). When using a single season of observations, dry season imagery performed best (R2 = 0.74, RMSE = 9.9, at the 120 m resolution). Combining Landsat and radar imagery was only marginally beneficial, offering a mean relative improvement of 1% in accuracy at the 120 m scale. However, this improvement was concentrated in areas with lower densities of woody coverage (<30%), which are areas of concern for environmental monitoring. At finer spatial resolutions, the inclusion of SAR data actually reduced accuracies. Overall, the RFE was able to produce the most accurate model (R2 = 0.8, RMSE = 8.9, at the 120 m pixel scale). For mapping savannah woody cover at the 30 m pixel scale, we suggest that monitoring methodologies continue to exploit the Landsat archive, but should aim to use multi-seasonal derived information. When the coarser 120 m pixel scale is adequate, integration of Landsat and SAR data should be considered, especially in areas with lower woody cover densities. The use of multiple seasonal compositing periods offers promise for large-area mapping of savannahs, even in regions with a limited historical Landsat coverage.
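
    The modelling framework described above (Random Forest regression on seasonal spectral-temporal metrics, pruned with recursive feature elimination) can be sketched with scikit-learn as below. The array shapes, feature count, and random data are placeholders standing in for the per-pixel seasonal metrics and woody-cover training fractions; this is not the study's implementation.

```python
# Sketch: Random Forest regression of fractional woody cover on seasonal
# spectral-temporal metrics, with recursive feature elimination (RFE)
# to prune predictors. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((500, 20))              # 20 seasonal metrics per pixel (placeholder)
y = 100 * rng.random(500)              # fractional woody cover in percent (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=8).fit(X_train, y_train)

pred = selector.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```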

  20. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.
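
    A minimal sketch of a proliferating-random-walk coverage process on a periodic 2D grid is given below. It uses a constant proliferation probability rather than the temporally modulated rate analyzed in the paper, so it only illustrates the mechanism (packets duplicate as they move, trading bandwidth for faster coverage), not the paper's optimal schedule; the grid size and probabilities are arbitrary.

```python
# Minimal sketch of coverage by proliferating random walkers on a 2D
# periodic grid. The constant proliferation probability is a simplification
# of the temporally modulated rate described in the abstract.
import random

def coverage_walk(L=50, steps=2000, p_proliferate=0.01, seed=1):
    random.seed(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    walkers = [(0, 0)]
    visited = {(0, 0)}
    bandwidth = 0                      # total walker-steps consumed (message cost B)
    for _ in range(steps):
        new_walkers = []
        for x, y in walkers:
            dx, dy = random.choice(moves)
            x, y = (x + dx) % L, (y + dy) % L
            visited.add((x, y))
            bandwidth += 1
            new_walkers.append((x, y))
            if random.random() < p_proliferate:
                new_walkers.append((x, y))   # packet proliferates
        walkers = new_walkers
    return len(visited) / (L * L), bandwidth

coverage, cost = coverage_walk()
print(f"fraction of grid covered: {coverage:.2f}, walker-steps used: {cost}")
```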

  1. [Effective coverage to manage domestic violence against women in Mexican municipalities: limits of metrics].

    PubMed

    Viviescas-Vargas, Diana P; Idrovo, Alvaro Javier; López-López, Erika; Uicab-Pool, Gloria; Herrera-Trujillo, Mónica; Balam-Gómez, Maricela; Hidalgo-Solórzano, Elisa

    2013-08-01

    The study estimated the effective coverage of primary health care services for the management of domestic violence against women in three municipalities in Mexico. We estimated the prevalence and severity of violence using a validated scale, and calculated effective coverage as proposed by Shengelia and colleagues, with some modifications. Care was considered to be of adequate quality when the provider suggested reporting the violence to the authorities. The use and quality of care were low in the three municipalities analyzed; services were used most frequently when there was sexual or physical violence. Effective coverage was 29.41%, 16.67% and zero in Guachochi, Jojutla and Tizimín, respectively. The effective coverage indicator had difficulties in measuring events and responses that were not based on biomedical models. Findings suggest that the indicator can be improved by incorporating other dimensions of quality.
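
    The effective-coverage construction referred to above can be pictured with a toy decomposition: crude coverage (use given need) multiplied by a quality term (quality care given use). The counts below are invented, and the quality proxy follows the abstract's criterion (a suggestion to report the violence to the authorities); this is a loose simplification of the Shengelia-type indicator, not the study's estimator.

```python
# Toy decomposition of an effective-coverage style indicator for one
# municipality. Counts are invented; "quality" uses the abstract's proxy.
def coverage_indicators(in_need, used_services, received_quality_care):
    crude = used_services / in_need                   # utilization given need
    quality = received_quality_care / used_services   # quality given utilization
    return {"crude_coverage_%": 100 * crude,
            "quality_%": 100 * quality,
            "effective_coverage_%": 100 * crude * quality}

print(coverage_indicators(in_need=120, used_services=40, received_quality_care=20))
```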

  2. Responses of aquatic macrophytes to anthropogenic pressures: comparison between macrophyte metrics and indices.

    PubMed

    Camargo, Julio A

    2018-02-26

    Macrophyte responses to anthropogenic pressures in two rivers of Central Spain were assessed to check if simple metrics can exhibit a greater discriminatory and explanatory power than complex indices at small spatial scales. Field surveys were undertaken during the summer of 2014 (Duraton River) and the spring of 2015 (Tajuña River). Aquatic macrophytes were sampled using a sampling square (45 × 45 cm). In the middle Duraton River, macrophytes responded positively to the presence of a hydropower dam and a small weir, with Myriophyllum spicatum and Potamogeton pectinatus being relatively favored. Index of Macrophytes (IM) was better than Macroscopic Aquatic Vegetation Index (MAVI) and Fluvial Macrophyte Index (FMI) in detecting these responses, showing positive and significant correlations with total coverage, species richness, and species diversity. In the upper Tajuña River, macrophytes responded both negatively and positively to the occurrence of a trout farm effluent and a small weir, with Leptodictyum riparium and Veronica anagallis-aquatica being relatively favored. Although IM, MAVI, and FMI detected both negative and positive responses, correlations of IM with total coverage, species richness, and species diversity were higher. Species evenness was not sensitive enough to detect either positive or negative responses of aquatic macrophytes along the study areas. Overall, traditional and simple metrics (species composition, total coverage, species richness, species diversity) exhibited a greater discriminatory and explanatory power than more recent and complex indices (IM, MAVI, FMI) when assessing responses of aquatic macrophytes to anthropogenic pressures at impacted specific sites.
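
    The simple community metrics compared in this study (total coverage, species richness, Shannon diversity, and evenness) can be computed directly from per-species coverage values recorded in a sampling square, as in the sketch below; the species list and coverage values are illustrative only.

```python
# Sketch of the simple community metrics compared in the study, computed from
# per-species coverage values recorded in a sampling square. Values are illustrative.
import math

def community_metrics(coverage_by_species):
    values = [c for c in coverage_by_species.values() if c > 0]
    total = sum(values)
    proportions = [c / total for c in values]
    richness = len(values)
    shannon = -sum(p * math.log(p) for p in proportions)
    evenness = shannon / math.log(richness) if richness > 1 else 0.0
    return {"total_coverage": total, "richness": richness,
            "shannon_H": shannon, "evenness_J": evenness}

sample = {"Myriophyllum spicatum": 35.0, "Potamogeton pectinatus": 20.0,
          "Veronica anagallis-aquatica": 5.0}
print(community_metrics(sample))
```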

  3. Clearing margin system in the futures markets—Applying the value-at-risk model to Taiwanese data

    NASA Astrophysics Data System (ADS)

    Chiu, Chien-Liang; Chiang, Shu-Mei; Hung, Jui-Cheng; Chen, Yu-Lung

    2006-07-01

    This article investigates whether the TAIFEX has an adequate clearing margin adjustment system, using the unconditional coverage test, the conditional coverage test, and the mean relative scaled bias to assess the performance of three value-at-risk (VaR) models (i.e., the TAIFEX, RiskMetrics and GARCH-t). For the same model, original and absolute returns are compared to explore which more accurately captures the true risk. For the same return, daily and tiered adjustment methods are examined to evaluate which corresponds best to risk. The results indicate that the clearing margin adjustment of the TAIFEX does not reflect true risks. The adjustment rules, including the use of absolute returns and tiered adjustment of the clearing margin, have distorted VaR-based margin requirements. Furthermore, the results suggest that the TAIFEX should use original returns to compute VaR and a daily adjustment system to set the clearing margin. This approach would improve the operational efficiency of funds and the liquidity of the futures markets.
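
    The unconditional coverage test mentioned above is typically the Kupiec likelihood-ratio backtest, which compares the observed frequency of VaR exceedances with the model's nominal coverage level. The sketch below shows that generic test; the observation and exceedance counts are invented and it is not the TAIFEX study's exact procedure.

```python
# Sketch of an unconditional coverage (Kupiec) backtest of a VaR model:
# compare the observed fraction of VaR exceedances with the nominal level.
import math
from scipy.stats import chi2

def kupiec_uc_test(n_obs, n_exceedances, var_level=0.01):
    """Likelihood-ratio statistic and p-value for unconditional coverage."""
    x, T, p = n_exceedances, n_obs, var_level
    pi_hat = x / T
    log_lik_null = (T - x) * math.log(1 - p) + x * math.log(p)
    log_lik_alt = (T - x) * math.log(1 - pi_hat) + x * math.log(pi_hat)
    lr = -2.0 * (log_lik_null - log_lik_alt)
    return lr, 1.0 - chi2.cdf(lr, df=1)

# Example: 500 trading days with 9 exceedances of a 1% VaR.
print(kupiec_uc_test(500, 9, 0.01))
```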

  4. 5 CFR 250.201 - Coverage and purpose.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... responsible for designing a set of systems, including standards and metrics, for assessing the management of human capital by Federal agencies. In this subpart, OPM establishes a framework of those systems, including system components, OPM's role, and agency responsibilities. ...

  5. 5 CFR 250.201 - Coverage and purpose.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... responsible for designing a set of systems, including standards and metrics, for assessing the management of human capital by Federal agencies. In this subpart, OPM establishes a framework of those systems, including system components, OPM's role, and agency responsibilities. ...

  6. 5 CFR 250.201 - Coverage and purpose.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... responsible for designing a set of systems, including standards and metrics, for assessing the management of human capital by Federal agencies. In this subpart, OPM establishes a framework of those systems, including system components, OPM's role, and agency responsibilities. ...

  7. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
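
    One way to picture the probabilistically weighted detection-coverage idea described above is the toy calculation below: each failure mode carries an occurrence probability and a flag for whether any fault-management monitor detects it, and coverage is the detected share of total failure probability. The failure modes and probabilities are invented, and this is not an official metric definition from the paper.

```python
# Toy sketch of a probability-weighted detection-coverage metric.
def detection_coverage(failure_modes):
    """failure_modes: list of (probability_of_occurrence, is_detected)."""
    total = sum(p for p, _ in failure_modes)
    detected = sum(p for p, d in failure_modes if d)
    return detected / total if total else 0.0

modes = [(1e-3, True), (5e-4, True), (2e-4, False), (1e-4, True)]
print(f"probability-weighted detection coverage: {detection_coverage(modes):.3f}")
```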

  8. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  9. 7 CFR 1700.101 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Definitions. 1700.101 Section 1700.101 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE... performance metrics, such as debt service coverage requirements and return on investment, and the general...

  10. 7 CFR 1700.101 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Definitions. 1700.101 Section 1700.101 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE... performance metrics, such as debt service coverage requirements and return on investment, and the general...

  11. Leveraging Paraphrase Labels to Extract Synonyms from Twitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoniak, Maria A.; Bell, Eric B.; Xia, Fei

    2015-05-18

    We present an approach for automatically learning synonyms from a paraphrase corpus of tweets. This work shows improvement on the task of paraphrase detection when we substitute our extracted synonyms into the training set. The synonyms are learned by using chunks from a shallow parse to create candidate synonyms and their context windows, and the synonyms are incorporated into a paraphrase detection system that uses machine translation metrics as features for a classifier. We demonstrate a 2.29% improvement in F1 when we train and test on the paraphrase training set, providing better coverage than previous systems, which shows the potential power of synonyms that are representative of a specific topic.

  12. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.

  13. A Change Impact Analysis to Characterize Evolving Program Behaviors

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua

    2012-01-01

    Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case-study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression testing related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, i.e., finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.
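
    An impacted-coverage style metric of the kind proposed above can be sketched as the fraction of change-impacted program elements that the test suite actually exercises. The element identifiers below are placeholders, not DiSE/iDiSE output.

```python
# Sketch of an "impacted coverage" style metric: of the program elements whose
# behavior is impacted by a change, what fraction is exercised by the tests?
def impacted_coverage(impacted_elements, covered_elements):
    impacted = set(impacted_elements)
    if not impacted:
        return 1.0
    return len(impacted & set(covered_elements)) / len(impacted)

impacted = {"foo:12", "foo:13", "bar:40", "bar:41"}   # branches touched by the change
covered = {"foo:12", "bar:40", "baz:7"}               # branches hit by regression tests
print(f"impacted coverage: {impacted_coverage(impacted, covered):.2f}")
```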

  14. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    PubMed

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in journal coverage and quality. This study presents a comparative analysis of the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by the SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated by each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus, while Web of Science covered 82%. No significant difference was found in the quality of reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
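
    The author-level metric compared between the two databases is the h-index: an author has index h if h of their papers have at least h citations each. A minimal sketch with invented citation counts:

```python
# Sketch of the h-index calculation underlying the database comparison.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 17, 9, 8, 6, 3, 2, 1, 0]))   # -> 5
```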

  15. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U68.5 uncertainties are estimated at the 68.5% confidence level while U95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.
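
    One widely used correlation-plane SNR metric of the kind discussed above is the primary-to-secondary peak ratio, computed after subtracting the minimum of the correlation plane to remove the background offset. The sketch below is a generic illustration with a synthetic plane and an assumed exclusion window around the primary peak; it is not the paper's calibrated uncertainty model.

```python
# Generic sketch of a correlation-plane SNR metric: the primary-to-secondary
# peak ratio, after subtracting the plane minimum (background offset removal).
import numpy as np

def primary_peak_ratio(corr_plane, exclusion=3):
    c = corr_plane - corr_plane.min()                 # minimum-value subtraction
    i, j = np.unravel_index(np.argmax(c), c.shape)    # primary (displacement) peak
    primary = c[i, j]
    masked = c.copy()
    masked[max(0, i - exclusion):i + exclusion + 1,
           max(0, j - exclusion):j + exclusion + 1] = 0.0
    secondary = masked.max()                          # tallest peak outside the window
    return primary / secondary if secondary > 0 else np.inf

rng = np.random.default_rng(3)
plane = rng.random((32, 32)) * 0.2                    # synthetic noise floor
plane[16, 16] = 1.0                                   # synthetic correlation peak
print(f"primary/secondary peak ratio: {primary_peak_ratio(plane):.2f}")
```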

  16. Trade-Space Analysis Tool for Constellations (TAT-C)

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Dabney, Philip; de Weck, Olivier; Foreman, Veronica; Grogan, Paul; Holland, Matthew; Hughes, Steven; Nag, Sreeja

    2016-01-01

    Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a-priori Science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: How many spacecraft should be included in the constellation? Which design has the best cost/risk value? The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variables trade space for pre-defined science, cost and risk goals, and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C including: a User Interface (UI) interacting with multiple users - scientists, mission designers or program managers; an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, then, in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generating multiple potential architectures and their associated characteristics. TAT-C leverages the use of the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. TAT-C's current version includes uniform Walker constellations as well as Ad-Hoc constellations, and its cost model represents an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.

  17. Trade-space Analysis for Constellations

    NASA Astrophysics Data System (ADS)

    Le Moigne, J.; Dabney, P.; de Weck, O. L.; Foreman, V.; Grogan, P.; Holland, M. P.; Hughes, S. P.; Nag, S.

    2016-12-01

    Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a-priori Science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: "How many spacecraft should be included in the constellation? Which design has the best cost/risk value?" The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variables trade space for pre-defined science, cost and risk goals, and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C including: a User Interface (UI) interacting with multiple users - scientists, mission designers or program managers; an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, then, in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generating multiple potential architectures and their associated characteristics. TAT-C leverages the use of the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. TAT-C's current version includes uniform Walker constellations as well as Ad-Hoc constellations, and its cost model represents an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.

  18. Fighter agility metrics, research, and test

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.

    1990-01-01

    Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A completed set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.

  19. Are two systemic fish assemblage sampling programmes on the upper Mississippi River telling us the same thing?

    USGS Publications Warehouse

    Dukerschein, J.T.; Bartels, A.D.; Ickes, B.S.; Pearson, M.S.

    2013-01-01

    We applied an Index of Biotic Integrity (IBI) used on Wisconsin/Minnesota waters of the upper Mississippi River (UMR) to compare data from two systemic sampling programmes. Ability to use data from multiple sampling programmes could extend spatial and temporal coverage of river assessment and monitoring efforts. We normalized for effort and tested fish community data collected by the Environmental Monitoring and Assessment Program-Great Rivers Ecosystems (EMAP-GRE) 2004–2006 and the Long Term Resource Monitoring Program (LTRMP) 1993–2006. Each programme used daytime electrofishing along main channel borders but with some methodological and design differences. EMAP-GRE, designed for baseline and, eventually, compliance monitoring, used a probabilistic, continuous design. LTRMP, designed primarily for baseline and trend monitoring, used a stratified random design in five discrete study reaches. Analysis of similarity indicated no significant difference between EMAP-GRE and LTRMP IBI scores (n=238; Global R= 0.052; significance level=0.972). Both datasets distinguished clear differences only between 'Fair' and 'Poor' condition categories, potentially supporting a 'pass–fail' assessment strategy. Thirteen years of LTRMP data demonstrated stable IBI scores through time in four of five reaches sampled. LTRMP and EMAPGRE IBI scores correlated along the UMR's upstream to downstream gradient (df [3, 25]; F=1.61; p=0.22). A decline in IBI scores from upstream to downstream was consistent with UMR fish community studies and a previous, empirically modelled human disturbance gradient. Comparability between EMAP-GRE (best upstream to downstream coverage) and LTRMP data (best coverage over time and across the floodplain) supports a next step of developing and testing a systemic, multi-metric fish index on the UMR that both approaches could inform.

  20. A Health Economics Approach to US Value Assessment Frameworks-Summary and Recommendations of the ISPOR Special Task Force Report [7].

    PubMed

    Garrison, Louis P; Neumann, Peter J; Willke, Richard J; Basu, Anirban; Danzon, Patricia M; Doshi, Jalpa A; Drummond, Michael F; Lakdawalla, Darius N; Pauly, Mark V; Phelps, Charles E; Ramsey, Scott D; Towse, Adrian; Weinstein, Milton C

    2018-02-01

    This summary section first lists key points from each of the six sections of the report, followed by six key recommendations. The Special Task Force chose to take a health economics approach to the question of whether a health plan should cover and reimburse a specific technology, beginning with the view that the conventional cost-per-quality-adjusted life-year metric has both strengths as a starting point and recognized limitations. This report calls for the development of a more comprehensive economic evaluation that could include novel elements of value (e.g., insurance value and equity) as part of either an "augmented" cost-effectiveness analysis or a multicriteria decision analysis. Given an aggregation of elements to a measure of value, consistent use of a cost-effectiveness threshold can help ensure the maximization of health gain and well-being for a given budget. These decisions can benefit from the use of deliberative processes. The six recommendations are to: 1) be explicit about decision context and perspective in value assessment frameworks; 2) base health plan coverage and reimbursement decisions on an evaluation of the incremental costs and benefits of health care technologies as is provided by cost-effectiveness analysis; 3) develop value thresholds to serve as one important input to help guide coverage and reimbursement decisions; 4) manage budget constraints and affordability on the basis of cost-effectiveness principles; 5) test and consider using structured deliberative processes for health plan coverage and reimbursement decisions; and 6) explore and test novel elements of benefit to improve value measures that reflect the perspectives of both plan members and patients. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
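
    The conventional starting point the task force describes, a cost-per-QALY comparison, reduces to an incremental cost-effectiveness ratio (ICER) checked against a threshold. The sketch below uses invented costs, QALYs, and an assumed $100,000/QALY benchmark purely for illustration; it is not a recommendation from the report.

```python
# Toy ICER calculation compared against an assumed cost-per-QALY threshold.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

threshold = 100_000                      # $ per QALY, an assumed benchmark
ratio = icer(cost_new=120_000, qaly_new=3.2, cost_old=60_000, qaly_old=2.4)
print(f"ICER = ${ratio:,.0f}/QALY -> {'within' if ratio <= threshold else 'above'} threshold")
```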

  1. Quantifying Mapping Orbit Performance in the Vicinity of Primitive Bodies

    NASA Technical Reports Server (NTRS)

    Pavlak, Thomas A.; Broschart, Stephen B.; Lantoine, Gregory

    2015-01-01

    Predicting and quantifying the capability of mapping orbits in the vicinity of primitive bodies is challenging given the complex orbit geometries that exist and the irregular shape of the bodies themselves. This paper employs various quantitative metrics to characterize the performance and relative effectiveness of various types of mapping orbits including terminator, quasi-terminator, hovering, ping pong, and conic-like trajectories. Metrics of interest include surface area coverage, lighting conditions, and the variety of viewing angles achieved. The metrics discussed in this investigation are intended to enable mission designers and project stakeholders to better characterize candidate mapping orbits during preliminary mission formulation activities. The goal of this investigation is to understand the trade space associated with carrying out remote sensing campaigns at small primitive bodies in the context of a robotic space mission. Specifically, this study seeks to understand the surface viewing geometries, ranges, etc. that are available from several commonly proposed mapping orbit architectures.

  2. Quantifying Mapping Orbit Performance in the Vicinity of Primitive Bodies

    NASA Technical Reports Server (NTRS)

    Pavlak, Thomas A.; Broschart, Stephen B.; Lantoine, Gregory

    2015-01-01

    Predicting and quantifying the capability of mapping orbits in the vicinity of primitive bodies is challenging given the complex orbit geometries that exist and the irregular shape of the bodies themselves. This paper employs various quantitative metrics to characterize the performance and relative effectiveness of various types of mapping orbits including terminator, quasi-terminator, hovering, ping pong, and conic-like trajectories. Metrics of interest include surface area coverage, lighting conditions, and the variety of viewing angles achieved. The metrics discussed in this investigation are intended to enable mission designers and project stakeholders to better characterize candidate mapping orbits during preliminary mission formulation activities. The goal of this investigation is to understand the trade space associated with carrying out remote sensing campaigns at small primitive bodies in the context of a robotic space mission. Specifically, this study seeks to understand the surface viewing geometries, ranges, etc. that are available from several commonly proposed mapping orbit architectures.

  3. Measuring and Specifying Combinatorial Coverage of Test Input Configurations

    PubMed Central

    Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu

    2015-01-01

    A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
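
    The combinatorial (t-way) coverage idea can be made concrete for t = 2: measure the fraction of all parameter-value pairs that appear in at least one test. The parameters and tests in the sketch below are toy examples, not the covering arrays discussed in the paper.

```python
# Sketch of measuring 2-way (pairwise) combinatorial coverage: the fraction
# of all parameter-value pairs that appear in at least one test.
from itertools import combinations, product

def pairwise_coverage(parameters, tests):
    """parameters: {name: [values]}; tests: list of {name: value} assignments."""
    required = set()
    for (p1, v1s), (p2, v2s) in combinations(sorted(parameters.items()), 2):
        required |= {((p1, a), (p2, b)) for a, b in product(v1s, v2s)}
    covered = set()
    for t in tests:
        for p1, p2 in combinations(sorted(t), 2):
            covered.add(((p1, t[p1]), (p2, t[p2])))
    return len(covered & required) / len(required)

params = {"os": ["linux", "win"], "db": ["pg", "mysql"], "tls": [True, False]}
tests = [{"os": "linux", "db": "pg", "tls": True},
         {"os": "win", "db": "mysql", "tls": False},
         {"os": "linux", "db": "mysql", "tls": True}]
print(f"pairwise coverage: {pairwise_coverage(params, tests):.2f}")  # 8 of 12 pairs
```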

  4. A dynamical systems approach to studying midlatitude weather extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Caballero, Rodrigo; Faranda, Davide

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. The ability to predict these events is therefore a topic of crucial importance. Here we propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We show that simple dynamical systems metrics can be used to identify sets of large-scale atmospheric flow patterns with similar spatial structure and temporal evolution on time scales of several days to a week. In regions where these patterns favor extreme weather, they afford a particularly good predictability of the extremes. We specifically test this technique on the atmospheric circulation in the North Atlantic region, where it provides predictability of large-scale wintertime surface temperature extremes in Europe up to 1 week in advance.

  5. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 × 5 or 18 × 3). Dose optimization was accomplished with 6-MV beams using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung − GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans compared to AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may very well be warranted.
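
    The normal-lung and conformity metrics compared above (lung V20, V5, and a PITV-style ratio) can be computed from a dose grid and structure masks as in the toy sketch below; the dose array and masks are random placeholders, not treatment-planning-system data.

```python
# Toy sketch of lung V20/V5 and a PITV-style conformity ratio from voxel arrays.
import numpy as np

def v_dose(dose_gy, structure_mask, threshold_gy):
    """Percent of the structure's voxels receiving >= threshold_gy."""
    doses = dose_gy[structure_mask]
    return 100.0 * np.count_nonzero(doses >= threshold_gy) / doses.size

def pitv(dose_gy, ptv_mask, prescription_gy):
    """Prescription isodose volume / PTV volume (conformity ratio)."""
    return np.count_nonzero(dose_gy >= prescription_gy) / np.count_nonzero(ptv_mask)

rng = np.random.default_rng(7)
dose = rng.uniform(0, 55, size=(40, 40, 40))          # toy dose grid (Gy)
lung = rng.random(dose.shape) < 0.30                  # toy (total lung - GTV) mask
ptv = np.zeros(dose.shape, dtype=bool)
ptv[18:22, 18:22, 18:22] = True                       # toy PTV mask

print("lung V20 (%):", round(v_dose(dose, lung, 20.0), 1))
print("lung V5  (%):", round(v_dose(dose, lung, 5.0), 1))
print("PITV ratio  :", round(pitv(dose, ptv, 50.0), 2))
```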

  6. Predicting binding poses and affinities for protein - ligand complexes in the 2015 D3R Grand Challenge using a physical model with a statistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric

    2016-09-01

    The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol to predict binding poses for protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed and equal to 2-35, depending on the assessment metric, which does not provide any insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms, a pairwise-additive enthalpy, and a non pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set, however, failed on the two test sets. We explain the drawback and pitfalls of our model, in particular in terms of relative coverage of the test set by the training set and missed dynamical properties from crystal structures, and discuss different routes to improve it.

  7. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  8. Insurance Coverage Policies for Pharmacogenomic and Multi-Gene Testing for Cancer.

    PubMed

    Lu, Christine Y; Loomer, Stephanie; Ceccarelli, Rachel; Mazor, Kathleen M; Sabin, James; Clayton, Ellen Wright; Ginsburg, Geoffrey S; Wu, Ann Chen

    2018-05-16

    Insurance coverage policies are a major determinant of patient access to genomic tests. The objective of this study was to examine differences in coverage policies for guideline-recommended pharmacogenomic tests that inform cancer treatment. We analyzed coverage policies from eight Medicare contractors and 10 private payers for 23 biomarkers (e.g., HER2 and EGFR) and multi-gene tests. We extracted policy coverage and criteria, prior authorization requirements, and an evidence basis for coverage. We reviewed professional society guidelines and their recommendations for use of pharmacogenomic tests. Coverage for KRAS, EGFR, and BRAF tests was common across Medicare contractors and private payers, but few policies covered PML/RARA, CD25, or G6PD. Thirteen payers cover multi-gene tests for non-small cell lung cancer, citing emerging clinical recommendations. Coverage policies for single and multi-gene tests for cancer treatments are consistent among Medicare contractors despite the lack of national coverage determinations. In contrast, coverage for these tests varied across private payers. Patient access to tests is governed by prior authorization among eight private payers. Substantial variations in how payers address guideline-recommended pharmacogenomic tests and the common use of prior authorization underscore the need for additional studies of the effects of coverage variation on cancer care and patient outcomes.

  9. Evaluation of nine popular de novo assemblers in microbial genome assembly.

    PubMed

    Forouzan, Esmaeil; Maleki, Masoumeh Sadat Mousavi; Karkhane, Ali Asghar; Yakhchali, Bagher

    2017-12-01

    Next generation sequencing (NGS) technologies are revolutionizing biology, with Illumina being the most popular NGS platform. Short read assembly is a critical part of most genome studies using NGS. Hence, in this study, the performance of nine well-known assemblers was evaluated in the assembly of seven different microbial genomes. Effect of different read coverage and k-mer parameters on the quality of the assembly were also evaluated on both simulated and actual read datasets. Our results show that the performance of assemblers on real and simulated datasets could be significantly different, mainly because of coverage bias. According to outputs on actual read datasets, for all studied read coverages (of 7×, 25× and 100×), SPAdes and IDBA-UD clearly outperformed other assemblers based on NGA50 and accuracy metrics. Velvet is the most conservative assembler with the lowest NGA50 and error rate. Copyright © 2017. Published by Elsevier B.V.
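
    The contiguity metric family referenced above can be illustrated with a short sketch: N50 is the contig length at which half of the total assembly length is contained in contigs of that length or longer, and NGA50 is the analogous value computed against reference alignments. The contig lengths below are invented.

```python
# Sketch of the N50 contiguity metric (NGA50 applies the same idea to
# reference-aligned blocks). Contig lengths are invented.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

print(n50([1_200_000, 800_000, 500_000, 120_000, 60_000, 20_000]))  # -> 800000
```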

  10. Directional Bias and Pheromone for Discovery and Coverage on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Berenhaut, Kenneth S.; Oehmen, Christopher S.

    2012-09-11

    Natural multi-agent systems often rely on “correlated random walks” (random walks that are biased toward a current heading) to distribute their agents over a space (e.g., for foraging, search, etc.). Our contribution involves creation of a new movement and pheromone model that applies the concept of heading bias in random walks to a multi-agent, digital-ants system designed for cyber-security monitoring. We examine the relative performance effects of both pheromone and heading bias on speed of discovery of a target and search-area coverage in a two-dimensional network layout. We found that heading bias was unexpectedly helpful in reducing search time and that it was more influential than pheromone for improving coverage. We conclude that while pheromone is very important for rapid discovery, heading bias can also greatly improve both performance metrics.

  11. Determination of a Screening Metric for High Diversity DNA Libraries.

    PubMed

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space; and therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
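
    The coupon-collector reasoning mentioned above can be sketched for the idealized case of an equally represented library: the expected number of clones to screen so that a chosen fraction of n variants has been seen at least once. This simplification ignores fidelity and skewed representation, which the library-specific metric described in the abstract folds in; the library size and target fractions below are arbitrary.

```python
# Coupon-collector sketch: expected screens to observe a target fraction of an
# n-variant library, assuming all variants are equally represented.
import math

def expected_screens(n_variants, target_fraction):
    k = math.ceil(target_fraction * n_variants)
    # E[draws to collect k distinct of n] = n * (H_n - H_{n-k})
    return n_variants * sum(1.0 / i for i in range(n_variants - k + 1, n_variants + 1))

n = 10_000
for f in (0.50, 0.90, 0.95, 0.99):
    print(f"{f:.0%} of {n} variants: ~{expected_screens(n, f):,.0f} clones")
```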

  12. Orion Flight Performance Design Trades

    NASA Technical Reports Server (NTRS)

    Jackson, Mark C.; Straube, Timothy

    2010-01-01

    A significant portion of the Orion pre-PDR design effort has focused on balancing mass with performance. High level performance metrics include abort success rates, lunar surface coverage, landing accuracy and touchdown loads. These metrics may be converted to parameters that affect mass, such as ballast for stabilizing the abort vehicle, propellant to achieve increased lunar coverage or extended missions, or ballast to increase the lift-to-drag ratio to improve entry and landing performance. The Orion Flight Dynamics team was tasked to perform analyses to evaluate many of these trades. These analyses not only provide insight into the physics of each particular trade but, in aggregate, they illustrate the processes used by Orion to balance performance and mass margins, and thereby make design decisions. Lessons learned can be gleaned from a review of these studies which will be useful to other spacecraft system designers. These lessons fall into several categories, including: appropriate application of Monte Carlo analysis in design trades, managing margin in a highly mass-constrained environment, and the use of requirements to balance margin between subsystems and components. This paper provides a review of some of the trades and analyses conducted by the Flight Dynamics team, as well as systems engineering lessons learned.

  13. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troia, Matthew J.; McManamay, Ryan A.

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.

  14. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE PAGES

    Troia, Matthew J.; McManamay, Ryan A.

    2016-06-12

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.

  15. Improving draft genome contiguity with reference-derived in silico mate-pair libraries.

    PubMed

    Grau, José Horacio; Hackl, Thomas; Koepfli, Klaus-Peter; Hofreiter, Michael

    2018-05-01

    Contiguous genome assemblies are a highly valued biological resource because of the higher number of usable, completely annotated genes and genomic elements compared to fragmented draft genomes. Nonetheless, contiguity is difficult to obtain if only low coverage data and/or only distantly related reference genome assemblies are available. In order to improve genome contiguity, we have developed Cross-Species Scaffolding, a new pipeline that imports long-range distance information directly into the de novo assembly process by constructing mate-pair libraries in silico. We show how genome assembly metrics and gene prediction dramatically improve with our pipeline by assembling two primate genomes solely based on ∼30x coverage of shotgun sequencing data.

  16. Multiscale Drivers of Global Environmental Health

    NASA Astrophysics Data System (ADS)

    Desai, Manish Anil

    In this dissertation, I motivate, develop, and demonstrate three such approaches for investigating multiscale drivers of global environmental health: (1) a metric for analyzing contributions and responses to climate change from global to sectoral scales, (2) a framework for unraveling the influence of environmental change on infectious diseases at regional to local scales, and (3) a model for informing the design and evaluation of clean cooking interventions at community to household scales. The full utility of climate debt as an analytical perspective will remain untapped without tools that can be manipulated by a wide range of analysts, including global environmental health researchers. Chapter 2 explains how international natural debt (IND) apportions global radiative forcing from fossil fuel carbon dioxide and methane, the two most significant climate-altering pollutants, to individual entities -- primarily countries but also subnational states and economic sectors, with even finer scales possible -- as a function of unique trajectories of historical emissions, taking into account the quite different radiative efficiencies and atmospheric lifetimes of each pollutant. Owing to its straightforward and transparent derivation, IND can readily operationalize climate debt to consider issues of equity and efficiency and drive scenario exercises that explore the response to climate change at multiple scales. Collectively, the analyses presented in this chapter demonstrate how IND can inform a range of key questions on climate change mitigation at multiple scales, compelling environmental health towards an appraisal of the causes and not just the consequences of climate change. The environmental change and infectious disease (EnvID) conceptual framework of Chapter 3 builds on a rich history of prior efforts in epidemiologic theory, environmental science, and mathematical modeling by: (1) articulating a flexible and logical system specification; (2) incorporating transmission groupings linked to public health intervention strategies; (3) emphasizing the intersection of proximal environmental characteristics and transmission cycles; (4) incorporating a matrix formulation to identify knowledge gaps and facilitate an integration of research; and (5) highlighting hypothesis generation amidst dynamic processes. A systems-based approach leverages the reality that studies relevant to environmental change and infectious disease are embedded within a wider web of interactions. As scientific understanding advances, the EnvID framework can help integrate the various factors at play in determining environment-disease relationships and the connections between intrinsically multiscale causal networks. In Chapter 4, the coverage effect model functions primarily as a "proof of concept" analysis to address whether the efficacy of a clean cooking technology may be determined by the extent of not only household level use but also community level coverage. Such coverage-dependent efficacy, or a "coverage effect," would transform how interventions are studied and deployed. Ensemble results are consistent with the concept that an appreciable coverage effect from clean cooking interventions can manifest within moderately dense communities. Benefits for users derive largely from direct effects; initially, at low coverage levels, almost exclusively so. Yet, as coverage expands within a user's community, a coverage effect becomes increasingly beneficial.
In contrast, non-users, despite also experiencing comparable exposure reductions from community-level intervention use, cannot proportionately benefit because their exposures remain overwhelmingly dominated by household-level use of traditional solid fuel cookstoves. The coverage effect model strengthens the rationale for public health programs and policies to encourage clean cooking technologies with an added incentive to realize high coverage within contiguous areas. The implications of the modeling exercise extend to priorities for data collection, underscoring the importance of outdoor pollution concentrations during, as well as before and/or after, community cooking windows and also routine measurement of ventilation, meteorology, time activity patterns, and cooking practices. The possibility of a coverage effect necessitates appropriate strategies to estimate not only direct effects but also coverage and total effects to avoid impaired conclusions. The specter of accelerating social and ecological change challenges efforts to respond to climate change, re/emerging infectious diseases, and household air pollution. Environmental health possesses a well-established and well-tested repertoire of methods but contending with multiscale drivers of risk requires complementary approaches, as well. Integrating metrics, frameworks, and models -- and their insights -- into its analytical arsenal can help global environmental health meet the challenges of today and tomorrow. (Abstract shortened by ProQuest.)

  17. Automated discovery of local search heuristics for satisfiability testing.

    PubMed

    Fukunaga, Alex S

    2008-01-01

    The development of successful metaheuristic algorithms such as local search for a difficult problem such as satisfiability testing (SAT) is a challenging task. We investigate an evolutionary approach to automating the discovery of new local search heuristics for SAT. We show that several well-known SAT local search algorithms such as Walksat and Novelty are composite heuristics that are derived from novel combinations of a set of building blocks. Based on this observation, we developed CLASS, a genetic programming system that uses a simple composition operator to automatically discover SAT local search heuristics. New heuristics discovered by CLASS are shown to be competitive with the best Walksat variants, including Novelty+. Evolutionary algorithms have previously been applied to directly evolve a solution for a particular SAT instance. We show that the heuristics discovered by CLASS are also competitive with these previous, direct evolutionary approaches for SAT. We also analyze the local search behavior of the learned heuristics using the depth, mobility, and coverage metrics proposed by Schuurmans and Southey.

  18. Do altmetrics work? Twitter and ten other social web services.

    PubMed

    Thelwall, Mike; Haustein, Stefanie; Larivière, Vincent; Sugimoto, Cassidy R

    2013-01-01

    Altmetric measurements derived from the social web are increasingly advocated and used as early indicators of article impact and usefulness. Nevertheless, there is a lack of systematic scientific evidence that altmetrics are valid proxies of either impact or utility although a few case studies have reported medium correlations between specific altmetrics and citation rates for individual journals or fields. To fill this gap, this study compares 11 altmetrics with Web of Science citations for 76 to 208,739 PubMed articles with at least one altmetric mention in each case and up to 1,891 journals per metric. It also introduces a simple sign test to overcome biases caused by different citation and usage windows. Statistically significant associations were found between higher metric scores and higher citations for articles with positive altmetric scores in all cases with sufficient evidence (Twitter, Facebook wall posts, research highlights, blogs, mainstream media and forums) except perhaps for Google+ posts. Evidence was insufficient for LinkedIn, Pinterest, question and answer sites, and Reddit, and no conclusions should be drawn about articles with zero altmetric scores or the strength of any correlation between altmetrics and citations. Nevertheless, comparisons between citations and metric values for articles published at different times, even within the same year, can remove or reverse this association and so publishers and scientometricians should consider the effect of time when using altmetrics to rank articles. Finally, the coverage of all the altmetrics except for Twitter seems to be low and so it is not clear if they are prevalent enough to be useful in practice.
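    As a concrete illustration of the sign-test idea mentioned above, the following Python sketch pairs articles matched on publication date, counts how often the article with the higher altmetric score also has more citations, and tests the win rate against 50% with a binomial test (scipy.stats.binomtest, available in SciPy 1.7+). The pairing scheme and the toy numbers are assumptions for illustration, not the exact procedure used in the study.

      from scipy.stats import binomtest

      def sign_test(pairs):
          """pairs: list of ((altmetric_a, citations_a), (altmetric_b, citations_b))."""
          wins, n = 0, 0
          for (alt_a, cit_a), (alt_b, cit_b) in pairs:
              if alt_a == alt_b or cit_a == cit_b:
                  continue  # ties carry no sign information
              wins += int((alt_a > alt_b) == (cit_a > cit_b))
              n += 1
          result = binomtest(wins, n, p=0.5, alternative="greater")
          return wins, n, result.pvalue

      pairs = [((12, 30), (2, 8)), ((5, 4), (9, 20)), ((7, 15), (1, 3)), ((3, 2), (10, 25))]
      print(sign_test(pairs))  # e.g. (4, 4, 0.0625)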

  19. Do Altmetrics Work? Twitter and Ten Other Social Web Services

    PubMed Central

    Thelwall, Mike; Haustein, Stefanie; Larivière, Vincent; Sugimoto, Cassidy R.

    2013-01-01

    Altmetric measurements derived from the social web are increasingly advocated and used as early indicators of article impact and usefulness. Nevertheless, there is a lack of systematic scientific evidence that altmetrics are valid proxies of either impact or utility although a few case studies have reported medium correlations between specific altmetrics and citation rates for individual journals or fields. To fill this gap, this study compares 11 altmetrics with Web of Science citations for 76 to 208,739 PubMed articles with at least one altmetric mention in each case and up to 1,891 journals per metric. It also introduces a simple sign test to overcome biases caused by different citation and usage windows. Statistically significant associations were found between higher metric scores and higher citations for articles with positive altmetric scores in all cases with sufficient evidence (Twitter, Facebook wall posts, research highlights, blogs, mainstream media and forums) except perhaps for Google+ posts. Evidence was insufficient for LinkedIn, Pinterest, question and answer sites, and Reddit, and no conclusions should be drawn about articles with zero altmetric scores or the strength of any correlation between altmetrics and citations. Nevertheless, comparisons between citations and metric values for articles published at different times, even within the same year, can remove or reverse this association and so publishers and scientometricians should consider the effect of time when using altmetrics to rank articles. Finally, the coverage of all the altmetrics except for Twitter seems to be low and so it is not clear if they are prevalent enough to be useful in practice. PMID:23724101

  20. Hydrologic response to stormwater control measures in urban watersheds

    NASA Astrophysics Data System (ADS)

    Bell, Colin D.; McMillan, Sara K.; Clinton, Sandra M.; Jefferson, Anne J.

    2016-10-01

    Stormwater control measures (SCMs) are designed to mitigate deleterious effects of urbanization on river networks, but our ability to predict the cumulative effect of multiple SCMs at watershed scales is limited. The most widely used metric to quantify impacts of urban development, total imperviousness (TI), does not contain information about the extent of stormwater control. We analyzed the discharge records of 16 urban watersheds in Charlotte, NC spanning a range of TI (4.1-54%) and area mitigated with SCMs (1.3-89%). We then tested multiple watershed metrics that quantify the degree of urban impact and SCM mitigation to determine which best predicted hydrologic response across sites. At the event time scale, linear models showed TI to be the best predictor of both peak unit discharge and rainfall-runoff ratios across a range of storm sizes. TI was also a strong driver of both a watershed's capacity to buffer small (e.g., 1-10 mm) rain events, and the relationship between peak discharge and precipitation once that buffering capacity is exceeded. Metrics containing information about SCMs did not appear as primary predictors of event hydrologic response, suggesting that the level of SCM mitigation in many urban watersheds is insufficient to influence hydrologic response. Over annual timescales, impervious surfaces unmitigated by SCMs and tree coverage were best correlated with streamflow flashiness and water yield, respectively. The shift in controls from the event scale to the annual scale has important implications for water resource management, suggesting that overall limitation of watershed imperviousness rather than partial mitigation by SCMs may be necessary to alleviate the hydrologic impacts of urbanization.

  1. Theoretical Benefits of Dynamic Collimation in Pencil Beam Scanning Proton Therapy for Brain Tumors: Dosimetric and Radiobiological Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, Alexandra, E-mail: alexandra-moignier@uiowa.edu; Gelover, Edgar; Wang, Dongxu

    Purpose: To quantify the dosimetric benefit of using a dynamic collimation system (DCS) for penumbra reduction during the treatment of brain tumors by pencil beam scanning proton therapy (PBS PT). Methods and Materials: Collimated and uncollimated brain treatment plans were created for 5 patients previously treated with PBS PT and retrospectively enrolled in an institutional review board–approved study. The in-house treatment planning system, RDX, was used to generate the plans because it is capable of modeling both collimated and uncollimated beamlets. The clinically delivered plans were reproduced with uncollimated plans in terms of target coverage and organ at risk (OAR) sparing to ensure a clinically relevant starting point, and collimated plans were generated to improve the OAR sparing while maintaining target coverage. Physical and biological comparison metrics, such as dose distribution conformity, mean and maximum doses, normal tissue complication probability, and risk of secondary brain cancer, were used to evaluate the plans. Results: The DCS systematically improved the dose distribution conformity while preserving the target coverage. The average reductions of the mean dose to the 10-mm ring surrounding the target and to the healthy brain were 13.7% (95% confidence interval [CI] 11.6%-15.7%; P<.0001) and 25.1% (95% CI 16.8%-33.4%; P<.001), respectively. This yielded an average reduction of 24.8% (95% CI 0.8%-48.8%; P<.05) for the brain necrosis normal tissue complication probability using the Flickinger model, and 25.1% (95% CI 16.8%-33.4%; P<.001) for the risk of secondary brain cancer. A general improvement of the OAR sparing was also observed. Conclusion: The lateral penumbra reduction afforded by the DCS increases the normal tissue sparing capabilities of PBS PT for brain cancer treatment while preserving target coverage.

  2. Anatomic tibial component design can increase tibial coverage and rotational alignment accuracy: a comparison of six contemporary designs.

    PubMed

    Dai, Yifei; Scuderi, Giles R; Bischoff, Jeffrey E; Bertin, Kim; Tarabichi, Samih; Rajgopal, Ashok

    2014-12-01

    The aim of this study was to comprehensively evaluate contemporary tibial component designs against global tibial anatomy. We hypothesized that anatomically designed tibial components offer increased morphological fit to the resected proximal tibia with increased alignment accuracy compared to symmetric and asymmetric designs. Using a multi-ethnic bone dataset, six contemporary tibial component designs were investigated, including anatomic, asymmetric, and symmetric design types. Investigations included (1) measurement of component conformity to the resected tibia using a comprehensive set of size and shape metrics; (2) assessment of component coverage on the resected tibia while ensuring clinically acceptable levels of rotation and overhang; and (3) evaluation of the incidence and severity of component downsizing due to adherence to rotational alignment and overhang requirements, and the associated compromise in tibial coverage. Differences in coverage were statistically compared across designs and ethnicities, as well as between placements with or without enforcement of proper rotational alignment. Compared to non-anatomic designs investigated, the anatomic design exhibited better conformity to resected tibial morphology in size and shape, higher tibial coverage (92% compared to 85-87%), more cortical support (posteromedial region), lower incidence of downsizing (3% compared to 39-60%), and less compromise of tibial coverage (0.5% compared to 4-6%) when enforcing proper rotational alignment. The anatomic design demonstrated a meaningful increase in tibial coverage with accurate rotational alignment compared to symmetric and asymmetric designs, suggesting its potential for fewer intra-operative compromises and improved performance. Level of evidence: III.

  3. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.

  4. Coverage criteria for test case generation using UML state chart diagram

    NASA Astrophysics Data System (ADS)

    Salman, Yasir Dawood; Hashim, Nor Laily; Rejab, Mawarny Md; Romli, Rohaida; Mohd, Haslina

    2017-10-01

    To improve the effectiveness of test data generation during software testing, many studies have focused on the automation of test data generation from UML diagrams. One of these diagrams is the UML state chart diagram. Test cases are generally evaluated according to coverage criteria. However, combinations of multiple criteria are required to achieve better coverage. Different studies have used various numbers and types of coverage criteria in their methods and approaches. The objective of this paper is to propose suitable coverage criteria for test case generation using the UML state chart diagram, especially in handling loops. To achieve this objective, this work reviewed previous studies to identify the most practical coverage criteria combinations, including all-states, all-transitions, all-transition-pairs, and all-loop-free-paths coverage. Calculations to determine the coverage percentage of the proposed coverage criteria are presented, together with an example of how they are applied to a UML state chart diagram. This finding would be beneficial in the area of test case generation, especially in handling loops in UML state chart diagrams.
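    The coverage-percentage bookkeeping described above can be sketched in a few lines of Python, assuming a test case is represented by the sequence of transitions it exercises; the toy state chart, event names, and test path below are invented for illustration.

      def coverage_percentages(states, transitions, test_paths):
          """transitions: set of (src, event, dst); test_paths: list of transition lists."""
          visited_transitions = {t for path in test_paths for t in path}
          visited_states = {s for (src, _, dst) in visited_transitions for s in (src, dst)}
          all_states_cov = 100.0 * len(visited_states & states) / len(states)
          all_transitions_cov = 100.0 * len(visited_transitions & transitions) / len(transitions)
          return all_states_cov, all_transitions_cov

      states = {"Idle", "Running", "Paused", "Done"}
      transitions = {("Idle", "start", "Running"), ("Running", "pause", "Paused"),
                     ("Paused", "resume", "Running"), ("Running", "finish", "Done")}
      test_paths = [[("Idle", "start", "Running"), ("Running", "finish", "Done")]]
      print(coverage_percentages(states, transitions, test_paths))  # (75.0, 50.0)

    Analogous counters over transition pairs or loop-free paths would extend the same bookkeeping to the all-transition-pairs and all-loop-free-paths criteria.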

  5. Multidimensional metrics for estimating phage abundance, distribution, gene density, and sequence coverage in metagenomes

    PubMed Central

    Aziz, Ramy K.; Dwivedi, Bhakti; Akhter, Sajia; Breitbart, Mya; Edwards, Robert A.

    2015-01-01

    Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution. PMID:26005436

  6. Multidimensional metrics for estimating phage abundance, distribution, gene density, and sequence coverage in metagenomes

    DOE PAGES

    Aziz, Ramy K.; Dwivedi, Bhakti; Akhter, Sajia; ...

    2015-05-08

    Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution.

  7. Multidimensional metrics for estimating phage abundance, distribution, gene density, and sequence coverage in metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Ramy K.; Dwivedi, Bhakti; Akhter, Sajia

    Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution.

  8. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur on improvements. Here, we introduce such a measure - the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and, in the future, standards compliance into a quantitative and objective framework.
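    A minimal Python sketch of the MCI as defined above (the percentage of available fields actually filled in a record) is given below; the field names and the notion of "filled" (non-empty, non-None, not a placeholder such as "missing") are assumptions for illustration.

      def mci(record, available_fields):
          """record: dict of field -> value; returns percent of available fields filled."""
          filled = sum(1 for f in available_fields
                       if record.get(f) not in (None, "", [], "missing"))
          return 100.0 * filled / len(available_fields)

      available_fields = ["organism", "isolation_source", "collection_date",
                          "geo_loc_name", "sequencing_method"]
      record = {"organism": "Escherichia coli", "collection_date": "2011-05-02",
                "geo_loc_name": "", "sequencing_method": "Illumina"}
      print(f"MCI = {mci(record, available_fields):.0f}%")  # 60%

    Averaging per-record scores across a database, or restricting available_fields to a standard's required fields, gives the database-level and compliance-oriented variants mentioned above.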

  9. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e., the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
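    One of the simpler ranking metrics named above, the absolute value of the Signal-To-Noise ratio (difference of group means divided by the sum of group standard deviations), can be sketched as follows; the toy expression matrix is invented, and this is not the MrGSEA implementation.

      import numpy as np

      def abs_signal_to_noise(expr, group_a, group_b, eps=1e-8):
          """expr: genes x samples array; group_a/group_b: lists of column indices."""
          a, b = expr[:, group_a], expr[:, group_b]
          snr = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1) + eps)
          return np.abs(snr)

      rng = np.random.default_rng(0)
      expr = rng.normal(size=(5, 6))                 # 5 genes, 6 samples
      scores = abs_signal_to_noise(expr, group_a=[0, 1, 2], group_b=[3, 4, 5])
      ranking = np.argsort(scores)[::-1]             # gene indices, most discriminative first
      print(scores, ranking)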

  10. Quantification of interplay and gradient effects for lung stereotactic ablative radiotherapy (SABR) treatments.

    PubMed

    Tyler, Madelaine K

    2016-01-08

    This study quantified the interplay and gradient effects on GTV dose coverage for 3D CRT, dMLC IMRT, and VMAT SABR treatments for target amplitudes of 5-30 mm using 3DVH v3.1 software incorporating 4D Respiratory MotionSim (4D RMS) module. For clinically relevant motion periods (5 s), the interplay effect was small, with deviations in the minimum dose covering the target volume (D99%) of less than ± 2.5% for target amplitudes up to 30 mm. Increasing the period to 60 s resulted in interplay effects of up to ± 15.0% on target D99% dose coverage. The gradient effect introduced by target motion resulted in deviations of up to ± 3.5% in D99% target dose coverage. VMAT treatments showed the largest deviation in dose metrics, which was attributed to the long delivery times in comparison to dMLC IMRT. Retrospective patient analysis indicated minimal interplay and gradient effects for patients treated with dMLC IMRT at the NCCI.
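    For readers unfamiliar with the D99% notation used above, the sketch below treats it as the near-minimum dose received by 99% of the target volume, computed here as the 1st percentile of voxel doses inside the GTV; the percentile-based shortcut and the toy dose arrays are assumptions for illustration, not the 3DVH calculation.

      import numpy as np

      def d99(doses_gy):
          """doses_gy: 1-D array of voxel doses (Gy) inside the target volume."""
          return np.percentile(doses_gy, 1.0)

      def percent_deviation(moving, static):
          return 100.0 * (d99(moving) - d99(static)) / d99(static)

      rng = np.random.default_rng(1)
      static_plan = rng.normal(54.0, 0.8, size=10_000)                 # toy GTV voxel doses
      moving_plan = static_plan + rng.normal(0.0, 0.4, size=10_000)    # motion-perturbed doses
      print(f"D99% deviation: {percent_deviation(moving_plan, static_plan):+.2f}%")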

  11. A novel approach to quantifying the spatiotemporal behavior of instrumented grey seals used to sample the environment.

    PubMed

    Baker, Laurie L; Mills Flemming, Joanna E; Jonsen, Ian D; Lidgard, Damian C; Iverson, Sara J; Bowen, W Don

    2015-01-01

    Paired with satellite location telemetry, animal-borne instruments can collect spatiotemporal data describing the animal's movement and environment at a scale relevant to its behavior. Ecologists have developed methods for identifying the area(s) used by an animal (e.g., home range) and those used most intensely (utilization distribution) based on location data. However, few have extended these models beyond their traditional roles as descriptive 2D summaries of point data. Here we demonstrate how the home range method, T-LoCoH, can be expanded to quantify collective sampling coverage by multiple instrumented animals using grey seals (Halichoerus grypus) equipped with GPS tags and acoustic transceivers on the Scotian Shelf (Atlantic Canada) as a case study. At the individual level, we illustrate how time and space-use metrics quantifying individual sampling coverage may be used to determine the rate of acoustic transmissions received. Grey seals collectively sampled an area of 11,308 km² and intensely sampled an area of 31 km² from June-December. The largest area sampled was in July (2094.56 km²) and the smallest area sampled occurred in August (1259.80 km²), with changes in sampling coverage observed through time. T-LoCoH provides an effective means to quantify changes in collective sampling effort by multiple instrumented animals and to compare these changes across time. We also illustrate how time and space-use metrics of individual instrumented seal movement calculated using T-LoCoH can be used to account for differences in the amount of time a bioprobe (biological sampling platform) spends in an area.

  12. Wide coverage by volume CT: benefits for cardiac imaging

    NASA Astrophysics Data System (ADS)

    Sablayrolles, Jean-Louis; Cesmeli, Erdogan; Mintandjian, Laura; Adda, Olivier; Dessalles-Martin, Diane

    2005-04-01

    With the development of new technologies, computed tomography (CT) is becoming a strong candidate as a non-invasive imaging-based tool for cardiac disease assessment. One of the challenges of cardiac CT is that a typical scan involves a breath hold period consisting of several heartbeats, about 20 sec with scanners having a longitudinal coverage of 2 cm, causing the image quality (IQ) to be negatively impacted since beat-to-beat variation is highly likely to occur without any medication, e.g. beta blockers. Because of this and the preference for shorter breath hold durations, a CT scanner with wide coverage and no compromise in spatial and temporal resolution is of great clinical value. In this study, we aimed at determining the optimum scan duration and the delay relative to the beginning of breath hold, to achieve high IQ. We acquired EKG data from 91 consecutive patients (77 M, 14 F; Age: 57 +/- 14) undergoing cardiac CT exams with contrast, performed on LightSpeed 16 and LightSpeed Pro16. As an IQ metric, we adopted the standard deviation of "beat-to-beat variation" (stdBBV) within a virtual scan period. Two radiologists evaluated images by assigning a score of 1 (worst) to 4 (best). We validated stdBBV with the radiologist scores, which resulted in a population distribution of 9.5, 9.5, 31, and 50% for the score groups 1, 2, 3, and 4, respectively. Based on the scores, we defined a threshold for stdBBV and identified an optimum combination of virtual scan period and delay. With the assumption that the relationship between the stdBBV and diagnosable scan IQ holds, our analysis suggested that the success rate can be improved to 100% with scan durations equal to or less than 5 sec and a delay of 1-2 sec. We confirmed the suggested conclusion with the LightSpeed VCT (GE Healthcare Technologies, Waukesha, WI), which has a wide longitudinal coverage, fine isotropic spatial resolution, and high temporal resolution, e.g. 40 mm coverage per rotation of 0.35 sec. In light of this study, the LightSpeed VCT lends itself as a unique, clinically tested platform for routine cardiac imaging.
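    The stdBBV metric described above can be illustrated with a short Python function that places a virtual scan window at a given delay after the start of breath hold and takes the standard deviation of the R-R intervals inside it; the window placement and the toy R-peak times are assumptions, not the authors' exact procedure.

      import numpy as np

      def std_bbv(beat_times_s, delay_s, scan_duration_s):
          """beat_times_s: R-peak times (s) measured from the start of breath hold."""
          beats = np.asarray(beat_times_s)
          window = beats[(beats >= delay_s) & (beats <= delay_s + scan_duration_s)]
          rr = np.diff(window)                      # beat-to-beat (R-R) intervals
          return rr.std(ddof=1) if rr.size > 1 else np.nan

      beat_times = [0.0, 0.9, 1.8, 2.6, 3.5, 4.5, 5.6, 6.8, 8.1, 9.5, 11.0]
      print(std_bbv(beat_times, delay_s=1.0, scan_duration_s=5.0))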

  13. Building structural similarity database for metric learning

    NASA Astrophysics Data System (ADS)

    Jin, Guoxin; Pappas, Thrasyvoulos N.

    2015-03-01

    We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions, of "highest," "high," and "good" similarity. We design two subjective tests for data collection, the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with the "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests is generated during the MTC encoding process, and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.

  14. Relative Utility of Selected Software Requirement Metrics

    DTIC Science & Technology

    1991-12-01

    testing. They can also help in deciding if and how to use complexity reduction techniques. In summary, requirement metrics can be useful because they ... answer items in a test instrument. In order to differentiate between misinterpretation and comprehension, the measurement technique must be able to ... effectively test a requirement, it is verifiable. Ramamoorthy and others have proposed requirements complexity metrics that can be used to infer the

  15. Arbitrary Metrics in Psychology

    ERIC Educational Resources Information Center

    Blanton, Hart; Jaccard, James

    2006-01-01

    Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2…

  16. Development and validation of a whole-exome sequencing test for simultaneous detection of point mutations, indels and copy-number alterations for precision cancer care

    PubMed Central

    Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier

    2016-01-01

    We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State-Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing as well as complete concordance in variant quality metrics between formalin-fixed paraffin embedded and fresh-frozen tumours. Extensive sensitivity studies identified limits of detection threshold for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations, with reasonable costs and time, greatly expanding its utility for advanced cancer care. PMID:28781886

  17. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  18. GPS Device Testing Based on User Performance Metrics

    DOT National Transportation Integrated Search

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics; 2. Roberson and Associates Test Program; 3. Status of, and Revisions to, the Roberson and Associates Test Program; 4. Comparison of Roberson and DOT/Volpe Programs

  19. An investigation of fighter aircraft agility

    NASA Technical Reports Server (NTRS)

    Valasek, John; Downing, David R.

    1993-01-01

    This report attempts to unify in a single document the results of a series of studies on fighter aircraft agility funded by the NASA Ames Research Center, Dryden Flight Research Facility and conducted at the University of Kansas Flight Research Laboratory during the period January 1989 through December 1993. New metrics proposed by pilots and the research community to assess fighter aircraft agility are collected and analyzed. The report develops a framework for understanding the context into which the various proposed fighter agility metrics fit in terms of application and testing. Since new metrics continue to be proposed, this report does not claim to contain every proposed fighter agility metric. Flight test procedures, test constraints, and related criteria are developed. Instrumentation required to quantify agility via flight test is considered, as is the sensitivity of the candidate metrics to deviations from nominal pilot command inputs, which is studied in detail. Instead of supplying specific, detailed conclusions about the relevance or utility of one candidate metric versus another, the authors have attempted to provide sufficient data and analyses for readers to formulate their own conclusions. Readers are therefore ultimately responsible for judging exactly which metrics are 'best' for their particular needs. Additionally, it is not the intent of the authors to suggest combat tactics or other actual operational uses of the results and data in this report. This has been left up to the user community. Twenty of the candidate agility metrics were selected for evaluation with high fidelity, nonlinear, non-real-time flight simulation computer programs of the F-5A Freedom Fighter, F-16A Fighting Falcon, F-18A Hornet, and X-29A. The information and data presented on the 20 candidate metrics which were evaluated will assist interested readers in conducting their own extensive investigations. The report provides a definition and analysis of each metric; details of how to test and measure the metric, including any special data reduction requirements; typical values for the metric obtained using one or more aircraft types; and a sensitivity analysis if applicable. The report is organized as follows. The first chapter in the report presents a historical review of air combat trends which demonstrate the need for agility metrics in assessing the combat performance of fighter aircraft in a modern, all-aspect missile environment. The second chapter presents a framework for classifying each candidate metric according to time scale (transient, functional, instantaneous), further subdivided by axis (pitch, lateral, axial). The report is then broadly divided into two parts, with the transient agility metrics (pitch, lateral, axial) covered in chapters three, four, and five, and the functional agility metrics covered in chapter six. Conclusions, recommendations, and an extensive reference list and bibliography are also included. Five appendices contain a comprehensive list of the definitions of all the candidate metrics; a description of the aircraft models and flight simulation programs used for testing the metrics; several relations and concepts which are fundamental to the study of lateral agility; an in-depth analysis of the axial agility metrics; and a derivation of the relations for the instantaneous agility and their approximations.

  20. Algebraic Approach for Recovering Topology in Distributed Camera Networks

    DTIC Science & Technology

    2009-01-14

    not valid for camera networks. Spatial sampling of plenoptic function [2] from a network of cameras is rarely i.i.d. (independent and identically ... coverage can be used to track and compare paths in a wireless camera network without any metric calibration information. In particular, these results can ... edition, 2000. [14] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In

  1. SU-E-I-02: Characterizing Low-Contrast Resolution for Non-Circular CBCT Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A; Pan, X; Pelizzari, C

    Purpose: Non-circular scanning trajectories with optimization-based reconstruction algorithms can be used in conjunction with non-planar acquisition geometries for axial field-of-view (FOV) extension in cone-beam CT (CBCT). To evaluate the utility of these trajectories, quantitative image quality metrics should be evaluated. Low-contrast resolution (LCR) and CT number accuracy are significant challenges for CBCT. With unprecedented axial coverage provided by these trajectories, measuring such metrics throughout the axial range is critical. There are currently no phantoms designed to measure low-contrast resolution over such an extended volume. Methods: The CATPHAN (The Phantom Laboratory, Salem NY) is the current standard for image quality evaluation. While providing several useful modules for different evaluation metrics, each module was designed to be evaluated in a single slice and not for comparison across axial positions. To characterize the LCR and HU accuracy over an extended axial length, we have designed and built a phantom with evaluation modules at multiple and adjustable axial positions. Results: The modules were made from a cast polyurethane resin. Holes ranging from 1/8 to 5/8 inch were added at a constant radius from the module center into which rods of two different plastic materials were pressed to provide two nominal levels of contrast (1.0% and 0.5%). Larger holes were bored to accept various RMI plugs with known electron densities for HU accuracy evaluation. The modules can be inserted into an acrylic tube long enough to cover the entire axial FOV and their positions adjusted to desired evaluation points. Conclusion: This phantom allows us to measure the LCR and HU accuracy across the axial coverage within a single acquisition. These metrics can be used to characterize the impact different trajectories and reconstruction parameters have on clinically relevant image quality performance metrics. Funding was provided in part by Varian Medical Systems and NIH R01 Grants Nos. CA158446, CA182264, EB018102, and EB000225. The contents of this poster are solely the responsibility of the authors and do not necessarily represent the official view of any of the supporting organizations.

  2. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures.

    PubMed

    Epele, Luis Beltrán; Miserendino, María Laura

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services.

  3. Figure of merit for macrouniformity based on image quality ruler evaluation and machine learning framework

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.

    2013-01-01

    Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
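    The two predictor families mentioned above can be sketched with scikit-learn as follows; the feature vector (graininess, mottle, large-area variation, jitter, large-scale non-uniformity) and the synthetic ruler scores are placeholders, and the study's ISO-based feature extraction is not reproduced here.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(42)
      X = rng.normal(size=(35, 5))                                         # 35 prints, 5 macro-uniformity features
      y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.3, size=35)   # synthetic image quality ruler scores

      for name, model in [("linear regression", LinearRegression()),
                          ("SVR (RBF kernel)", SVR(kernel="rbf", C=10.0))]:
          scores = cross_val_score(model, X, y, cv=5, scoring="r2")        # five-fold cross-validation
          print(f"{name}: mean R^2 = {scores.mean():.2f}")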

  4. Metric Education in Mathematics Methods Classes.

    ERIC Educational Resources Information Center

    Trent, John H.

    A pre-test on knowledge of the metric system was administered to elementary mathematics methods classes at the University of Nevada at the beginning of the 1975 Spring Semester. A one-hour lesson was prepared and taught regarding metric length, weight, volume, and temperature. At the end of the semester the original test was given as the…

  5. Mean Abnormal Result Rate: Proof of Concept of a New Metric for Benchmarking Selectivity in Laboratory Test Ordering.

    PubMed

    Naugler, Christopher T; Guo, Maggie

    2016-04-01

    There is a need to develop and validate new metrics to assess the appropriateness of laboratory test requests. The mean abnormal result rate (MARR) is a proposed measure of ordering selectivity, the premise being that higher mean abnormal rates represent more selective test ordering. As a validation of this metric, we compared the abnormal rate of lab tests with the number of tests ordered on the same requisition. We hypothesized that requisitions with larger numbers of requested tests represent less selective test ordering and therefore would have a lower overall abnormal rate. We examined 3,864,083 tests ordered on 451,895 requisitions and found that the MARR decreased from about 25% if one test was ordered to about 7% if nine or more tests were ordered, consistent with less selectivity when more tests were ordered. We then examined the MARR for community-based testing for 1,340 family physicians and found both a wide variation in MARR and an inverse relationship between the total tests ordered per year per physician and the physician-specific MARR. The proposed metric represents a new utilization metric for benchmarking relative selectivity of test orders among physicians. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
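    A minimal Python sketch of the MARR bookkeeping described above is shown below: results are grouped by requisition, each requisition's abnormal-result rate is computed, and those rates are averaged. The record layout and the averaging over requisitions (rather than, say, over physicians or over requisition-size strata) are assumptions for illustration.

      from collections import defaultdict

      def marr(results):
          """results: iterable of (requisition_id, is_abnormal). Returns the mean abnormal result rate."""
          by_req = defaultdict(list)
          for req_id, is_abnormal in results:
              by_req[req_id].append(bool(is_abnormal))
          per_req_rates = [sum(flags) / len(flags) for flags in by_req.values()]
          return sum(per_req_rates) / len(per_req_rates)

      results = [("R1", True), ("R1", False),                 # 2-test requisition, abnormal rate 0.50
                 ("R2", False), ("R2", False), ("R2", True),  # 3-test requisition, abnormal rate 0.33
                 ("R3", True)]                                # 1-test requisition, abnormal rate 1.00
      print(f"MARR = {marr(results):.1%}")                    # 61.1%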

  6. Transfer of uncertainty of space-borne high resolution rainfall products at ungauged regions

    NASA Astrophysics Data System (ADS)

    Tang, Ling

    Hydrologically relevant characteristics of high resolution (~0.25 degree, 3 hourly) satellite rainfall uncertainty were derived as a function of season and location using a six year (2002-2007) archive of National Aeronautics and Space Administration (NASA)'s Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) precipitation data. The Next Generation Radar (NEXRAD) Stage IV rainfall data over the continental United States was used as ground validation (GV) data. A geostatistical mapping scheme was developed and tested for transfer (i.e., spatial interpolation) of uncertainty information from GV regions to the vast non-GV regions by leveraging the error characterization work carried out in the earlier step. The open question explored here was, "If 'error' is defined on the basis of independent ground validation (GV) data, how are error metrics estimated for a satellite rainfall data product without the need for extensive GV data?" After a quantitative analysis of the spatial and temporal structure of the satellite rainfall uncertainty, a proof-of-concept geostatistical mapping scheme (based on the kriging method) was evaluated. The idea was to understand how realistic the idea of 'transfer' is for the GPM era. It was found that it was indeed technically possible to transfer error metrics from a gauged to an ungauged location for certain error metrics and that a regionalized error metric scheme for GPM may be possible. The uncertainty transfer scheme based on a commonly used kriging method (ordinary kriging) was then assessed further at various timescales (climatologic, seasonal, monthly and weekly), and as a function of the density of GV coverage. The results indicated that when the transfer scheme estimated uncertainty metrics at timescales finer than seasonal (ranging from 3-6 hourly to weekly-monthly), the effectiveness of the uncertainty transfer worsened significantly. Next, a comprehensive assessment of different kriging methods for spatial transfer (interpolation) of error metrics was performed. Three kriging methods were compared: ordinary kriging (OK), indicator kriging (IK), and disjunctive kriging (DK). Additional comparison with the simple inverse distance weighting (IDW) method was also performed to quantify the added benefit (if any) of using geostatistical methods. The overall performance ranking of the kriging methods was found to be as follows: OK=DK > IDW > IK. Lastly, various metrics of satellite rainfall uncertainty were identified for two large continental landmasses that share many similar Köppen climate zones, United States and Australia. The dependence of uncertainty as a function of gauge density was then investigated. The investigation revealed that only the first and second order moments of error are amenable to a Köppen-type climate classification in different continental landmasses.
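
    A rough illustration of the spatial-transfer step using inverse distance weighting (IDW); the kriging variants compared in the study (OK, IK, DK) would normally come from a dedicated geostatistics package. Gauge locations and error values are invented.

```python
# Minimal sketch of transferring an error metric from gauged (GV) sites to
# ungauged points with inverse distance weighting; not the study's kriging code.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate an error metric from known sites to query locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

gv_sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
bias = np.array([0.8, 1.1, 0.9, 1.3])          # e.g., seasonal mean error (mm/day)
targets = np.array([[0.5, 0.5], [0.2, 0.8]])   # ungauged locations
print(idw(gv_sites, bias, targets))
```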

  7. Ability of LANDSAT-8 Oli Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as a reservoir for carbon has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since measurement of aboveground carbon stocks of forests is a destructive, costly and time-consuming activity, aerial and satellite remote sensing techniques have gained much attention in this field. Although aerial data have proved highly accurate for predicting aboveground carbon stocks, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying aboveground carbon stocks of coppice oak forests in Zagros Mountains, Iran. We used four different window sizes (3×3, 5×5, 7×7, and 9×9), and four different offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from derived metrics. Results showed that, in general, larger window sizes for deriving texture metrics resulted in models with better fit. In addition, the correlation of the spectral bands for deriving texture metrics in regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Amongst the different metrics, mean and entropy were entered in most of the regression models. Overall, different models based on derived texture metrics were able to explain about half of the variation in aboveground carbon stocks. These results demonstrated that Landsat 8 derived texture metrics can be applied for mapping aboveground carbon stocks of coppice oak forests in large areas.
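
    A minimal sketch of deriving GLCM texture metrics from a single band within a moving window, in the spirit of the study; the window size, offset-to-angle mapping, and synthetic band are illustrative assumptions.

```python
# Minimal sketch of grey-level co-occurrence (GLCM) texture metrics for one
# window of one band; candidate predictors for a carbon-stock regression.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

band = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a red band
win = 9                                                   # e.g., a 9x9 window
i, j = 30, 30                                             # window centre (plot location)
patch = band[i - win // 2:i + win // 2 + 1, j - win // 2:j + win // 2 + 1]

# The offset [1, -1] corresponds roughly to the 135-degree direction in skimage.
glcm = graycomatrix(patch, distances=[1], angles=[3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
features = {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "correlation", "homogeneity", "dissimilarity", "ASM")}
print(features)
```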

  8. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.

    Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.

  9. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M

    2017-01-01

    The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.
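
    A minimal sketch of the kind of dose-volume metrics compared during benchmark review (PTV D95% and the near-maximum dose to the hottest 0.03 cm³ of an OAR); the dose grid, masks, and voxel size are invented for illustration.

```python
# Minimal sketch: compute D95% for a PTV and the dose to the hottest 0.03 cm^3
# of an OAR from a 3-D dose grid. All inputs are synthetic stand-ins.
import numpy as np

voxel_cc = 0.2 ** 3            # hypothetical 2 mm isotropic voxels, in cm^3
dose = np.random.normal(45.0, 1.5, size=(40, 40, 40))   # Gy, stand-in plan
ptv_mask = np.zeros_like(dose, dtype=bool); ptv_mask[15:25, 15:25, 15:25] = True
oar_mask = np.zeros_like(dose, dtype=bool); oar_mask[5:10, 15:25, 15:25] = True

def dose_to_volume_fraction(d, frac):
    """Dose received by at least `frac` of the structure (e.g., D95%)."""
    return np.percentile(d, 100.0 * (1.0 - frac))

d95 = dose_to_volume_fraction(dose[ptv_mask], 0.95)

oar_dose = np.sort(dose[oar_mask])[::-1]
n_hot = max(1, int(round(0.03 / voxel_cc)))      # voxels making up 0.03 cm^3
d_0p03cc = oar_dose[:n_hot].min()                # dose covering the hottest 0.03 cm^3

print(f"PTV D95% = {d95:.1f} Gy, OAR D(0.03 cc) = {d_0p03cc:.1f} Gy")
```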

  10. Novel Methods for Optically Measuring Whitecaps Under Natural Wave Breaking Conditions in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Randolph, K. L.; Dierssen, H. M.; Cifuentes-Lorenzen, A.; Balch, W. M.; Monahan, E. C.; Zappa, C. J.; Drapeau, D.; Bowler, B.

    2016-02-01

    Breaking waves on the ocean surface mark areas of significant importance to air-sea flux estimates of gas, aerosols, and heat. Traditional methods of measuring whitecap coverage using digital photography can miss features that are small in size or do not show high enough contrast to the background. The geometry of the images collected captures the near surface, bright manifestations of the whitecap feature and misses a portion of the bubble plume that is responsible for the production of sea salt aerosols and the transfer of lower solubility gases. Here, a novel method for accurately measuring both the fractional coverage of whitecaps and the intensity and decay rate of whitecap events using above water radiometry is presented. The methodology was developed using data collected during the austral summer in the Atlantic sector of the Southern Ocean under a large range of wind (speeds of 1 to 15 m s-1) and wave (significant wave heights 2 to 8 m) conditions as part of the Southern Ocean Gas Exchange experiment. Whitecap metrics were retrieved by employing a magnitude threshold based on the interquartile range of the radiance or reflectance signal for a single channel (411 nm) after a baseline removal, determined using a moving minimum/maximum filter. Breaking intensity and decay rate metrics were produced from the integration of, and the exponential fit to, radiance or reflectance over the lifetime of the whitecap. When compared to fractional whitecap coverage measurements obtained from high resolution digital images, radiometric estimates were consistently higher because they capture more of the decaying bubble plume area that is difficult to detect with photography. Radiometrically-retrieved whitecap measurements are presented in the context of concurrently measured meteorological (e.g., wind speed) and oceanographic (e.g., wave) data. The optimal fit of the radiometrically estimated whitecap coverage to the instantaneous wind speed, determined using ordinary least squares, showed a cubic dependence. Increasing the magnitude threshold for whitecap detection from 2 to 3 IQR produced a wind speed-whitecap relationship most comparable to previously published and widely accepted wind speed-whitecap parameterizations.
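
    A minimal sketch of the thresholding idea described above: remove a slowly varying baseline with a moving minimum/maximum filter, flag whitecaps where the residual exceeds a multiple of the interquartile range, and fit an exponential decay to one event. The radiance series is synthetic and the filter lengths are assumptions.

```python
# Minimal sketch: baseline removal, IQR-based whitecap flagging, and an
# exponential decay-rate fit for a single synthetic whitecap event.
import numpy as np
from scipy.ndimage import minimum_filter1d, maximum_filter1d
from scipy.optimize import curve_fit

t = np.arange(0.0, 60.0, 0.1)                      # seconds, 10 Hz sampling
radiance = 0.020 + 0.001 * np.sin(0.1 * t)         # slowly varying background
event = slice(200, 300)
radiance[event] += 0.05 * np.exp(-(t[event] - t[200]) / 3.0)   # decaying whitecap

# Baseline from a moving minimum followed by a moving maximum (an "opening").
baseline = maximum_filter1d(minimum_filter1d(radiance, size=101), size=101)
resid = radiance - baseline

# Threshold at k * IQR above the median of the residual (k of order 2-3).
q1, q3 = np.percentile(resid, [25, 75])
flag = resid > np.median(resid) + 3.0 * (q3 - q1)
whitecap_fraction = flag.mean()

# Decay-rate metric: exponential fit over the flagged event.
decay = lambda x, a, tau: a * np.exp(-x / tau)
popt, _ = curve_fit(decay, t[event] - t[200], resid[event], p0=(0.05, 3.0))
print(f"whitecap fraction = {whitecap_fraction:.3f}, decay time = {popt[1]:.1f} s")
```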

  11. Wind turbine wake characterization from temporally disjunct 3-D measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doubrawa, Paula; Barthelmie, Rebecca J.; Wang, Hui

    Scanning LiDARs can be used to obtain three-dimensional wind measurements in and beyond the atmospheric surface layer. In this work, metrics characterizing wind turbine wakes are derived from LiDAR observations and from large-eddy simulation (LES) data, which are used to recreate the LiDAR scanning geometry. The metrics are calculated for two-dimensional planes in the vertical and cross-stream directions at discrete distances downstream of a turbine under single-wake conditions. The simulation data are used to estimate the uncertainty when mean wake characteristics are quantified from scanning LiDAR measurements, which are temporally disjunct due to the time that the instrument takes to probe a large volume of air. Based on LES output, we determine that wind speeds sampled with the synthetic LiDAR are within 10% of the actual mean values and that the disjunct nature of the scan does not compromise the spatial variation of wind speeds within the planes. We propose scanning geometry density and coverage indices, which quantify the spatial distribution of the sampled points in the area of interest and are valuable to design LiDAR measurement campaigns for wake characterization. Lastly, we find that scanning geometry coverage is important for estimates of the wake center, orientation and length scales, while density is more important when seeking to characterize the velocity deficit distribution.

  12. Wind turbine wake characterization from temporally disjunct 3-D measurements

    DOE PAGES

    Doubrawa, Paula; Barthelmie, Rebecca J.; Wang, Hui; ...

    2016-11-10

    Scanning LiDARs can be used to obtain three-dimensional wind measurements in and beyond the atmospheric surface layer. In this work, metrics characterizing wind turbine wakes are derived from LiDAR observations and from large-eddy simulation (LES) data, which are used to recreate the LiDAR scanning geometry. The metrics are calculated for two-dimensional planes in the vertical and cross-stream directions at discrete distances downstream of a turbine under single-wake conditions. The simulation data are used to estimate the uncertainty when mean wake characteristics are quantified from scanning LiDAR measurements, which are temporally disjunct due to the time that the instrument takes to probe a large volume of air. Based on LES output, we determine that wind speeds sampled with the synthetic LiDAR are within 10% of the actual mean values and that the disjunct nature of the scan does not compromise the spatial variation of wind speeds within the planes. We propose scanning geometry density and coverage indices, which quantify the spatial distribution of the sampled points in the area of interest and are valuable to design LiDAR measurement campaigns for wake characterization. Lastly, we find that scanning geometry coverage is important for estimates of the wake center, orientation and length scales, while density is more important when seeking to characterize the velocity deficit distribution.
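
    A rough illustration, not the paper's definitions, of two simple indices for sampled points in a 2-D analysis plane: coverage as the fraction of grid cells containing at least one sample, and density as samples per unit area.

```python
# Rough illustration of coverage/density indices for LiDAR sample points in a
# cross-stream analysis plane; the cell size and extent are arbitrary choices.
import numpy as np

def scan_indices(points, extent, cell=10.0):
    """points: (N, 2) y/z positions [m]; extent: (ymin, ymax, zmin, zmax)."""
    ymin, ymax, zmin, zmax = extent
    ny = int(np.ceil((ymax - ymin) / cell))
    nz = int(np.ceil((zmax - zmin) / cell))
    iy = np.clip(((points[:, 0] - ymin) / cell).astype(int), 0, ny - 1)
    iz = np.clip(((points[:, 1] - zmin) / cell).astype(int), 0, nz - 1)
    occupied = np.zeros((ny, nz), dtype=bool)
    occupied[iy, iz] = True
    coverage = occupied.mean()                                # fraction of cells sampled
    density = len(points) / ((ymax - ymin) * (zmax - zmin))   # samples per m^2
    return coverage, density

pts = np.random.uniform([-200, 20], [200, 180], size=(500, 2))
print(scan_indices(pts, (-200, 200, 20, 180)))
```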

  13. Weights and Measures: Out of Sync.

    ERIC Educational Resources Information Center

    Melone, Rudy J.

    1985-01-01

    Looks at the economic problems resulting from U.S. resistance to the metric system. Considers factors underpinning this resistance and reasons for industry and education to accelerate the pace of metric learning. Describes Gavilan College's pilot testing of metrics instructional materials. Lists organizations endorsing metrication. (DMM)

  14. Structural texture similarity metrics for image analysis and retrieval.

    PubMed

    Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L

    2013-07-01

    We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.

  15. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicinal applications, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics used for quality evaluation assess the perceptual quality of the video. For a medical video, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality–oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. Cardiac ultrasound video quality index, the proposed metric, is a full reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715

  16. Metric half-span model support system

    NASA Technical Reports Server (NTRS)

    Jackson, C. M., Jr.; Dollyhigh, S. M.; Shaw, D. S. (Inventor)

    1982-01-01

    A model support system used to support a model in a wind tunnel test section is described. The model comprises a metric, or measured, half-span supported by a nonmetric, or nonmeasured half-span which is connected to a sting support. Moments and forces acting on the metric half-span are measured without interference from the support system during a wind tunnel test.

  17. Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynI

    NASA Technical Reports Server (NTRS)

    Knight, Russell L.; McLaren, David A.; Hu, Steven

    2013-01-01

    Mission design and analysis presents challenges in that almost all variables are in constant flux, yet the goal is to achieve an acceptable level of performance against a concept of operations, which might also be in flux. To increase responsiveness, automated planning tools are used that allow for the continual modification of spacecraft, ground system, staffing, and concept of operations, while returning metrics that are important to mission evaluation, such as area covered, peak memory usage, and peak data throughput. This approach was applied to the DESDynI mission design using the CLASP planning system, but since this adaptation, many techniques have changed under the hood for CLASP, and the DESDynI mission concept has undergone drastic changes. The software produces mission evaluation products, such as memory high-water marks and coverage percentages, given a mission design in the form of coverage targets, concept of operations, spacecraft parameters, and orbital parameters. It tries to overcome the lack of fidelity and timeliness of mission requirements coverage analysis during mission design. Previous techniques primarily use Excel in ad hoc fashion to approximate key factors in mission performance, often falling victim to overgeneralizations necessary in such an adaptation. The new program allows designers to faithfully represent their mission designs quickly, and get more accurate results just as quickly.

  18. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earth's Surface through the Terrestrial Service Volume and the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV), when a multi-GNSS solution space approach is utilized. The ICG Working Group: Enhancement of GNSS Performance, New Services and Capabilities has started a three phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that increasing complexity and fidelity analysis initiative was recently expanded to compare nadir-facing and zenith-facing user hemispherical antenna coverage with omnidirectional antenna coverage at different distances of 8,000 km altitude and 36,000 km altitude. This report summarizes the performance using these antenna coverage techniques at distances ranging from 100 km altitude to 36,000 km to be all encompassing, as well as the volumetrically-derived system availability metrics.

  19. Quantification of interplay and gradient effects for lung stereotactic ablative radiotherapy (SABR) treatments

    PubMed Central

    2016-01-01

    This study quantified the interplay and gradient effects on GTV dose coverage for 3D CRT, dMLC IMRT, and VMAT SABR treatments for target amplitudes of 5–30 mm using 3DVH v3.1 software incorporating 4D Respiratory MotionSim (4D RMS) module. For clinically relevant motion periods (5 s), the interplay effect was small, with deviations in the minimum dose covering the target volume (D99%) of less than ±2.5% for target amplitudes up to 30 mm. Increasing the period to 60 s resulted in interplay effects of up to ±15.0% on target D99% dose coverage. The gradient effect introduced by target motion resulted in deviations of up to ±3.5% in D99% target dose coverage. VMAT treatments showed the largest deviation in dose metrics, which was attributed to the long delivery times in comparison to dMLC IMRT. Retrospective patient analysis indicated minimal interplay and gradient effects for patients treated with dMLC IMRT at the NCCI. PACS numbers: 87.55.km, 87.56.Fc PMID:26894347

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr

    Mobile Ad hoc NETworks (MANETs) are distributed self-organizing networks that can change locations and configure themselves on the fly. This paper focuses on an algorithmic approach for the deployment of a MANET within an enclosed area, such as a building in a disaster scenario, which can provide a robust communication infrastructure for search and rescue operations. While a virtual spring mesh (VSM) algorithm provides scalable, self-organizing, and fault-tolerant capabilities required by a MANET, the VSM lacks the MANET's capabilities of deployment mechanisms for blanket coverage of an area and does not provide an obstacle avoidance mechanism. This paper presents a new technique, an extended VSM (EVSM) algorithm that provides the following novelties: (1) new control laws for exploration and expansion to provide blanket coverage, (2) virtual adaptive springs enabling the mesh to expand as necessary, (3) adapts to communications disturbances by varying the density and movement of mobile nodes, and (4) new metrics to assess the performance of the EVSM algorithm. Simulation results show that EVSM provides up to 16% more coverage and is 3.5 times faster than VSM in environments with eight obstacles.
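
    A rough illustration of the virtual-spring idea (not the EVSM control laws themselves): each node feels Hooke-law forces from nearby nodes toward a desired spacing and moves a small step along the net force.

```python
# Rough illustration of one relaxation step of a 2-D virtual spring mesh;
# the rest length, gain, and neighbour radius are arbitrary example values.
import numpy as np

def spring_step(positions, rest_length=20.0, k=0.1, dt=1.0, neighbor_radius=40.0):
    """Move each node along the net virtual-spring force from its neighbours."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        force = np.zeros(2)
        for j, q in enumerate(positions):
            if i == j:
                continue
            delta = q - p
            dist = np.linalg.norm(delta)
            if 0.0 < dist < neighbor_radius:
                # Attractive if stretched beyond rest length, repulsive if compressed.
                force += k * (dist - rest_length) * delta / dist
        new_positions[i] = p + dt * force
    return new_positions

nodes = np.random.uniform(0, 30, size=(10, 2))   # initial cluster of mobile nodes
for _ in range(50):
    nodes = spring_step(nodes)                    # nodes spread toward blanket coverage
print(nodes.round(1))
```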

  1. Earth Observation for monitoring phenology for european land use and ecosystems over 1998-2011

    NASA Astrophysics Data System (ADS)

    Ceccherini, Guido; Gobron, Nadine

    2013-04-01

    Long-term measurements of plant phenology have been used to track vegetation responses to climate change but are often limited to particular species and locations and may not represent synoptic patterns. Given the limitations of working directly with in-situ data, many researchers have instead used available satellite remote sensing. Remote sensing extends the possible spatial coverage and temporal range of phenological assessments of environmental change due to the greater availability of observations. Variations and trends of vegetation dynamics are important because they alter the surface carbon, water and energy balance. For example, the net ecosystem CO2 exchange of vegetation is strongly linked to the length of the growing season: extensions and reductions in the length of the growing season modify carbon uptake and the amount of CO2 in the atmosphere. Advances and delays in the start of the growing season also affect the surface energy balance and consequently transpiration. The Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) is a key climate variable identified by the Global Terrestrial Observing System (GTOS) that can be monitored from space. This dimensionless variable, varying between 0 and 1, is directly linked to the photosynthetic activity of vegetation and can therefore be used to monitor changes in phenology. In this study, we identify the spatio/temporal patterns of vegetation dynamics using a long-term remotely sensed FAPAR dataset over Europe. Our aim is to provide a quantitative analysis of vegetation dynamics relevant to climate studies in Europe. As part of this analysis, six vegetation phenological metrics have been defined and are computed routinely over Europe. Over time, such metrics can track simple, yet critical, impacts of climate change on ecosystems. Validation has been performed through a direct comparison against ground-based data over ecological sites. Subsequently, using the spatio/temporal variability of this suite of metrics, we classify areas with similar vegetation dynamics. This permits assessment of variations and trends of vegetation dynamics over Europe. Statistical tests to assess the significance of temporal changes are used to evaluate trends in the metrics derived from the recorded time series of the FAPAR.
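
    A minimal sketch of extracting two common phenological metrics (start and length of the growing season) from a FAPAR time series with a simple threshold-crossing rule; the threshold and synthetic series are illustrative, not the metrics defined in the study.

```python
# Minimal sketch: start, end, and length of the growing season from a
# threshold crossing of a synthetic FAPAR seasonal cycle.
import numpy as np

doy = np.arange(1, 366, 10)                               # ~10-day composites
fapar = 0.15 + 0.45 * np.exp(-((doy - 190) / 60.0) ** 2)  # synthetic seasonal cycle

threshold = fapar.min() + 0.2 * (fapar.max() - fapar.min())
above = fapar >= threshold
start_of_season = doy[above][0]            # first composite above threshold
end_of_season = doy[above][-1]             # last composite above threshold
length_of_season = end_of_season - start_of_season
print(start_of_season, end_of_season, length_of_season)
```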

  2. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to .7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods for ensemble averaging models and create better climate predictions.
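
    A minimal sketch of unequal ensemble weighting: models with smaller error on a chosen process-based metric receive larger weights in the multi-model mean. The per-model errors and projections are invented numbers.

```python
# Minimal sketch of performance-weighted ensemble averaging versus the
# equal-weighted mean; skill metric and projections are toy values.
import numpy as np

metric_error = np.array([0.8, 1.5, 0.4, 2.0])     # per-model error on a process metric
projection   = np.array([2.1, 3.0, 2.4, 3.5])     # e.g., projected warming (K)

weights = 1.0 / metric_error                       # better-performing models weigh more
weights /= weights.sum()

print("unweighted mean:", projection.mean())
print("weighted mean:  ", np.sum(weights * projection))
```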

  3. Why the Oregon CCO experiment could founder.

    PubMed

    Stecker, Eric C

    2014-08-01

    The most recent Oregon Medicaid experiment is the boldest attempt yet to limit health care spending. Oregon's approach using a Medicaid waiver from the Centers for Medicare and Medicaid Services utilizes global payments with two-sided risk at two levels - coordinated care organizations (CCOs) and the state. Equally important, the Oregon experiment mandates coverage of medical, behavioral, and dental health care using flexible coverage, with the locus of delivery innovation focused at the individual CCO level and with financial consequences for quality-of-care metrics. But insightful design alone is insufficient to overcome the vexing challenge of cost containment on a two- to five-year time horizon; well-tuned execution is also necessary. There are a number of reasons that the Oregon CCO model faces an uphill struggle in implementing the envisioned design. Copyright © 2014 by Duke University Press.

  4. The Adult Conversion to Metrics: Is Education Enough?

    ERIC Educational Resources Information Center

    Kundel, Susan E.

    1979-01-01

    The American College Testing Program sought to determine whether metric education for adult consumers would result in more positive attitudes to metric conversion. Examining preopinion, pretest, posttest, post-opinion, and background data, the researchers found that simply teaching adults how to use the metric system does not significantly affect…

  5. Spatial-temporal forecasting the sunspot diagram

    NASA Astrophysics Data System (ADS)

    Covas, Eurico

    2017-09-01

    Aims: We attempt to forecast the Sun's sunspot butterfly diagram in both space (i.e., in latitude) and time, instead of the usual one-dimensional time series forecasts prevalent in the scientific literature. Methods: We use a prediction method based on the non-linear embedding of data series in high dimensions. We use this method to forecast both in latitude (space) and in time, using a full spatial-temporal series of the sunspot diagram from 1874 to 2015. Results: The analysis of the results shows that it is indeed possible to reconstruct the overall shape and amplitude of the spatial-temporal pattern of sunspots, but that the method in its current form does not have real predictive power. We also apply a metric called structural similarity to compare the forecasted and the observed butterfly cycles, showing that this metric can be a useful addition to the usual root mean square error metric when analysing the efficiency of different prediction methods. Conclusions: We conclude that it is in principle possible to reconstruct the full sunspot butterfly diagram for at least one cycle using this approach and that this method and others should be explored since just looking at metrics such as sunspot count number or sunspot total area coverage is too reductive given the spatial-temporal dynamical complexity of the sunspot butterfly diagram. However, more data and/or an improved approach is probably necessary to have true predictive power.
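
    A minimal sketch of scoring a forecast latitude-time diagram against the observed one with both RMSE and a structural similarity index (here scikit-image's SSIM); the two arrays are synthetic stand-ins for butterfly data.

```python
# Minimal sketch: compare a forecast butterfly diagram with the observed one
# using RMSE and SSIM; both "diagrams" are synthetic arrays.
import numpy as np
from skimage.metrics import structural_similarity

lat = np.linspace(-40, 40, 64)[:, None]
time = np.linspace(0, 11, 256)[None, :]                      # years of one cycle
observed = np.exp(-((np.abs(lat) - 25 + 2 * time) / 8.0) ** 2)
forecast = observed + 0.05 * np.random.randn(*observed.shape)

rmse = np.sqrt(np.mean((forecast - observed) ** 2))
ssim = structural_similarity(observed, forecast,
                             data_range=observed.max() - observed.min())
print(f"RMSE = {rmse:.3f}, SSIM = {ssim:.3f}")
```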

  6. Experimental constraints on metric and non-metric theories of gravity

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.

  7. Environmental Quality and Aquatic Invertebrate Metrics Relationships at Patagonian Wetlands Subjected to Livestock Grazing Pressures

    PubMed Central

    2015-01-01

    Livestock grazing can compromise the biotic integrity and health of wetlands, especially in remote areas like Patagonia, which provide habitat for several endemic terrestrial and aquatic species. Understanding the effects of these land use practices on invertebrate communities can help prevent the deterioration of wetlands and provide insights for restoration. In this contribution, we assessed the responses of 36 metrics based on the structural and functional attributes of invertebrates (130 taxa) at 30 Patagonian wetlands that were subject to different levels of livestock grazing intensity. These levels were categorized as low, medium and high based on eight features (livestock stock densities plus seven wetland measurements). Significant changes in environmental features were detected across the gradient of wetlands, mainly related to pH, conductivity, and nutrient values. Regardless of rainfall gradient, symptoms of eutrophication were remarkable at some highly disturbed sites. Seven invertebrate metrics consistently and accurately responded to livestock grazing on wetlands. All of them were negatively related to increased levels of grazing disturbance, with the number of insect families appearing as the most robust measure. A multivariate approach (RDA) revealed that invertebrate metrics were significantly affected by environmental variables related to water quality: in particular, pH, conductivity, dissolved oxygen, nutrient concentrations, and the richness and coverage of aquatic plants. Our results suggest that the seven aforementioned metrics could be used to assess ecological quality in the arid and semi-arid wetlands of Patagonia, helping to ensure the creation of protected areas and their associated ecological services. PMID:26448652

  8. Using virtual reality simulation to assess competence in video-assisted thoracoscopic surgery (VATS) lobectomy.

    PubMed

    Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars

    2017-06-01

    The societies of thoracic surgery are working to incorporate simulation and competency-based assessment into specialty training. One challenge is the development of a simulation-based test, which can be used as an assessment tool. The study objective was to establish validity evidence for a virtual reality simulator test of a video-assisted thoracoscopic surgery (VATS) lobectomy of a right upper lobe. Participants with varying experience in VATS lobectomy were included. They were familiarized with a virtual reality simulator (LapSim®) and introduced to the steps of the procedure for a VATS right upper lobe lobectomy. The participants performed two VATS lobectomies on the simulator with a 5-min break between attempts. Nineteen pre-defined simulator metrics were recorded. Fifty-three participants from nine different countries were included. High internal consistency was found for the metrics with Cronbach's alpha coefficient for standardized items of 0.91. Significant test-retest reliability was found for 15 of the metrics (p-values <0.05). Significant correlations between the metrics and the participants' VATS lobectomy experience were identified for seven metrics (p-values <0.001), and 10 metrics showed significant differences between novices (0 VATS lobectomies performed) and experienced surgeons (>50 VATS lobectomies performed). A pass/fail level defined as approximately one standard deviation from the mean metric scores for experienced surgeons passed none of the novices (0% false positives) and failed four of the experienced surgeons (29% false negatives). This study is the first to establish validity evidence for a VATS right upper lobe lobectomy virtual reality simulator test. Several simulator metrics demonstrated significant differences between novices and experienced surgeons and pass/fail criteria for the test were set with acceptable consequences. This test can be used as a first step in assessing thoracic surgery trainees' VATS lobectomy competency.
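
    A minimal sketch of the pass/fail standard-setting idea: place the cut-off roughly one standard deviation below the mean composite score of experienced surgeons, then count false positives and negatives. Scores are invented and assume higher is better.

```python
# Minimal sketch: pass/fail cut-off about one SD from the experienced-group
# mean, with resulting false-positive and false-negative rates.
import numpy as np

novice_scores      = np.array([31, 28, 35, 30, 33], dtype=float)
experienced_scores = np.array([52, 47, 58, 44, 61, 50], dtype=float)

cutoff = experienced_scores.mean() - experienced_scores.std(ddof=1)
false_positive_rate = np.mean(novice_scores >= cutoff)        # novices who would pass
false_negative_rate = np.mean(experienced_scores < cutoff)    # experts who would fail
print(f"cut-off = {cutoff:.1f}, FP = {false_positive_rate:.0%}, FN = {false_negative_rate:.0%}")
```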

  9. Defining epitope coverage requirements for T cell-based HIV vaccines: Theoretical considerations and practical applications

    PubMed Central

    2011-01-01

    Background: HIV vaccine development must address the genetic diversity and plasticity of the virus that permits the presentation of diverse genetic forms to the immune system and subsequent escape from immune pressure. Assessment of potential HIV strain coverage by candidate T cell-based vaccines (whether natural sequence or computationally optimized products) is now a critical component in interpreting candidate vaccine suitability. Methods: We have utilized an N-mer identity algorithm to represent T cell epitopes and explore potential coverage of the global HIV pandemic using natural sequences derived from candidate HIV vaccines. Breadth (the number of T cell epitopes generated) and depth (the variant coverage within a T cell epitope) analyses have been incorporated into the model to explore vaccine coverage requirements in terms of the number of discrete T cell epitopes generated. Results: We show that when multiple epitope generation by a vaccine product is considered a far more nuanced appraisal of the potential HIV strain coverage of the vaccine product emerges. By considering epitope breadth and depth several important observations were made: (1) epitope breadth requirements to reach particular levels of vaccine coverage, even for natural sequence-based vaccine products is not necessarily an intractable problem for the immune system; (2) increasing the valency (number of T cell epitope variants present) of vaccine products dramatically decreases the epitope requirements to reach particular coverage levels for any epidemic; (3) considering multiple-hit models (more than one exact epitope match with an incoming HIV strain) places a significantly higher requirement upon epitope breadth in order to reach a given level of coverage, to the point where low valency natural sequence based products would not practically be able to generate sufficient epitopes. Conclusions: When HIV vaccine sequences are compared against datasets of potential incoming viruses important metrics such as the minimum epitope count required to reach a desired level of coverage can be easily calculated. We propose that such analyses can be applied early in the planning stages and during the execution phase of a vaccine trial to explore theoretical and empirical suitability of a vaccine product to a particular epidemic setting. PMID:22152192
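
    A minimal sketch of the N-mer matching idea underlying the coverage analysis: count exact 9-mer matches between a vaccine insert and each circulating strain, then report the fraction of strains reaching a chosen number of hits. Sequences and the hit threshold are toy values.

```python
# Minimal sketch: exact 9-mer matching between a vaccine insert and candidate
# incoming strains; raising min_hits corresponds to a multiple-hit model.
def kmers(seq, k=9):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def epitope_matches(vaccine, strain, k=9):
    """Number of distinct vaccine k-mers found exactly in the strain."""
    return len(kmers(vaccine, k) & kmers(strain, k))

vaccine = "MGARASVLSGGELDRWEKIRLRPGGKKKYKL"      # toy sequence
strains = ["MGARASVLSGGELDKWEKIRLRPGGKKQYKL",   # toy circulating variants
           "MGARASILSGGELDRWEKIRLRPGGKKKYKL"]

min_hits = 1   # single-hit model
covered = sum(epitope_matches(vaccine, s) >= min_hits for s in strains)
print(f"coverage = {covered}/{len(strains)} strains")
```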

  10. An Examination of Selected Software Testing Tools: 1992

    DTIC Science & Technology

    1992-12-01

    …the test management and problem reporting tools were examined using the sample test database provided by each supplier. …track the impact of new methods, organizational structures, and technologies. Metrics Manager is supported by an industry database that allows…

  11. A Framework for Orbital Performance Evaluation in Distributed Space Missions for Earth Observation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Miller, David W.; de Weck, Olivier

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth science missions owing to their unique ability to increase observation sampling in spatial, spectral and temporal dimensions simultaneously. DSM architectures have a large number of design variables and since they are expected to increase mission flexibility, scalability, evolvability and robustness, their design is a complex problem with many variables and objectives affecting performance. There are very few open-access tools available to explore the tradespace of variables which allow performance assessment and are easy to plug into science goals, and therefore help select the optimal design. This paper presents a software tool developed on the MATLAB engine interfacing with STK, for DSM orbit design and selection. It is capable of generating thousands of homogeneous constellation or formation flight architectures based on pre-defined design variable ranges and sizing those architectures in terms of predefined performance metrics. The metrics can be input into observing system simulation experiments, as available from the science teams, allowing dynamic coupling of science and engineering designs. Design variables include but are not restricted to constellation type, formation flight type, FOV of instrument, altitude and inclination of chief orbits, differential orbital elements, leader satellites, latitudes or regions of interest, planes and satellite numbers. Intermediate performance metrics include angular coverage, number of accesses, revisit coverage, access deterioration over time at every point of the Earth's grid. The orbit design process can be streamlined and variables more bounded along the way, owing to the availability of models ranging from low fidelity, low complexity models such as corrected HCW equations up to high precision STK models with J2 and drag. The tool can thus help any scientist or program manager select pre-Phase A, Pareto optimal DSM designs for a variety of science goals without having to delve into the details of the engineering design process.

  12. National-scale aboveground biomass geostatistical mapping with FIA inventory and GLAS data: Preparation for sparsely sampled lidar assisted forest inventory

    NASA Astrophysics Data System (ADS)

    Babcock, C. R.; Finley, A. O.; Andersen, H. E.; Moskal, L. M.; Morton, D. C.; Cook, B.; Nelson, R.

    2017-12-01

    Upcoming satellite lidar missions, such as GEDI and IceSat-2, are designed to collect laser altimetry data from space for narrow bands along orbital tracts. As a result, lidar metric sets derived from these sources will not provide complete spatial coverage. This lack of complete coverage, or sparsity, means traditional regression approaches that consider lidar metrics as explanatory variables (without error) cannot be used to generate wall-to-wall maps of forest inventory variables. We implement a coregionalization framework to jointly model sparsely sampled lidar information and point-referenced forest variable measurements to create wall-to-wall maps with full probabilistic uncertainty quantification of all inputs. We inform the model with USFS Forest Inventory and Analysis (FIA) in-situ forest measurements and GLAS lidar data to spatially predict aboveground forest biomass (AGB) across the contiguous US. We cast our model within a Bayesian hierarchical framework to better model complex space-varying correlation structures among the lidar metrics and FIA data, which yields improved prediction and uncertainty assessment. To circumvent computational difficulties that arise when fitting complex geostatistical models to massive datasets, we use a Nearest Neighbor Gaussian process (NNGP) prior. Results indicate that a coregionalization modeling approach to leveraging sampled lidar data to improve AGB estimation is effective. Further, fitting the coregionalization model within a Bayesian mode of inference allows for AGB quantification across scales ranging from individual pixel estimates of AGB density to total AGB for the continental US with uncertainty. The coregionalization framework examined here is directly applicable to future spaceborne lidar acquisitions from GEDI and IceSat-2. Pairing these lidar sources with the extensive FIA forest monitoring plot network using a joint prediction framework, such as the coregionalization model explored here, offers the potential to improve forest AGB accounting certainty and provide maps for post-model fitting analysis of the spatial distribution of AGB.

  13. Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography

    USGS Publications Warehouse

    Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.

    1972-01-01

    Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.

  14. Increasing the structural coverage of tuberculosis drug targets.

    PubMed

    Baugh, Loren; Phan, Isabelle; Begley, Darren W; Clifton, Matthew C; Armour, Brianna; Dranow, David M; Taylor, Brandy M; Muruthi, Marvin M; Abendroth, Jan; Fairman, James W; Fox, David; Dieterich, Shellie H; Staker, Bart L; Gardberg, Anna S; Choi, Ryan; Hewitt, Stephen N; Napuli, Alberto J; Myers, Janette; Barrett, Lynn K; Zhang, Yang; Ferrell, Micah; Mundt, Elizabeth; Thompkins, Katie; Tran, Ngoc; Lyons-Abbott, Sally; Abramov, Ariel; Sekar, Aarthi; Serbzhinskiy, Dmitri; Lorimer, Don; Buchko, Garry W; Stacy, Robin; Stewart, Lance J; Edwards, Thomas E; Van Voorhis, Wesley C; Myler, Peter J

    2015-03-01

    High-resolution three-dimensional structures of essential Mycobacterium tuberculosis (Mtb) proteins provide templates for TB drug design, but are available for only a small fraction of the Mtb proteome. Here we evaluate an intra-genus "homolog-rescue" strategy to increase the structural information available for TB drug discovery by using mycobacterial homologs with conserved active sites. Of 179 potential TB drug targets selected for x-ray structure determination, only 16 yielded a crystal structure. By adding 1675 homologs from nine other mycobacterial species to the pipeline, structures representing an additional 52 otherwise intractable targets were solved. To determine whether these homolog structures would be useful surrogates in TB drug design, we compared the active sites of 106 pairs of Mtb and non-TB mycobacterial (NTM) enzyme homologs with experimentally determined structures, using three metrics of active site similarity, including superposition of continuous pharmacophoric property distributions. Pair-wise structural comparisons revealed that 19/22 pairs with >55% overall sequence identity had active site Cα RMSD <1 Å, >85% side chain identity, and ≥80% PSAPF (similarity based on pharmacophoric properties) indicating highly conserved active site shape and chemistry. Applying these results to the 52 NTM structures described above, 41 shared >55% sequence identity with the Mtb target, thus increasing the effective structural coverage of the 179 Mtb targets over three-fold (from 9% to 32%). The utility of these structures in TB drug design can be tested by designing inhibitors using the homolog structure and assaying the cognate Mtb enzyme; a promising test case, Mtb cytidylate kinase, is described. The homolog-rescue strategy evaluated here for TB is also generalizable to drug targets for other diseases. Copyright © 2014 Elsevier Ltd. All rights reserved.
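
    A minimal sketch of one of the active-site similarity metrics mentioned (Cα RMSD after optimal superposition, via the Kabsch algorithm); the coordinate sets are random stand-ins for matched Mtb/NTM active-site atoms.

```python
# Minimal sketch: RMSD between matched active-site C-alpha atoms after optimal
# superposition (Kabsch algorithm). Coordinates are synthetic stand-ins.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of point sets P, Q (N x 3) after optimal rotation/translation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])          # guard against improper rotations
    R = V @ D @ Wt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

mtb_ca = np.random.rand(20, 3) * 10.0                  # Mtb active-site C-alphas
ntm_ca = mtb_ca + np.random.normal(0, 0.3, (20, 3))    # homolog with small deviations
print(f"active-site C-alpha RMSD = {kabsch_rmsd(ntm_ca, mtb_ca):.2f} Å")
```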

  15. Increasing the structural coverage of tuberculosis drug targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baugh, Loren; Phan, Isabelle; Begley, Darren W.

    High-resolution three-dimensional structures of essential Mycobacterium tuberculosis (Mtb) proteins provide templates for TB drug design, but are available for only a small fraction of the Mtb proteome. Here we evaluate an intra-genus “homolog-rescue” strategy to increase the structural information available for TB drug discovery by using mycobacterial homologs with conserved active sites. We found that of 179 potential TB drug targets selected for x-ray structure determination, only 16 yielded a crystal structure. By adding 1675 homologs from nine other mycobacterial species to the pipeline, structures representing an additional 52 otherwise intractable targets were solved. To determine whether these homolog structures would be useful surrogates in TB drug design, we compared the active sites of 106 pairs of Mtb and non-TB mycobacterial (NTM) enzyme homologs with experimentally determined structures, using three metrics of active site similarity, including superposition of continuous pharmacophoric property distributions. Pair-wise structural comparisons revealed that 19/22 pairs with >55% overall sequence identity had active site Cα RMSD <1 Å, >85% side chain identity, and ≥80% PSAPF (similarity based on pharmacophoric properties) indicating highly conserved active site shape and chemistry. Applying these results to the 52 NTM structures described above, 41 shared >55% sequence identity with the Mtb target, thus increasing the effective structural coverage of the 179 Mtb targets over three-fold (from 9% to 32%). The utility of these structures in TB drug design can be tested by designing inhibitors using the homolog structure and assaying the cognate Mtb enzyme; a promising test case, Mtb cytidylate kinase, is described. The homolog-rescue strategy evaluated here for TB is also generalizable to drug targets for other diseases.

  16. Increasing the structural coverage of tuberculosis drug targets

    DOE PAGES

    Baugh, Loren; Phan, Isabelle; Begley, Darren W.; ...

    2014-12-19

    High-resolution three-dimensional structures of essential Mycobacterium tuberculosis (Mtb) proteins provide templates for TB drug design, but are available for only a small fraction of the Mtb proteome. Here we evaluate an intra-genus “homolog-rescue” strategy to increase the structural information available for TB drug discovery by using mycobacterial homologs with conserved active sites. We found that of 179 potential TB drug targets selected for x-ray structure determination, only 16 yielded a crystal structure. By adding 1675 homologs from nine other mycobacterial species to the pipeline, structures representing an additional 52 otherwise intractable targets were solved. To determine whether these homolog structures would be useful surrogates in TB drug design, we compared the active sites of 106 pairs of Mtb and non-TB mycobacterial (NTM) enzyme homologs with experimentally determined structures, using three metrics of active site similarity, including superposition of continuous pharmacophoric property distributions. Pair-wise structural comparisons revealed that 19/22 pairs with >55% overall sequence identity had active site Cα RMSD <1 Å, >85% side chain identity, and ≥80% PSAPF (similarity based on pharmacophoric properties) indicating highly conserved active site shape and chemistry. Applying these results to the 52 NTM structures described above, 41 shared >55% sequence identity with the Mtb target, thus increasing the effective structural coverage of the 179 Mtb targets over three-fold (from 9% to 32%). The utility of these structures in TB drug design can be tested by designing inhibitors using the homolog structure and assaying the cognate Mtb enzyme; a promising test case, Mtb cytidylate kinase, is described. The homolog-rescue strategy evaluated here for TB is also generalizable to drug targets for other diseases.

  17. Increasing the Structural Coverage of Tuberculosis Drug Targets

    PubMed Central

    Baugh, Loren; Phan, Isabelle; Begley, Darren W.; Clifton, Matthew C.; Armour, Brianna; Dranow, David M.; Taylor, Brandy M.; Muruthi, Marvin M.; Abendroth, Jan; Fairman, James W.; Fox, David; Dieterich, Shellie H.; Staker, Bart L.; Gardberg, Anna S.; Choi, Ryan; Hewitt, Stephen N.; Napuli, Alberto J.; Myers, Janette; Barrett, Lynn K.; Zhang, Yang; Ferrell, Micah; Mundt, Elizabeth; Thompkins, Katie; Tran, Ngoc; Lyons-Abbott, Sally; Abramov, Ariel; Sekar, Aarthi; Serbzhinskiy, Dmitri; Lorimer, Don; Buchko, Garry W.; Stacy, Robin; Stewart, Lance J.; Edwards, Thomas E.; Van Voorhis, Wesley C.; Myler, Peter J.

    2015-01-01

    High-resolution three-dimensional structures of essential Mycobacterium tuberculosis (Mtb) proteins provide templates for TB drug design, but are available for only a small fraction of the Mtb proteome. Here we evaluate an intra-genus “homolog-rescue” strategy to increase the structural information available for TB drug discovery by using mycobacterial homologs with conserved active sites. Of 179 potential TB drug targets selected for x-ray structure determination, only 16 yielded a crystal structure. By adding 1675 homologs from nine other mycobacterial species to the pipeline, structures representing an additional 52 otherwise intractable targets were solved. To determine whether these homolog structures would be useful surrogates in TB drug design, we compared the active sites of 106 pairs of Mtb and non-TB mycobacterial (NTM) enzyme homologs with experimentally determined structures, using three metrics of active site similarity, including superposition of continuous pharmacophoric property distributions. Pair-wise structural comparisons revealed that 19/22 pairs with >55% overall sequence identity had active site Cα RMSD <1Å, >85% side chain identity, and ≥80% PSAPF (similarity based on pharmacophoric properties) indicating highly conserved active site shape and chemistry. Applying these results to the 52 NTM structures described above, 41 shared >55% sequence identity with the Mtb target, thus increasing the effective structural coverage of the 179 Mtb targets over three-fold (from 9% to 32%). The utility of these structures in TB drug design can be tested by designing inhibitors using the homolog structure and assaying the cognate Mtb enzyme; a promising test case, Mtb cytidylate kinase, is described. The homolog-rescue strategy evaluated here for TB is also generalizable to drug targets for other diseases. PMID:25613812

  18. A systematic review of usability test metrics for mobile video streaming apps

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.

    2016-08-01

    This paper presents the results of a systematic review of usability test metrics for mobile video streaming apps. In the study, 238 studies were found, but only 51 relevant papers were eventually selected for the review. The review reveals that the time taken for video streaming and the video quality were the two most popular metrics used in usability tests for mobile video streaming apps. In addition, most of the studies concentrated on the usability of mobile TV, as users are switching from traditional TV to mobile TV.

  19. Spatial Characterization of Radio Propagation Channel in Urban Vehicle-to-Infrastructure Environments to Support WSNs Deployment

    PubMed Central

    Granda, Fausto; Azpilicueta, Leyre; Vargas-Rosales, Cesar; Lopez-Iturri, Peio; Aguirre, Erik; Astrain, Jose Javier; Villandangos, Jesus; Falcone, Francisco

    2017-01-01

    Vehicular ad hoc Networks (VANETs) enable vehicles to communicate with each other as well as with roadside units (RSUs). Although there is a significant research effort in radio channel modeling focused on vehicle-to-vehicle (V2V), not much work has been done for vehicle-to-infrastructure (V2I) using 3D ray-tracing tools. This work evaluates some important parameters of a V2I wireless channel link, such as large-scale path loss and multipath metrics, in a typical urban scenario using a deterministic simulation model based on an in-house 3D Ray-Launching (3D-RL) algorithm at 5.9 GHz. Results show the high impact that spatial distance, link frequency, placement of RSUs, and factors such as roundabouts and the geometry and relative position of obstacles have on the V2I propagation channel. A detailed spatial path loss characterization of the V2I channel along the streets and avenues is presented. The 3D-RL results show high accuracy when compared with measurements, and represent the propagation phenomena more reliably than analytical path loss models. Performance metrics for a real test scenario implemented with an ad hoc VANET wireless sensor network are also described. These results constitute a starting point for the radio-planning design phase of Wireless Sensor Networks (WSNs) for urban V2I deployments in terms of coverage. PMID:28590429

  20. Spatial Characterization of Radio Propagation Channel in Urban Vehicle-to-Infrastructure Environments to Support WSNs Deployment.

    PubMed

    Granda, Fausto; Azpilicueta, Leyre; Vargas-Rosales, Cesar; Lopez-Iturri, Peio; Aguirre, Erik; Astrain, Jose Javier; Villandangos, Jesus; Falcone, Francisco

    2017-06-07

    Vehicular ad hoc Networks (VANETs) enable vehicles to communicate with each other as well as with roadside units (RSUs). Although there is a significant research effort in radio channel modeling focused on vehicle-to-vehicle (V2V), not much work has been done for vehicle-to-infrastructure (V2I) using 3D ray-tracing tools. This work evaluates some important parameters of a V2I wireless channel link, such as large-scale path loss and multipath metrics, in a typical urban scenario using a deterministic simulation model based on an in-house 3D Ray-Launching (3D-RL) algorithm at 5.9 GHz. Results show the high impact that spatial distance, link frequency, placement of RSUs, and factors such as roundabouts and the geometry and relative position of obstacles have on the V2I propagation channel. A detailed spatial path loss characterization of the V2I channel along the streets and avenues is presented. The 3D-RL results show high accuracy when compared with measurements, and represent the propagation phenomena more reliably than analytical path loss models. Performance metrics for a real test scenario implemented with an ad hoc VANET wireless sensor network are also described. These results constitute a starting point for the radio-planning design phase of Wireless Sensor Networks (WSNs) for urban V2I deployments in terms of coverage.

  1. The Nature and Variability of Ensemble Sensitivity Fields that Diagnose Severe Convection

    NASA Astrophysics Data System (ADS)

    Ancell, B. C.

    2017-12-01

    Ensemble sensitivity analysis (ESA) is a statistical technique that uses information from an ensemble of forecasts to reveal relationships between chosen forecast metrics and the larger atmospheric state at various forecast times. A number of studies have employed ESA from the perspectives of dynamical interpretation, observation targeting, and ensemble subsetting toward improved probabilistic prediction of high-impact events, mostly at synoptic scales. We tested ESA using convective forecast metrics at the 2016 HWT Spring Forecast Experiment to understand the utility of convective ensemble sensitivity fields in improving forecasts of severe convection and its individual hazards. The main purpose of this evaluation was to understand the temporal coherence and general characteristics of convective sensitivity fields toward future use in improving ensemble predictability within an operational framework. The magnitude and coverage of simulated reflectivity, updraft helicity, and surface wind speed were used as response functions, and the sensitivity of these functions to winds, temperatures, geopotential heights, and dew points at different atmospheric levels and at different forecast times was evaluated on a daily basis throughout the HWT Spring Forecast Experiment. These sensitivities were calculated within the Texas Tech real-time ensemble system, which possesses 42 members that run twice daily to a 48-hr forecast time. Here we summarize both the findings regarding the nature of the sensitivity fields and the participants' evaluations, which reflect their opinions of the utility of operational ESA. The future direction of ESA for operational use will also be discussed.
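
    A minimal sketch of the core ensemble-sensitivity calculation implied above: the sensitivity of a scalar response function to a state variable is estimated as the covariance between the two across ensemble members divided by the variance of the state variable. The ensemble size, fields, and response function below are synthetic placeholders, not output from the Texas Tech system.

      import numpy as np

      rng = np.random.default_rng(1)
      n_members = 42                              # hypothetical ensemble size
      x = rng.normal(size=(n_members, 50, 60))    # e.g., 2-m dew point on a 50x60 grid
      # Scalar response metric per member (e.g., areal coverage of reflectivity)
      J = 0.8 * x[:, 25, 30] + rng.normal(scale=0.5, size=n_members)

      x_anom = x - x.mean(axis=0)
      J_anom = J - J.mean()
      cov_Jx = np.einsum('m,mij->ij', J_anom, x_anom) / (n_members - 1)
      sens = cov_Jx / x_anom.var(axis=0, ddof=1)  # dJ/dx at every grid point
      print("max |sensitivity|:", float(np.abs(sens).max()))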

  2. Retrospective evaluation of dosimetric quality for prostate carcinomas treated with 3D conformal, intensity modulated and volumetric modulated arc radiotherapy

    PubMed Central

    Crowe, Scott B; Kairn, Tanya; Middlebrook, Nigel; Hill, Brendan; Christie, David R H; Knight, Richard T; Kenny, John; Langton, Christian M; Trapp, Jamie V

    2013-01-01

    Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated across five centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity modulated radiotherapy (IMRT) and 47 treated with volumetric modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and dose to organs at risk (OARs), through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values that were relevant to each OAR. Statistical significance was evaluated using two-tailed Welch's T-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the OARs, with increased compliance with recommended OAR dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT have provided similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT. PMID:26229621
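
    For readers unfamiliar with the target-volume metrics mentioned above, the sketch below computes a homogeneity index and a conformity index from a dose grid. Definitions vary between protocols; this sketch assumes the ICRU-83-style HI = (D2% - D98%)/D50% and the RTOG-style CI = (volume receiving the prescription dose)/(target volume), and the dose grid and PTV mask are hypothetical.

      import numpy as np

      def homogeneity_index(target_dose):
          """ICRU-83-style HI = (D2% - D98%) / D50% for doses inside the PTV."""
          d2, d50, d98 = np.percentile(target_dose, [98, 50, 2])  # D2% = dose to hottest 2%
          return (d2 - d98) / d50

      def conformity_index(dose, target_mask, prescription):
          """RTOG-style CI = (volume receiving the prescription dose) / (target volume)."""
          v_ri = np.count_nonzero(dose >= prescription)
          tv = np.count_nonzero(target_mask)
          return v_ri / tv

      # Hypothetical dose grid (Gy) and PTV mask
      rng = np.random.default_rng(2)
      dose = rng.normal(loc=60.0, scale=3.0, size=(40, 40, 40))
      ptv = np.zeros(dose.shape, dtype=bool)
      ptv[15:25, 15:25, 15:25] = True
      print("HI:", round(homogeneity_index(dose[ptv]), 3))
      print("CI:", round(conformity_index(dose, ptv, prescription=57.0), 3))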

  3. Sigma metrics used to assess analytical quality of clinical chemistry assays: importance of the allowable total error (TEa) target.

    PubMed

    Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten

    2014-07-01

    Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and between/among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. The analyzer to analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
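
    A short sketch of the Sigma metric calculation as it is conventionally defined for clinical assays, sigma = (TEa - |bias|)/CV with all terms in percent; the numbers below are hypothetical QC values, not data from the study.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """Six Sigma metric for a clinical assay: sigma = (TEa - |bias|) / CV, in percent."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      # Hypothetical QC data for one analyte at one control level
      print(sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0))  # -> 4.25 sigma

    Changing only tea_pct (for example, swapping a Ricos biological-variability goal for a CLIA or RiliBÄK limit) changes the resulting sigma, which illustrates the dependence on the TEa source emphasized above.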

  4. Dynamical systems proxies of atmospheric predictability and mid-latitude extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Faranda, Davide; Caballero, Rodrigo; Yiou, Pascal

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. Many extremes (e.g., storms, heatwaves, cold spells, heavy precipitation) are tied to specific patterns of midlatitude atmospheric circulation. The ability to identify these patterns and use them to enhance the predictability of the extremes is therefore a topic of crucial societal and economic value. We propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We use two simple dynamical systems metrics - local dimension and persistence - to identify sets of similar large-scale atmospheric flow patterns which present a coherent temporal evolution. When these patterns correspond to weather extremes, they therefore afford particularly good forward predictability. We specifically test this technique on European winter temperatures, whose variability largely depends on the atmospheric circulation in the North Atlantic region. We find that our dynamical systems approach provides predictability of large-scale temperature extremes up to one week in advance.

  5. Pollutant Emissions and Energy Efficiency under Controlled Conditions for Household Biomass Cookstoves and Implications for Metrics Useful in Setting International Test Standards

    EPA Science Inventory

    Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...

  6. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  7. Tri-Squared Mean Cross Comparative Analysis: An Advanced Post Hoc Qualitative and Quantitative Metric for a More In-Depth Examination of the Initial Research Outcomes of the Tri-Square Test

    ERIC Educational Resources Information Center

    Osler, James Edward

    2013-01-01

    This monograph provides an epistemological rational for the design of an advanced novel analysis metric. The metric is designed to analyze the outcomes of the Tri-Squared Test. This methodology is referred to as: "Tri-Squared Mean Cross Comparative Analysis" (given the acronym TSMCCA). Tri-Squared Mean Cross Comparative Analysis involves…

  8. Linguistic analysis of IPCC summaries for policymakers and associated coverage

    NASA Astrophysics Data System (ADS)

    Barkemeyer, Ralf; Dessai, Suraje; Monge-Sanz, Beatriz; Renzi, Barbara Gabriella; Napolitano, Giulio

    2016-03-01

    The Intergovernmental Panel on Climate Change (IPCC) Summary for Policymakers (SPM) is the most widely read section of IPCC reports and the main springboard for the communication of its assessment reports. Previous studies have shown that communicating IPCC findings to a variety of scientific and non-scientific audiences presents significant challenges to both the IPCC and the mass media. Here, we employ widely established sentiment analysis tools and readability metrics to explore the extent to which information published by the IPCC differs from the presentation of respective findings in the popular and scientific media between 1990 and 2014. IPCC SPMs clearly stand out in terms of low readability, which has remained relatively constant despite the IPCC’s efforts to consolidate and readjust its communications policy. In contrast, scientific and quality newspaper coverage has become increasingly readable and emotive. Our findings reveal easy gains that could be achieved in making SPMs more accessible for non-scientific audiences.
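
    The study's specific readability and sentiment tools are not named in this abstract; as a generic illustration of a readability metric of this kind, the sketch below computes the Flesch Reading Ease score with a crude vowel-group syllable heuristic (an assumption; production tools use dictionary-based syllable counts).

      import re

      def count_syllables(word):
          """Crude vowel-group heuristic; real readability tools use better rules."""
          return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

      def flesch_reading_ease(text):
          """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
          sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

      spm_like = ("Anthropogenic greenhouse gas emissions have increased since the "
                  "pre-industrial era, driven largely by economic and population growth.")
      print(round(flesch_reading_ease(spm_like), 1))   # lower scores indicate harder text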

  9. Self-deployable mobile sensor networks for on-demand surveillance

    NASA Astrophysics Data System (ADS)

    Miao, Lidan; Qi, Hairong; Wang, Feiyi

    2005-05-01

    This paper studies two interconnected problems in mobile sensor network deployment, the optimal placement of heterogeneous mobile sensor platforms for cost-efficient and reliable coverage purposes, and the self-organizable deployment. We first develop an optimal placement algorithm based on a "mosaicked technology" such that different types of mobile sensors form a mosaicked pattern uniquely determined by the popularity of different types of sensor nodes. The initial state is assumed to be random. In order to converge to the optimal state, we investigate the swarm intelligence (SI)-based sensor movement strategy, through which the randomly deployed sensors can self-organize themselves to reach the optimal placement state. The proposed algorithm is compared with the random movement and the centralized method using performance metrics such as network coverage, convergence time, and energy consumption. Simulation results are presented to demonstrate the effectiveness of the mosaic placement and the SI-based movement.

  10. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  11. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  12. Cervical Cancer Screening in Low-Resource Settings: A Cost-Effectiveness Framework for Valuing Tradeoffs between Test Performance and Program Coverage

    PubMed Central

    Campos, Nicole G.; Castle, Philip E.; Wright, Thomas C.; Kim, Jane J.

    2016-01-01

    As cervical cancer screening programs are implemented in low-resource settings, protocols are needed to maximize health benefits under operational constraints. Our objective was to develop a framework for examining health and economic tradeoffs between screening test sensitivity, population coverage, and follow-up of screen-positive women, to help decision makers identify where program investments yield the greatest value. As an illustrative example, we used an individual-based Monte Carlo simulation model of the natural history of human papillomavirus (HPV) and cervical cancer calibrated to epidemiologic data from Uganda. We assumed once in a lifetime screening at age 35 with two-visit HPV DNA testing or one-visit visual inspection with acetic acid (VIA). We assessed the health and economic tradeoffs that arise between 1) test sensitivity and screening coverage; 2) test sensitivity and loss to follow-up (LTFU) of screen-positive women; and 3) test sensitivity, screening coverage, and LTFU simultaneously. The decline in health benefits associated with sacrificing HPV DNA test sensitivity by 20% (e.g., shifting from provider- to self-collection of specimens) could be offset by gains in coverage if coverage increased by at least 20%. When LTFU was 10%, two-visit HPV DNA testing with 80-90% sensitivity was more effective and more cost-effective than one-visit VIA with 40% sensitivity, and yielded greater health benefits than VIA even as VIA sensitivity increased to 60% and HPV test sensitivity declined to 70%. As LTFU increased, two-visit HPV DNA testing became more costly and less effective than one-visit VIA. Setting-specific data on achievable test sensitivity, coverage, follow-up rates, and programmatic costs are needed to guide programmatic decision making for cervical cancer screening. PMID:25943074
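
    A back-of-the-envelope sketch of the sensitivity/coverage tradeoff quantified above, assuming the number of cases detected and acted on scales roughly as coverage x sensitivity x (1 - LTFU). This toy calculation is not the study's microsimulation model; the input values are illustrative.

      def detections(coverage, sensitivity, ltfu):
          """Fraction of prevalent cases detected and followed up (toy model)."""
          return coverage * sensitivity * (1.0 - ltfu)

      base = detections(coverage=0.50, sensitivity=0.90, ltfu=0.10)    # provider-collected HPV test
      drop = detections(coverage=0.50, sensitivity=0.72, ltfu=0.10)    # 20% lower sensitivity (e.g., self-collection)
      offset = detections(coverage=0.60, sensitivity=0.72, ltfu=0.10)  # plus a 20% relative gain in coverage
      # 0.405, 0.324, ~0.389: most but not all of the loss is recovered, consistent
      # with the finding that coverage must rise by "at least" 20% to offset the loss.
      print(base, drop, offset)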

  13. Aerocapture, Entry, Descent and Landing (AEDL) Human Planetary Landing Systems. Section 10: AEDL Analysis, Test and Validation Infrastructure

    NASA Technical Reports Server (NTRS)

    Arnold, J.; Cheatwood, N.; Powell, D.; Wolf, A.; Guensey, C.; Rivellini, T.; Venkatapathy, E.; Beard, T.; Beutter, B.; Laub, B.

    2005-01-01

    Contents include the following: listing of critical capabilities (knowledge, procedures, training, facilities) and metrics for validating that they are mission ready. Examples of critical capabilities and validation metrics: ground test and simulations. Flight testing to prove capabilities are mission ready. Issues and recommendations.

  14. Landscape metrics for assessment of landscape destruction and rehabilitation.

    PubMed

    Herzog, F; Lausch, A; Müller, E; Thulke, H H; Steinhardt, U; Lehmann, S

    2001-01-01

    This investigation tested the usefulness of geometry-based landscape metrics for monitoring landscapes in a heavily disturbed environment. Research was carried out in a 75 sq km study area in Saxony, eastern Germany, where the landscape has been affected by surface mining and agricultural intensification. Landscape metrics were calculated from digital maps (1912, 1944, 1973, 1989) for the entire study area and for subregions (river valleys, plains), which were defined using the original geology and topography of the region. Correlation and factor analyses were used to select a set of landscape metrics suitable for landscape monitoring. Little land-use change occurred in the first half of the century, but political decisions and technological developments led to considerable change later. Metrics showed a similar pattern with almost no change between 1912 and 1944, but dramatic changes after 1944. Nonparametric statistical methods were used to test whether metrics differed between river valleys and plains. Significant differences in the metrics for these regions were found in the early maps (1912, 1944), but these differences were not significant in 1973 or 1989. These findings indicate that anthropogenic influences created a more homogeneous landscape.

  15. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'

  16. Theoretical frameworks for testing relativistic gravity: A review

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Will, C. M.; Ni, W.

    1971-01-01

    Metric theories of gravity are presented, including the definition of metric theory, evidence for its existence, and response of matter to gravity with test body trajectories, gravitational red shift, and stressed matter responses. Parametrized post-Newtonian framework and interpretations are reviewed. Gamma, beta and gamma, and varied other parameters were measured. Deflection of electromagnetic waves, radar time delay, geodetic gyroscope precession, perihelion shifts, and periodic effects in orbits are among various studies carried out for metric theory experimentation.

  17. Testing General Relativity with the Reflection Spectrum of the Supermassive Black Hole in 1H0707-495.

    PubMed

    Cao, Zheng; Nampalliwar, Sourabh; Bambi, Cosimo; Dauser, Thomas; García, Javier A

    2018-02-02

    Recently, we have extended the x-ray reflection model relxill to test the spacetime metric in the strong gravitational field of astrophysical black holes. In the present Letter, we employ this extended model to analyze XMM-Newton, NuSTAR, and Swift data of the supermassive black hole in 1H0707-495 and test deviations from a Kerr metric parametrized by the Johannsen deformation parameter α_{13}. Our results are consistent with the hypothesis that the spacetime metric around the black hole in 1H0707-495 is described by the Kerr solution.

  18. Identifying Drug-Target Interactions with Decision Templates.

    PubMed

    Yan, Xiao-Ying; Zhang, Shao-Wu

    2018-01-01

    During the development of new drugs, identification of drug-target interactions is a primary concern. However, chemical or biological experiments are limited in coverage and carry a huge cost in both time and money. Based on drug similarity and target similarity, chemogenomic methods are able to predict potential drug-target interactions (DTIs) on a large scale and do not require target structures or ligand entries. However, existing similarity measures do not adequately reflect the cases in which drugs with variant structures interact with common targets, or targets with dissimilar sequences interact with the same drugs. In addition, though several other similarity metrics have been developed to predict DTIs, the naïve combination of multiple similarity metrics (especially heterogeneous similarities) does not sufficiently exploit them. In this paper, based on Gene Ontology and pathway annotation, we introduce two novel target similarity metrics to address the above issues. More importantly, we propose a more effective strategy, based on decision templates, to integrate multiple classifiers designed with multiple similarity metrics. In the scenarios of predicting existing targets for new drugs and predicting approved drugs for new protein targets, the results on the DTI benchmark datasets show that our target similarity metrics are able to enhance the predictive accuracy in both scenarios, and that the elaborate fusion strategy of multiple classifiers has better predictive power than the naïve combination of multiple similarity metrics. Compared with two other state-of-the-art approaches on the four popular benchmark datasets of binary drug-target interactions, our method achieves the best results in terms of AUC and AUPR for predicting available targets for new drugs (S2) and predicting approved drugs for new protein targets (S3). These results demonstrate that our method can effectively predict drug-target interactions. The software package is freely available at https://github.com/NwpuSY/DT_all.git for academic users. Copyright © Bentham Science Publishers.
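
    A minimal sketch of decision-template fusion in the spirit described above (Kuncheva-style decision templates): each class template is the mean decision profile of the base classifiers over that class's training samples, and a test sample is assigned to the class whose template is nearest. The base-classifier outputs below are random placeholders rather than classifiers built on the paper's similarity metrics.

      import numpy as np

      def fit_decision_templates(decision_profiles, labels, n_classes):
          """decision_profiles: (n_samples, n_classifiers, n_classes) soft outputs.
          The template of class c is the mean profile over training samples of class c."""
          return np.stack([decision_profiles[labels == c].mean(axis=0)
                           for c in range(n_classes)])

      def predict_with_templates(templates, decision_profile):
          """Assign the class whose template is closest (squared Euclidean distance)."""
          dists = ((templates - decision_profile) ** 2).sum(axis=(1, 2))
          return int(np.argmin(dists))

      # Toy setup: 3 base classifiers (e.g., built on different similarity metrics),
      # 2 classes (interacting / non-interacting drug-target pair).
      rng = np.random.default_rng(3)
      y_train = rng.integers(0, 2, size=200)
      dp_train = rng.random(size=(200, 3, 2))
      dp_train[np.arange(200), :, y_train] += 0.5        # make outputs informative
      templates = fit_decision_templates(dp_train, y_train, n_classes=2)
      print(predict_with_templates(templates, dp_train[0]), y_train[0])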

  19. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Astrophysics Data System (ADS)

    Won, Y. I.; Wanchoo, L.; Behnke, J.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) stores and distributes data from EOS satellites, as well as ancillary, airborne, in-situ, and socio-economic data. Twelve EOSDIS data centers support different scientific disciplines by providing products and services tailored to specific science communities. Although discipline oriented, these data centers provide common data management functions of ingest, archive and distribution, as well as documentation of their data and services on their websites. The Earth Science Data and Information System (ESDIS) Project collects these metrics from the EOSDIS data centers on a daily basis through a tool called the ESDIS Metrics System (EMS). These metrics are used in this study. The implementation of the Earthdata Login - formerly known as the User Registration System (URS) - across the various NASA data centers provides the EMS with additional information about users obtaining data products from EOSDIS data centers. These additional user attributes collected by the Earthdata Login, such as the user's primary area of study, can augment the understanding of data usage, which in turn can help the EOSDIS program better understand the users' needs. This study reviews the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements such as discipline and study area will assist in understanding how the users have evolved over time. The data usage pattern based on version numbers may also provide some insight into the level of data quality. In addition, the data metrics by various services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Web Map Service (WMS), Web Coverage Service (WCS), and subsets will address how these services have extended the usage of data. Overall, this study presents the usage of data and metadata through metrics analyses and will assist data centers in better supporting the needs of the users.

  20. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, S; Mehta, V

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.

  1. Stream Dissolved Organic Matter Quantity and Quality Along a Wetland-Cropland Catchment Gradient

    NASA Astrophysics Data System (ADS)

    McDonough, O.; Hosen, J. D.; Lang, M. W.; Oesterling, R.; Palmer, M.

    2012-12-01

    Wetlands may be critical sources of dissolved organic matter (DOM) to stream networks. Yet, more than half of wetlands in the continental United States have been lost since European settlement, with the majority of loss attributed to agriculture. The degree to which agricultural loss of wetlands impacts stream DOM is largely unknown and may have important ecological implications. Using twenty headwater catchments on the Delmarva Peninsula (Maryland, USA), we investigated the seasonal influence of wetland and cropland coverage on downstream DOM quantity and quality. In addition to quantifying bulk downstream dissolved organic carbon (DOC) concentration, we used a suite of DOM UV-absorbance metrics and parallel factor analysis (PARAFAC) modeling of excitation-emission fluorescence spectra (EEMs) to characterize DOM composition. Percent bioavailable DOC (%BDOC) was measured during the Spring sampling using a 28-day incubation. Percent wetland coverage and % cropland within the watersheds were significantly negatively correlated (r = -0.93, p < 0.001). Results show that % wetland coverage was positively correlated with stream DOM concentration, molecular weight, aromaticity, humic-like fluorescence, and allochthonous origin. Conversely, increased wetland coverage was negatively correlated with stream DOM protein-like fluorescence. Percent BDOC decreased with DOM humic-like fluorescence and increased with protein-like fluorescence. We observed minimal seasonal interaction between % wetland coverage and DOM concentration and composition across Spring, Fall, and Winter sampling seasons. However, principal component analysis suggested more pronounced seasonal differences exist in stream DOM. This study highlights the influence of wetlands on downstream DOM in agriculturally impacted landscapes where loss of wetlands to cultivation may significantly alter stream DOM quantity and quality.

  2. Robustness of Representative Signals Relative to Data Loss Using Atlas-Based Parcellations.

    PubMed

    Gajdoš, Martin; Výtvarová, Eva; Fousek, Jan; Lamoš, Martin; Mikl, Michal

    2018-04-24

    Parcellation-based approaches are an important part of functional magnetic resonance imaging data analysis. They are a necessary processing step for sorting data in structurally or functionally homogenous regions. Real functional magnetic resonance imaging datasets usually do not cover the atlas template completely; they are often spatially constrained due to the physical limitations of MR sequence settings, the inter-individual variability in brain shape, etc. When using a parcellation template, many regions are not completely covered by actual data. This paper addresses the issue of the area coverage required in real data in order to reliably estimate the representative signal and the influence of this kind of data loss on network analysis metrics. We demonstrate this issue on four datasets using four different widely used parcellation templates. We used two erosion approaches to simulate data loss on the whole-brain level and the ROI-specific level. Our results show that changes in ROI coverage have a systematic influence on network measures. Based on the results of our analysis, we recommend controlling the ROI coverage and retaining at least 60% of the area in order to ensure at least 80% of explained variance of the original signal.
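
    A small sketch of the quantities discussed above: the representative signal as the mean time series over an ROI's voxels, and the explained variance between the signal from the full parcel and from a partially covered one. The 4-D array, ROI, and 40% random voxel loss are synthetic; the study's erosion procedure and atlases are not reproduced here.

      import numpy as np

      def representative_signal(data, roi_mask):
          """Mean time series over the voxels of an ROI.  data: (x, y, z, t)."""
          return data[roi_mask].mean(axis=0)

      def explained_variance(full_signal, reduced_signal):
          """Squared correlation between the full-ROI and reduced-ROI signals."""
          r = np.corrcoef(full_signal, reduced_signal)[0, 1]
          return r ** 2

      # Toy 4-D dataset and a cubic ROI; drop 40% of the ROI voxels at random to
      # mimic incomplete coverage of the atlas parcel by the acquired volume.
      rng = np.random.default_rng(4)
      data = rng.normal(size=(20, 20, 20, 120))
      roi = np.zeros(data.shape[:3], dtype=bool)
      roi[5:12, 5:12, 5:12] = True
      keep = roi.copy()
      keep[roi] = rng.random(roi.sum()) > 0.4
      full = representative_signal(data, roi)
      partial = representative_signal(data, keep)
      print("explained variance:", round(explained_variance(full, partial), 3))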

  3. AIDS in Black and White: The Influence of Newspaper Coverage of HIV/AIDS on HIV/AIDS Testing Among African Americans and White Americans, 1993–2007

    PubMed Central

    STEVENS, ROBIN; HORNIK, ROBERT C.

    2014-01-01

    This study examined the impact of newspaper coverage of HIV/AIDS on HIV testing behavior in the US population. HIV testing data were taken from the CDC’s National Behavioral Risk Factor Surveillance System (BRFSS) from 1993 to 2007 (n=265,557). News stories from 24 daily newspapers and one wire service during the same time period were content analyzed. Distributed lagged regression models were employed to estimate how well HIV/AIDS newspaper coverage predicted later HIV testing behavior. Increases in HIV/AIDS newspaper coverage were associated with declines in population level HIV testing. Each additional 100 HIV/AIDS related newspaper stories published each month was associated with a 1.7% decline in HIV testing levels in the subsequent month. This effect differed by race, with African Americans exhibiting greater declines in HIV testing subsequent to increased news coverage than did Whites. These results suggest that mainstream newspaper coverage of HIV/AIDS may have a particularly deleterious effect on African Americans, one of the groups most impacted by the disease. The mechanisms driving the negative effect deserve further investigation to improve reporting on HIV/AIDS in the media. PMID:24597895

  4. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
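
    Independent of the JPF tooling itself, the sketch below illustrates what an MC/DC obligation looks like: for each condition in a decision, a pair of test inputs that differ only in that condition and flip the decision outcome. It enumerates such independence pairs by brute force for a small hypothetical decision.

      from itertools import product

      def mcdc_pairs(decision, n_conditions):
          """For each condition, list pairs of input vectors that differ only in that
          condition and flip the decision outcome (the MC/DC independence pairs)."""
          pairs = {i: [] for i in range(n_conditions)}
          for vec in product([False, True], repeat=n_conditions):
              for i in range(n_conditions):
                  flipped = list(vec)
                  flipped[i] = not flipped[i]
                  flipped = tuple(flipped)
                  if vec < flipped and decision(*vec) != decision(*flipped):
                      pairs[i].append((vec, flipped))
          return pairs

      # Hypothetical decision with three conditions, as might be annotated for test generation
      decision = lambda a, b, c: (a and b) or c
      for cond, plist in mcdc_pairs(decision, 3).items():
          print("condition", "abc"[cond], "->", plist)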

  5. Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo

    2010-08-01

    A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by examining the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality for cases where some parts of the suspension system of the test car are modified.

  6. Comparison of watershed disturbance predictive models for stream benthic macroinvertebrates for three distinct ecoregions in western US

    USGS Publications Warehouse

    Waite, Ian R.; Brown, Larry R.; Kennen, Jonathan G.; May, Jason T.; Cuffney, Thomas F.; Orlando, James L.; Jones, Kimberly A.

    2010-01-01

    The successful use of macroinvertebrates as indicators of stream condition in bioassessments has led to heightened interest throughout the scientific community in the prediction of stream condition. For example, predictive models are increasingly being developed that use measures of watershed disturbance, including urban and agricultural land-use, as explanatory variables to predict various metrics of biological condition such as richness, tolerance, percent predators, index of biotic integrity, functional species traits, or even ordination axes scores. Our primary intent was to determine if effective models could be developed using watershed characteristics of disturbance to predict macroinvertebrate metrics among disparate and widely separated ecoregions. We aggregated macroinvertebrate data from universities and state and federal agencies in order to assemble stream data sets of high enough density appropriate for modeling in three distinct ecoregions in Oregon and California. Extensive review and quality assurance of macroinvertebrate sampling protocols, laboratory subsample counts and taxonomic resolution was completed to assure data comparability. We used widely available digital coverages of land-use and land-cover data summarized at the watershed and riparian scale as explanatory variables to predict macroinvertebrate metrics commonly used by state resource managers to assess stream condition. The “best” multiple linear regression models from each region required only two or three explanatory variables to model macroinvertebrate metrics and explained 41–74% of the variation. In each region the best model contained some measure of urban and/or agricultural land-use, yet often the model was improved by including a natural explanatory variable such as mean annual precipitation or mean watershed slope. Two macroinvertebrate metrics were common among all three regions, the metric that summarizes the richness of tolerant macroinvertebrates (RICHTOL) and some form of EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness. Best models were developed for the same two invertebrate metrics even though the geographic regions reflect distinct differences in precipitation, geology, elevation, slope, population density, and land-use. With further development, models like these can be used to elicit better causal linkages to stream biological attributes or condition and can be used by researchers or managers to predict biological indicators of stream condition at unsampled sites.

  7. Developments in Seismic Data Quality Assessment Using MUSTANG at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Sharer, G.; Keyson, L.; Templeton, M. E.; Weertman, B.; Smith, K.; Sweet, J. R.; Tape, C.; Casey, R. E.; Ahern, T.

    2017-12-01

    MUSTANG is the automated data quality metrics system at the IRIS Data Management Center (DMC), designed to help characterize data and metadata "goodness" across the IRIS data archive, which holds 450 TB of seismic and related earth science data spanning the past 40 years. It calculates 46 metrics ranging from sample statistics and miniSEED state-of-health flag counts to Power Spectral Densities (PSDs) and Probability Density Functions (PDFs). These quality measurements are easily and efficiently accessible to users through the use of web services, which allows users to make requests not only by station and time period but also to filter the results according to metric values that match a user's data requirements. Results are returned in a variety of formats, including XML, JSON, CSV, and text. In the case of PSDs and PDFs, results can also be retrieved as plot images. In addition, there are several user-friendly client tools available for exploring and visualizing MUSTANG metrics: LASSO, MUSTANG Databrowser, and MUSTANGular. Over the past year we have made significant improvements to MUSTANG. We have nearly complete coverage over our archive for broadband channels with sample rates of 20-200 sps. With this milestone achieved, we are now expanding to include higher sample rate, short-period, and strong-motion channels. Data availability metrics will soon be calculated when a request is made which guarantees that the information reflects the current state of the archive and also allows for more flexibility in content. For example, MUSTANG will be able to return a count of gaps for any arbitrary time period instead of being limited to 24 hour spans. We are also promoting the use of data quality metrics beyond the IRIS archive through our recent release of ISPAQ, a Python command-line application that calculates MUSTANG-style metrics for users' local miniSEED files or for any miniSEED data accessible through FDSN-compliant web services. Finally, we will explore how researchers are using MUSTANG in real-world situations to select data, improve station data quality, anticipate station outages and servicing, and characterize site noise and environmental conditions.

  8. Reliability of TMS metrics in patients with chronic incomplete spinal cord injury.

    PubMed

    Potter-Baker, K A; Janini, D P; Frost, F S; Chabra, P; Varnerin, N; Cunningham, D A; Sankarasubramanian, V; Plow, E B

    2016-11-01

    Test-retest reliability analysis in individuals with chronic incomplete spinal cord injury (iSCI). The purpose of this study was to examine the reliability of neurophysiological metrics acquired with transcranial magnetic stimulation (TMS) in individuals with chronic incomplete tetraplegia. Cleveland Clinic Foundation, Cleveland, Ohio, USA. TMS metrics of corticospinal excitability, output, inhibition and motor map distribution were collected in muscles with a higher MRC grade and muscles with a lower MRC grade on the more affected side of the body. Metrics denoting upper limb function were also collected. All metrics were collected at two sessions separated by a minimum of two weeks. Reliability between sessions was determined using Spearman's correlation coefficients and concordance correlation coefficients (CCCs). We found that TMS metrics that were acquired in higher MRC grade muscles were approximately two times more reliable than those collected in lower MRC grade muscles. TMS metrics of motor map output, however, demonstrated poor reliability regardless of muscle choice (P=0.34; CCC=0.51). Correlation analysis indicated that patients with more baseline impairment and/or those in a more chronic phase of iSCI demonstrated greater variability of metrics. In iSCI, reliability of TMS metrics varies depending on the muscle grade of the tested muscle. Variability is also influenced by factors such as baseline motor function and time post SCI. Future studies that use TMS metrics in longitudinal study designs to understand functional recovery should be cautious as choice of muscle and clinical characteristics can influence reliability.
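
    For reference, a small sketch of Lin's concordance correlation coefficient, one of the two reliability statistics named above; the session values are hypothetical TMS measurements, not data from the study.

      import numpy as np

      def concordance_correlation(x, y):
          """Lin's concordance correlation coefficient between test and re-test values:
          CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.mean((x - x.mean()) * (y - y.mean()))
          return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      # Hypothetical motor-evoked-potential amplitudes (mV) from two TMS sessions
      session1 = [0.8, 1.2, 0.5, 1.9, 1.1, 0.7, 1.4]
      session2 = [0.9, 1.1, 0.6, 1.7, 1.3, 0.6, 1.5]
      print(round(concordance_correlation(session1, session2), 3))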

  9. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
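
    A minimal sketch of the unequal-weighting idea described above: each model gets a weight that decays with its error in some process-based metric, and the ensemble mean is taken with those weights. The exponential weighting function and all numbers are assumptions for illustration, not the framework's actual scheme.

      import numpy as np

      def skill_weights(model_metric_errors, sharpness=1.0):
          """Turn per-model errors in a process-based metric (e.g., the mismatch between a
          modeled and an observed OLR-surface temperature relationship) into normalized
          weights: smaller error means larger weight."""
          w = np.exp(-sharpness * np.asarray(model_metric_errors) ** 2)
          return w / w.sum()

      def weighted_ensemble_mean(projections, weights):
          """projections: (n_models, ...) array of fields; weights: (n_models,)."""
          return np.tensordot(weights, projections, axes=1)

      # Toy example with 5 models projecting a regional warming value (degrees C)
      errors = [0.1, 0.5, 0.2, 0.9, 0.3]            # hypothetical metric mismatches
      proj = np.array([2.1, 3.4, 2.5, 4.0, 2.8])
      w = skill_weights(errors)
      print("equal-weight mean:", proj.mean(),
            "metric-weighted mean:", float(weighted_ensemble_mean(proj, w)))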

  10. Evaluating Constraints on Heavy-Ion SEE Susceptibility Imposed by Proton SEE Testing and Other Mixed Environments

    NASA Technical Reports Server (NTRS)

    Ladbury, R. L.; Lauenstein, J.-M.

    2016-01-01

    We develop metrics for assessing the effectiveness of proton SEE data for bounding heavy-ion SEE susceptibility. The metrics range from simple geometric criteria requiring no knowledge of the test articles to bounds of SEE rates.

  11. Reproducibility and repeatability of semi-quantitative 18F-fluorodihydrotestosterone (FDHT) uptake metrics in castration-resistant prostate cancer metastases: a prospective multi-center study.

    PubMed

    Vargas, Hebert Alberto; Kramer, Gem M; Scott, Andrew M; Weickhardt, Andrew; Meier, Andreas A; Parada, Nicole; Beattie, Bradley J; Humm, John L; Staton, Kevin D; Zanzonico, Pat B; Lyashchenko, Serge K; Lewis, Jason S; Yaqub, Maqsood; Sosa, Ramon E; van den Eertwegh, Alfons J; Davis, Ian D; Ackermann, Uwe; Pathmaraj, Kunthi; Schuit, Robert C; Windhorst, Albert D; Chua, Sue; Weber, Wolfgang A; Larson, Steven M; Scher, Howard I; Lammertsma, Adriaan A; Hoekstra, Otto; Morris, Michael J

    2018-04-06

    18F-fluorodihydrotestosterone (18F-FDHT) is a radiolabeled analogue of the androgen receptor's primary ligand that is currently being credentialed as a biomarker for prognosis, response, and pharmacodynamic effects of new therapeutics. As part of the biomarker qualification process, we prospectively assessed its reproducibility and repeatability in men with metastatic castration-resistant prostate cancer (mCRPC). Methods: We conducted a prospective multi-institutional study of mCRPC patients undergoing two (test/re-test) 18F-FDHT PET/CT scans on two consecutive days. Two independent readers evaluated all examinations and recorded standardized uptake values (SUVs), androgen receptor-positive tumor volumes (ARTV), and total lesion uptake (TLU) for the most avid lesion detected in each of 32 pre-defined anatomical regions. The relative absolute difference and reproducibility coefficient (RC) of each metric were calculated between the test and re-test scans. Linear regression analyses, intra-class correlation coefficients (ICC), and Bland-Altman plots were used to evaluate repeatability of 18F-FDHT metrics. The coefficient of variation (COV) and ICC were used to assess inter-observer reproducibility. Results: Twenty-seven patients with 140 18F-FDHT-avid regions were included. The best repeatability among 18F-FDHT uptake metrics was found for SUV metrics (SUVmax, SUVmean, and SUVpeak), with no significant differences in repeatability found among them. Correlations between the test and re-test scans were strong for all SUV metrics (R2 ≥ 0.92; ICC ≥ 0.97). The RCs of the SUV metrics ranged from 21.3% for SUVpeak to 24.6% for SUVmax. The test and re-test ARTV and TLU, respectively, were highly correlated (R2 and ICC ≥ 0.97), although variability was significantly higher than that for SUV (RCs > 46.4%). The PSA levels, Gleason score, weight, and age did not affect repeatability, nor did total injected activity, uptake measurement time, or differences in uptake time between the two scans. Including the single most avid lesion per patient, the five most avid lesions per patient, only lesions ≥ 4.2 mL, only lesions with an SUV ≥ 4 g/mL, or normalizing of SUV to area under the parent plasma activity concentration-time curve did not significantly affect repeatability. All metrics showed high inter-observer reproducibility (ICC > 0.98; COV < 0.2-10.8%). Conclusion: 18F-FDHT is a highly reproducible means of imaging mCRPC. Amongst 18F-FDHT uptake metrics, SUV had the highest repeatability among the measures assessed. These performance characteristics lend themselves to further biomarker development and clinical qualification of the tracer. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
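
    A small sketch of the test/re-test statistics named above, assuming the common Bland-Altman conventions: relative absolute difference, mean bias, and a repeatability coefficient RC = 1.96*sqrt(2)*(within-subject SD). The paper's exact definitions (for example, RC expressed as a percentage) may differ, and the SUVpeak values below are hypothetical.

      import numpy as np

      def repeatability_stats(test, retest):
          """Within-subject repeatability of a paired test/re-test uptake metric.
          Returns relative absolute difference (%), Bland-Altman bias, and the
          repeatability coefficient RC = 1.96*sqrt(2)*within-subject SD."""
          test, retest = np.asarray(test, float), np.asarray(retest, float)
          diff = retest - test
          mean = (retest + test) / 2.0
          rad = 100.0 * np.mean(np.abs(diff) / mean)
          within_sd = np.sqrt(np.mean(diff ** 2) / 2.0)   # for paired replicates
          rc = 1.96 * np.sqrt(2.0) * within_sd
          return rad, diff.mean(), rc

      # Hypothetical SUVpeak values for the same lesions on consecutive days
      day1 = [4.2, 6.8, 3.1, 9.4, 5.5]
      day2 = [4.6, 6.1, 3.3, 10.2, 5.2]
      print(repeatability_stats(day1, day2))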

  12. Test Methods for Robot Agility in Manufacturing.

    PubMed

    Downs, Anthony; Harrison, William; Schlenoff, Craig

    2016-01-01

    The paper aims to define and describe test methods and metrics to assess industrial robot system agility in both simulation and in reality. The paper describes test methods and associated quantitative and qualitative metrics for assessing robot system efficiency and effectiveness which can then be used for the assessment of system agility. The paper describes how the test methods were implemented in a simulation environment and real world environment. It also shows how the metrics are measured and assessed as they would be in a future competition. The test methods described in this paper will push forward the state of the art in software agility for manufacturing robots, allowing small and medium manufacturers to better utilize robotic systems. The paper fulfills the identified need for standard test methods to measure and allow for improvement in software agility for manufacturing robots.

  13. Establishing benchmarks and metrics for disruptive technologies, inappropriate and obsolete tests in the clinical laboratory.

    PubMed

    Kiechle, Frederick L; Arcenas, Rodney C; Rogers, Linda C

    2014-01-01

    Benchmarks and metrics related to laboratory test utilization are based on evidence-based medical literature that may suffer from a positive publication bias. Guidelines are only as good as the data reviewed to create them. Disruptive technologies require time for appropriate use to be established before utilization review will be meaningful. Metrics include monitoring the use of obsolete tests and the inappropriate use of lab tests. Test utilization by clients in a hospital outreach program can be used to monitor the impact of new clients on lab workload. A multi-disciplinary laboratory utilization committee is the most effective tool for modifying bad habits, and reviewing and approving new tests for the lab formulary or by sending them out to a reference lab. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. The cost of genetic testing for ocular disease: who pays?

    PubMed

    Capasso, Jenina E

    2014-09-01

    To facilitate ophthalmologists' understanding of the cost of genetic testing in ocular disease, the complexities of insurance coverage, and their impact on the availability of testing. Many insurance carriers address coverage for genetic testing in written clinical policies. They provide criteria for medically necessary testing. These policies mostly cover testing for individuals who are symptomatic and in whom testing will have a direct impact on medical treatment. In cases in which no treatments are currently available, other than research trials, patients may have difficulty in getting insurance coverage for genetic testing. Genetic testing for inherited eye diseases can be costly but has many benefits to patient care, including confirmation of a diagnosis, insight into prognostic information, and identification of associated health risks, inheritance patterns, and possible current and future treatments. As gene therapy advances and treatments for ocular diseases become available, coverage for genetic testing by third-party payers could increase on the basis of current clinical policies.

  15. Foul tip impact attenuation of baseball catcher masks using head impact metrics

    PubMed Central

    White, Terrance R.; Cutcliffe, Hattie C.; Shridharani, Jay K.; Wood, Garrett W.; Bass, Cameron R.

    2018-01-01

    Currently, no scientific consensus exists on the relative safety of catcher mask styles and materials. Due to differences in mass and material properties, the style and material of a catcher mask influences the impact metrics observed during simulated foul ball impacts. The catcher surrogate was a Hybrid III head and neck equipped with a six degree of freedom sensor package to obtain linear accelerations and angular rates. Four mask styles were impacted using an air cannon for six 30 m/s and six 35 m/s impacts to the nasion. To quantify impact severity, the metrics peak linear acceleration, peak angular acceleration, Head Injury Criterion, Head Impact Power, and Gadd Severity Index were used. An Analysis of Covariance and a Tukey’s HSD Test were conducted to compare the least squares mean between masks for each head injury metric. For each injury metric a P-Value less than 0.05 was found indicating a significant difference in mask performance. Tukey’s HSD test found for each metric, the traditional style titanium mask fell in the lowest performance category while the hockey style mask was in the highest performance category. Limitations of this study prevented a direct correlation from mask testing performance to mild traumatic brain injury. PMID:29856814
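
    Two of the injury metrics listed above have standard closed forms: the Gadd Severity Index, GSI = integral of a(t)^2.5 dt, and the Head Injury Criterion, the maximum over time windows of (t2 - t1)*(mean acceleration over the window)^2.5. The sketch below evaluates both on a synthetic half-sine pulse; the 15 ms window limit and the pulse itself are assumptions, not the paper's recorded data.

      import numpy as np

      def gadd_severity_index(accel_g, dt):
          """GSI = integral of a(t)^2.5 dt, with a in g and t in seconds."""
          return float(np.sum(accel_g ** 2.5) * dt)

      def head_injury_criterion(accel_g, dt, max_window=0.015):
          """HIC = max over windows (t1, t2) of (t2-t1) * (mean accel over window)^2.5."""
          n = len(accel_g)
          cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral of a dt
          best = 0.0
          max_len = int(round(max_window / dt))
          for i in range(n):
              for j in range(i + 1, min(n, i + max_len) + 1):
                  T = (j - i) * dt
                  avg = (cum[j] - cum[i]) / T
                  best = max(best, T * avg ** 2.5)
          return best

      # Hypothetical 5 ms half-sine impact pulse peaking at 80 g, sampled at 10 kHz
      dt = 1e-4
      t = np.arange(0, 0.005, dt)
      pulse = 80.0 * np.sin(np.pi * t / 0.005)
      print("GSI:", round(gadd_severity_index(pulse, dt), 1),
            "HIC15:", round(head_injury_criterion(pulse, dt), 1))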

  16. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article describes the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis and is intended to serve as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. The designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  17. Overcoming the effects of false positives and threshold bias in graph theoretical analyses of neuroimaging data.

    PubMed

    Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K

    2015-09-01

    Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness and smallworldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
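
    As a rough illustration of the idea behind MTPC described above, the sketch below computes one of the four network metrics (global efficiency) across a range of thresholds and builds a permutation null at each threshold. The cluster-enhancement step, the tractography-specific processing, and all variable names are simplifications and assumptions, not the authors' implementation.

        # Simplified sketch (not the authors' code): test a graph metric across
        # multiple thresholds with group-permutation nulls, in the spirit of MTPC.
        import numpy as np
        from scipy.sparse.csgraph import shortest_path

        def global_efficiency(weighted, threshold):
            adj = (weighted > threshold).astype(float)
            d = shortest_path(adj, unweighted=True, directed=False)
            inv = np.zeros_like(d)
            np.divide(1.0, d, out=inv, where=(d > 0) & np.isfinite(d))
            n = d.shape[0]
            return inv.sum() / (n * (n - 1))

        def mtpc_style_test(mats_a, mats_b, thresholds, n_perm=1000, seed=0):
            rng = np.random.default_rng(seed)
            curves = np.array([[global_efficiency(m, t) for t in thresholds]
                               for m in list(mats_a) + list(mats_b)])
            labels = np.array([0] * len(mats_a) + [1] * len(mats_b))
            observed = curves[labels == 0].mean(0) - curves[labels == 1].mean(0)
            null = np.empty((n_perm, len(thresholds)))
            for i in range(n_perm):
                perm = rng.permutation(labels)
                null[i] = curves[perm == 0].mean(0) - curves[perm == 1].mean(0)
            crit = np.quantile(np.abs(null), 0.95, axis=0)
            # MTPC then looks for clusters of consecutive thresholds where the
            # observed effect exceeds the permutation critical value.
            return observed, crit, np.abs(observed) > crit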

  18. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests. In particular, it runs tests that resemble as closely as possible the actual job workflows used by the experiments, thus exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.

  19. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
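
    As a minimal illustration (not the paper's weighted, confusion-matrix-based formulation), the Dice index mentioned above can be sketched for a reported set of library chemicals versus the set truly present in the plume; the chemical names below are invented.

        # Dice index between the identified chemical set and the true plume contents.
        def dice_index(identified, truth):
            identified, truth = set(identified), set(truth)
            if not identified and not truth:
                return 1.0  # both empty: perfect agreement by convention
            return 2 * len(identified & truth) / (len(identified) + len(truth))

        # Example: two of three reported chemicals are correct, one constituent is missed.
        print(dice_index({"SF6", "NH3", "CH4"}, {"SF6", "NH3", "DMMP"}))  # 0.666...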

  20. Recreational-Grade Sidescan Sonar: Transforming a Low-Cost Leisure Gadget into a High Resolution Riverbed Remote Sensing Tool

    NASA Astrophysics Data System (ADS)

    Hamill, D. D.; Buscombe, D.; Wheaton, J. M.; Wilcock, P. R.

    2016-12-01

    The size and spatial organization of bed material, or bed texture, is a fundamental physical attribute of lotic ecosystems. Traditional methods to map bed texture (such as physical samples and underwater video) are limited by low spatial coverage and poor positioning precision. Recreational-grade sidescan sonar systems now offer the possibility of imaging submerged riverbed sediments, in any navigable body of water, at coverages and resolutions sufficient to identify subtle changes in bed texture, with minimal cost, sonar expertise, or logistical effort. This facilitates the democratization of acoustic imaging of benthic environments in support of ecohydrological studies in shallow water, without the rigors of hydrographic standards or the need for hydroacoustic expertise and proprietary hydrographic industry software. We investigate the possibility of using recreational-grade sidescan sonar for sedimentary change detection using a case study of repeat sidescan imaging of mixed sand-gravel-rock riverbeds in a debris-fan dominated canyon river, at a coverage and resolution that meets the objectives of studies of the effects of changing bed substrates on salmonid spawning. A repeat substrate mapping analysis on data collected between 2012 and 2015 on the Colorado River in Glen, Marble, and Grand Canyons will be presented. A detailed method has been developed to interpret and analyze non-survey-grade sidescan sonar data, encoded within an open source software tool developed by the authors. An automated technique to quantify bed texture directly from sidescan sonar imagery is tested against bed sediment observations from underwater video and multibeam sonar. Predictive relationships between known bed sediment observations and bed texture metrics could provide an objective means to quantify bed textures and to relate changes in bed texture to biological components of an aquatic ecosystem, at high temporal frequency, and with minimal logistical effort and cost.

  1. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT

    NASA Astrophysics Data System (ADS)

    O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.

    2018-02-01

    Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criteria for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5  ⩽  N  ⩽  9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. Minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criteria. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25 scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5  ±  4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while achieving mean model residual within 0.5 mm.

  2. Metric Use in the Tool Industry. A Status Report and a Test of Assessment Methodology.

    DTIC Science & Technology

    1982-04-20

    (...Weights and Measures); CIM - Computer-Integrated Manufacturing; CNC - Computer Numerical Control; DOD - Department of Defense; DODISS - DOD Index of... numerically-controlled (CNC) machines that have an inch-millimeter selection switch and a corresponding dual readout scale. ...The use of both metric... satisfactorily met the demands of both domestic and foreign customers for metric machine tools by providing either metric-capable machines or NC and CNC...

  3. Establishing Qualitative Software Metrics in Department of the Navy Programs

    DTIC Science & Technology

    2015-10-29

    ...dedicated to providing the highest quality software to its users. In doing so, there is a need for a formalized set of Software Quality Metrics. The goal... of this paper is to establish the validity of those necessary quality metrics. In our approach we collected the data of over a dozen programs... provide the necessary variable data for our formulas and tested the formulas for validity. Keywords: metrics; software; quality

  4. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA only selects minority isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis, based on three performance metrics (the coverage of two sets, the convergence metric, and the spacing), shows that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.
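
    The selection and proportional-cloning step described above can be sketched roughly as follows. This is only an illustrative reading of the abstract (nondominated front, crowding distance, cloning proportional to crowding distance), not the published NNIA code; objectives are assumed to be minimized and all sizes are placeholders.

        # Rough sketch of nondominated neighbor-based selection with proportional cloning.
        import numpy as np

        def nondominated_mask(F):
            n = len(F)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                        keep[i] = False
                        break
            return keep

        def crowding_distance(F):
            n, m = F.shape
            d = np.zeros(n)
            for k in range(m):
                order = np.argsort(F[:, k])
                d[order[0]] = d[order[-1]] = np.inf
                span = float(F[order[-1], k] - F[order[0], k]) or 1.0
                d[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
            return d

        def nnia_selection(F, n_active=20, clone_pool=100):
            """Pick the most isolated nondominated individuals and assign clone counts."""
            front = np.where(nondominated_mask(F))[0]
            d = crowding_distance(F[front])
            active = front[np.argsort(-d)][:n_active]           # least crowded first
            da = crowding_distance(F[active])
            finite = da[np.isfinite(da)]
            cap = 2 * finite.max() if finite.size else 1.0      # stand-in value for boundary points
            da = np.where(np.isfinite(da), da, cap)
            clones = np.maximum(1, np.round(clone_pool * da / da.sum())).astype(int)
            return active, clones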

  5. Lunar, Cislunar, Near/Farside Laser Retroreflectors for the Accurate: Positioning of Landers/Rovers/Hoppers/Orbiters, Commercial Georeferencing, Test of Relativistic Gravity, and Metrics of the Lunar Interior

    NASA Astrophysics Data System (ADS)

    Dell'Agnello, S.; Currie, D.; Ciocci, E.; Contessa, S.; Delle Monache, G.; March, R.; Martini, M.; Mondaini, C.; Porcelli, L.; Salvatori, L.; Tibuzzi, M.; Bianco, G.; Vittori, R.; Chandler, J.; Murphy, T.; Maiello, M.; Petrassi, M.; Lomastro, A.

    2017-10-01

    We developed next-generation lunar, cislunar, near/farside laser retroreflectors for the improved/accurate: Positioning of landers/rovers/hoppers/orbiters, commercial georeferencing, test of relativistic gravity, and metrics of the lunar interior.

  6. Relationships among exceedences of metals criteria, the results of ambient bioassays, and community metrics in mining-impacted streams.

    PubMed

    Griffith, Michael B; Lazorchak, James M; Herlihy, Alan T

    2004-07-01

    If bioassessments are to help diagnose the specific environmental stressors affecting streams, a better understanding is needed of the relationships between community metrics and ambient criteria or ambient bioassays. However, this relationship is not simple, because metrics assess responses at the community level of biological organization, while ambient criteria and ambient bioassays assess or are based on responses at the individual level. For metals, the relationship is further complicated by the influence of other chemical variables, such as hardness, on their bioavailability and toxicity. In 1993 and 1994, U.S. Environmental Protection Agency (U.S. EPA) conducted a Regional Environmental Monitoring and Assessment Program (REMAP) survey on wadeable streams in Colorado's (USA) Southern Rockies Ecoregion. In this ecoregion, mining over the past century has resulted in metals contamination of streams. The surveys collected data on fish and macroinvertebrate assemblages, physical habitat, and sediment and water chemistry and toxicity. These data provide a framework for assessing diagnostic community metrics for specific environmental stressors. We characterized streams as metals-affected based on exceedence of hardness-adjusted criteria for cadmium, copper, lead, and zinc in water; on water toxicity tests (48-h Pimephales promelas and Ceriodaphnia dubia survival); on exceedence of sediment threshold effect levels (TELs); or on sediment toxicity tests (7-d Hyalella azteca survival and growth). Macroinvertebrate and fish metrics were compared among affected and unaffected sites to identify metrics sensitive to metals. Several macroinvertebrate metrics, particularly richness metrics, were less in affected streams, while other metrics were not. This is a function of the sensitivity of the individual metrics to metals effects. Fish metrics were less sensitive to metals because of the low diversity of fish in these streams.

  7. Cost-Effectiveness of Opt-Out Chlamydia Testing for High-Risk Young Women in the U.S.

    PubMed

    Owusu-Edusei, Kwame; Hoover, Karen W; Gift, Thomas L

    2016-08-01

    In spite of chlamydia screening recommendations, U.S. testing coverage continues to be low. This study explored the cost-effectiveness of a patient-directed, universal, opportunistic Opt-Out Testing strategy (based on insurance coverage, healthcare utilization, and test acceptance probabilities) for all women aged 15-24 years compared with current Risk-Based Screening (30% coverage) from a societal perspective. Based on insurance coverage (80%); healthcare utilization (83%); and test acceptance (75%), the proposed Opt-Out Testing strategy would have an expected annual testing coverage of approximately 50% for sexually active women aged 15-24 years. A basic compartmental heterosexual transmission model was developed to account for population-level transmission dynamics. Two groups were assumed based on self-reported sexual activity. All model parameters were obtained from the literature. Costs and benefits were tracked over a 50-year period. The relative sensitivity of the estimated incremental cost-effectiveness ratios to the variables/parameters was determined. This study was conducted in 2014-2015. Based on the model, the Opt-Out Testing strategy decreased the overall chlamydia prevalence by >55% (2.7% to 1.2%). The Opt-Out Testing strategy was cost saving compared with the current Risk-Based Screening strategy. The estimated incremental cost-effectiveness ratio was most sensitive to the female pre-opt out prevalence, followed by the probability of female sequelae and discount rate. The proposed Opt-Out Testing strategy was cost saving, improving health outcomes at a lower net cost than current testing. However, testing gaps would remain because many women might not have health insurance coverage, or not utilize health care. Published by Elsevier Inc.
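
    The roughly 50% expected testing coverage quoted above appears to follow from multiplying the three stated probabilities; a one-line check (illustrative only, not the study's transmission model):

        # Expected coverage under the Opt-Out Testing strategy (product of the three probabilities).
        insurance, utilization, acceptance = 0.80, 0.83, 0.75
        print(insurance * utilization * acceptance)  # 0.498 -> approximately 50%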

  8. Dynamics Change of Vegetated Lands in A Highway Corridor during 37 Years (Case study of Jagorawi Toll Road, Jakarta-Bogor)

    NASA Astrophysics Data System (ADS)

    Perdana, B. P.; Setiawan, Y.; Prasetyo, L. B.

    2018-02-01

    Recently, highway development has been required as a liaison between regions to support regional economic development. Although the availability of highways has positive impacts, it also has negative impacts, especially related to changes in vegetated lands. This study aims to determine the change of vegetation coverage in the Jagorawi corridor Jakarta-Bogor during 37 years, and to analyze landscape patterns in the corridor based on distance from Jakarta to Bogor. In this study, we used a long series of Landsat images taken by Landsat 2 MSS (1978), Landsat 5 TM (1988, 1995, and 2005) and Landsat 8 OLI/TIRS (2015). Analysis of landscape metrics was conducted through a patch analysis approach to determine the change of landscape patterns in the Jagorawi corridor Jakarta-Bogor. The landscape metric parameters used are Number of Patches (NumP), Mean Patch Size (MPS), Mean Shape Index (MSI), and Edge Density (ED). These parameters provide information on the structural elements of the landscape and their composition and spatial distribution in the corridor. The results indicated that vegetation coverage in the Jagorawi corridor Jakarta-Bogor decreased by about 48% over 35 years. Moreover, the NumP value increased and the MPS value decreased, indicating a higher level of fragmentation with patch sizes becoming smaller. Meanwhile, the increase in the ED parameter indicates that vegetated land is being degraded annually. The MSI parameter decreased every year, which also indicates degradation of the vegetated land; the declining MSI value is thus associated with ongoing land degradation.
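
    For reference, two of the patch metrics named above (NumP and MPS), together with a simple edge-density estimate, can be sketched from a binary vegetation raster as follows. This is an assumed illustration using scipy, not the patch analysis workflow used in the study; the tiny example raster is invented.

        # Illustrative patch metrics from a binary vegetation raster.
        import numpy as np
        from scipy import ndimage

        def patch_metrics(veg, cell_area=1.0, cell_size=1.0):
            labels, nump = ndimage.label(veg)                          # NumP
            sizes = ndimage.sum(veg, labels, index=range(1, nump + 1))
            mps = float(np.mean(sizes)) * cell_area if nump else 0.0   # MPS
            # simple edge length: faces between vegetated and non-vegetated cells
            v = veg.astype(int)
            edges = (np.abs(np.diff(v, axis=0)).sum() + np.abs(np.diff(v, axis=1)).sum()) * cell_size
            ed = edges / (veg.size * cell_area)                        # ED (edge per unit area)
            return nump, mps, ed

        veg = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 1]], dtype=bool)
        print(patch_metrics(veg))  # 2 patches, mean size 3 cells, ED ~0.67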

  9. Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.

    2017-01-01

    With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.

  10. Metrical assessment of cutmarks on bone: is size important?

    PubMed

    Cerutti, E; Magli, F; Porta, D; Gibelli, D; Cattaneo, C

    2014-07-01

    Extrapolating the type of blade from a bone lesion has always been a challenge for forensic anthropologists: the literature has mainly focused on the morphological characteristics of sharp force lesions, whereas scarce indications are available concerning the metrical assessment of cut marks and their correlation with the size of the blade. The present study aims at verifying whether it is possible to reconstruct the metrical characteristics of the blade from measurements taken from the lesion. Eleven blades with different thickness, height and shape were used for this study. A metallic structure was built in order to simulate incised wounds and repeat hits with the same energy. Perpendicular and angled tests were performed on fragments of pig femurs, in order to produce 110 lesions (10 for each blade). Depth, height and angle were measured and compared with the metrical characteristics of each blade. Results showed a wide superimposition of the width and angle of lesions regardless of the type and orientation of the blade: for symmetric blades a high correlation index was observed between the depth of the lesion and the angle of the blade in perpendicular tests (0.89) and between the angle of the lesion and the height of the blade in angled tests (-0.76); for asymmetric blades, in both tests a high correlation was observed between the angle of the blade and the angle and width of the lesion (respectively 0.90 and 0.76 for perpendicular tests, and 0.80 and 0.90 for angled ones). This study provides interesting data concerning the interpretation of cutmarks on bone and suggests caution in assessing the size of weapons from the metrical measurements of lesions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Metrics for Analyzing Quantifiable Differentiation of Designs with Varying Integrity for Hardware Assurance

    DTIC Science & Technology

    2017-03-01

    ...proposed. Expected profiles can incorporate a level of overdesign. Finally, the Design Integrity measuring techniques are applied to five Test Article... designs. Table 2 presents the results of the analysis applied to each of the test article designs. Each of the domains... the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Articles...

  12. Test Methods for Robot Agility in Manufacturing

    PubMed Central

    Downs, Anthony; Harrison, William; Schlenoff, Craig

    2017-01-01

    Purpose: The paper aims to define and describe test methods and metrics to assess industrial robot system agility in both simulation and reality. Design/methodology/approach: The paper describes test methods and associated quantitative and qualitative metrics for assessing robot system efficiency and effectiveness, which can then be used for the assessment of system agility. Findings: The paper describes how the test methods were implemented in a simulation environment and a real-world environment. It also shows how the metrics are measured and assessed as they would be in a future competition. Practical implications: The test methods described in this paper will push forward the state of the art in software agility for manufacturing robots, allowing small and medium manufacturers to better utilize robotic systems. Originality/value: The paper fulfills the identified need for standard test methods to measure and allow for improvement in software agility for manufacturing robots. PMID:28203034

  13. An Evaluation of the IntelliMetric[SM] Essay Scoring System

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine

    2006-01-01

    This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…

  14. Feasibility of Turing-Style Tests for Autonomous Aerial Vehicle "Intelligence"

    NASA Technical Reports Server (NTRS)

    Young, Larry A.

    2007-01-01

    A new approach is suggested to define and evaluate key metrics of autonomous aerial vehicle performance. This approach entails the conceptual definition of a "Turing Test" for UAVs. Such a "UAV Turing test" would be conducted by means of mission simulations and/or tailored flight demonstrations of vehicles under the guidance of their autonomous system software. These autonomous vehicle mission simulations and flight demonstrations would also have to be benchmarked against missions "flown" with pilots/human-operators in the loop. In turn, scoring criteria for such testing could be based both upon quantitative mission success metrics (unique to each mission) and upon analog "handling quality" metrics similar to the well-known Cooper-Harper pilot ratings used for manned aircraft. Autonomous aerial vehicles would be considered to have successfully passed this "UAV Turing Test" if the aggregate mission success metrics and handling qualities for the autonomous aerial vehicle matched or exceeded the equivalent metrics for missions conducted with pilots/human-operators in the loop. Alternatively, an independent, knowledgeable observer could provide the "UAV Turing Test" ratings of whether a vehicle is autonomous or "piloted." This observer ideally would, in the more sophisticated mission simulations, also have the enhanced capability of being able to override the scripted mission scenario and instigate failure modes and changes of flight profile/plans. If a majority of mission tasks are rated as "piloted" by the observer, when in reality the vehicle/simulation is fully- or semi-autonomously controlled, then the vehicle/simulation "passes" the "UAV Turing Test." In this regard, this second "UAV Turing Test" approach is more consistent with Turing's original "imitation game" proposal. The overall feasibility, and important considerations and limitations, of such an approach for judging/evaluating autonomous aerial vehicle "intelligence" will be discussed from a theoretical perspective.

  15. Digital geologic map database of the Nevada Test Site area, Nevada

    USGS Publications Warehouse

    Wahl, R.R.; Sawyer, D.A.; Minor, S.A.; Carr, M.D.; Cole, J.C.; Swadley, W.C.; Laczniak, R.J.; Warren, R.G.; Green, K.S.; Engle, C.M.

    1997-01-01

    Forty years of geologic investigations at the Nevada Test Site (NTS) have been digitized. All geologic information that (1) has been collected and (2) can be represented on a map within the map borders at the map scale is included in the digital coverages. The following coverages are included with this dataset:

      Coverage   Type     Description
      geolpoly   Polygon  Geologic outcrops
      geolflts   line     Fault traces
      geolatts   Point    Bedding attitudes, etc.
      geolcald   line     Caldera boundaries
      geollins   line     Interpreted lineaments
      geolmeta   line     Metamorphic gradients

    The above coverages are attributed with numeric values and interpreted information. The entity files documented below show the data associated with each coverage.

  16. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B  = βB,A ), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim , performed well overall. Only βBaselga R turn , βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
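
    As a concrete reference for one of the better-performing presence-absence metrics mentioned above, βsim (Simpson-based turnover) is commonly written as min(b, c) / (a + min(b, c)), where a is the number of shared species and b, c are the numbers unique to each site. A small sketch follows; the species names are invented and the abundance-based metrics compared in the paper are not reproduced.

        # beta_sim: presence-absence turnover that is insensitive to nestedness.
        def beta_sim(site1, site2):
            site1, site2 = set(site1), set(site2)
            a = len(site1 & site2)          # shared species
            b = len(site1 - site2)          # unique to site 1
            c = len(site2 - site1)          # unique to site 2
            denom = min(b, c) + a
            return min(b, c) / denom if denom else 0.0

        print(beta_sim({"sp1", "sp2", "sp3"}, {"sp2", "sp3", "sp4", "sp5"}))  # 0.333...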

  17. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  18. Testing of the Apollo 15 Metric Camera System.

    NASA Technical Reports Server (NTRS)

    Helmering, R. J.; Alspaugh, D. H.

    1972-01-01

    Description of tests conducted (1) to assess the quality of Apollo 15 Metric Camera System data and (2) to develop production procedures for total block reduction. Three strips of metric photography over the Hadley Rille area were selected for the tests. These photographs were utilized in a series of evaluation tests culminating in an orbitally constrained block triangulation solution. Results show that film deformations up to 25 and 5 microns are present in the mapping and stellar materials, respectively. Stellar reductions can provide mapping camera orientations with an accuracy that is consistent with the accuracies of other parameters in the triangulation solutions. Pointing accuracies of 4 to 10 microns can be expected for the mapping camera materials, depending on variations in resolution caused by changing sun angle conditions.

  19. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
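
    A small sketch of the "greater of the two standard deviations" rule described above, for an ordinary least-squares candidate model: the PRESS residuals use the standard leave-one-out identity e_i / (1 - h_ii), the balance-calibration specifics are not reproduced, and all array names are assumptions.

        # New search metric: max of (std of PRESS residuals, std of confirmation residuals).
        import numpy as np

        def search_metric(X_fit, y_fit, X_conf, y_conf):
            beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
            resid = y_fit - X_fit @ beta
            hat = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T   # hat matrix
            press = resid / (1.0 - np.diag(hat))                      # leave-one-out residuals
            conf_resid = y_conf - X_conf @ beta                       # confirmation points
            return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))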

  20. Watchdog activity monitor (WAM) for use with high coverage processor self-test

    NASA Technical Reports Server (NTRS)

    Tulpule, Bhalchandra R. (Inventor); Crosset, III, Richard W. (Inventor); Versailles, Richard E. (Inventor)

    1988-01-01

    A high fault coverage, instruction modeled self-test for a signal processor in a user environment is disclosed. The self-test executes a sequence of sub-tests and issues a state transition signal upon the execution of each sub-test. The self-test may be combined with a watchdog activity monitor (WAM) which provides a test-failure signal in the presence of a counted number of state transitions not agreeing with an expected number. An independent measure of time may be provided in the WAM to increase fault coverage by checking the processor's clock. Additionally, redundant processor systems are protected from inadvertent unsevering of a severed processor using a unique unsever arming technique and apparatus.
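
    A toy software analogue of the monitor described above (the hardware design in the patent is not reproduced): the self-test signals a state transition after each sub-test, and the watchdog flags a failure when the transition count disagrees with the expected number or an independently measured time budget is exceeded. All names and values are illustrative.

        # Toy watchdog activity monitor for a sub-test sequence.
        import time

        class WatchdogActivityMonitor:
            def __init__(self, expected_transitions, time_budget_s):
                self.expected = expected_transitions
                self.budget = time_budget_s
                self.count = 0
                self.start = time.monotonic()

            def state_transition(self):
                self.count += 1          # called by the self-test after each sub-test

            def check(self):
                elapsed = time.monotonic() - self.start
                ok = (self.count == self.expected) and (elapsed <= self.budget)
                return "PASS" if ok else "TEST FAILURE"

        wam = WatchdogActivityMonitor(expected_transitions=3, time_budget_s=1.0)
        for _ in range(3):
            wam.state_transition()       # stand-in for executing a sub-test
        print(wam.check())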

  1. A Comparison of Linking and Concurrent Calibration under the Graded Response Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…

  2. RELATIONSHIPS AMONG EXCEEDENCES OF CHEMICAL CRITERIA OR GUIDELINES, THE RESULTS OF AMBIENT TOXICITY TESTS AND COMMUNITY METRICS IN AQUATIC ECOSYSTEMS (FINAL)

    EPA Science Inventory

    The EPA document, Relationships Among Exceedances of Chemical Criteria or Guidelines, the Results of Ambient Toxicity Tests, and Community Metrics in Aquatic Ecosystems, presents two studies where the three general approaches for the ecological assessment of contaminant ex...

  3. Field Testing Vocational Education Metric Modules. Final Report.

    ERIC Educational Resources Information Center

    Oldsen, Carl F.

    A project was conducted for the following purposes: (1) to develop a workshop training package to prepare vocational education teachers to use vocational subject-specific modules; (2) to train those teachers to use the workshop package; (3) to conduct field tests of the metric modules with experimental and control groups; (4) to analyze, describe,…

  4. Index of cyber integrity

    NASA Astrophysics Data System (ADS)

    Anderson, Gustave

    2014-05-01

    Unfortunately, there is no metric, nor set of metrics, that are both general enough to encompass all possible types of applications yet specific enough to capture the application and attack specific details. As a result we are left with ad-hoc methods for generating evaluations of the security of our systems. Current state of the art methods for evaluating the security of systems include penetration testing and cyber evaluation tests. For these evaluations, security professionals simulate an attack from malicious outsiders and malicious insiders. These evaluations are very productive and are able to discover potential vulnerabilities resulting from improper system configuration, hardware and software flaws, or operational weaknesses. We therefore propose the index of cyber integrity (ICI), which is modeled after the index of biological integrity (IBI) to provide a holistic measure of the health of a system under test in a cyber-environment. The ICI provides a broad base measure through a collection of application and system specific metrics. In this paper, following the example of the IBI, we demonstrate how a multi-metric index may be used as a holistic measure of the health of a system under test in a cyber-environment.

  5. Development of Cardiovascular and Neurodevelopmental Metrics as Sublethal Endpoints for the Fish Embryo Toxicity Test.

    PubMed

    Krzykwa, Julie C; Olivas, Alexis; Jeffries, Marlo K Sellin

    2018-06-19

    The fathead minnow fish embryo toxicity (FET) test has been proposed as a more humane alternative to current toxicity testing methods, as younger organisms are thought to experience less distress during toxicant exposure. However, the FET test protocol does not include endpoints that allow for the prediction of sublethal adverse outcomes, limiting its utility relative to other test types. Researchers have proposed the development of sublethal endpoints for the FET test to increase its utility. The present study 1) developed methods for previously unmeasured sublethal metrics in fathead minnows (i.e., spontaneous contraction frequency and heart rate) and 2) investigated the responsiveness of several sublethal endpoints related to growth (wet weight, length, and growth-related gene expression), neurodevelopment (spontaneous contraction frequency and neurodevelopmental gene expression), and cardiovascular function and development (pericardial area, eye size and cardiovascular-related gene expression) as additional FET test metrics using the model toxicant 3,4-dichloroaniline. Of the growth, neurological and cardiovascular endpoints measured, length, eye size and pericardial area were found to be more responsive than the other endpoints, respectively. Future studies linking alterations in these endpoints to longer-term adverse impacts are needed to fully evaluate the predictive power of these metrics in chemical and whole effluent toxicity testing. This article is protected by copyright. All rights reserved.

  6. Predicting the impact of insecticide-treated bed nets on malaria transmission: the devil is in the detail.

    PubMed

    Gu, Weidong; Novak, Robert J

    2009-11-16

    Insecticide-treated bed nets (ITNs), including long-lasting insecticidal nets (LLINs), play a primary role in global campaigns to roll back malaria in tropical Africa. Effectiveness of treated nets depends on direct impacts on individual mosquitoes, including killing and excite-repellency, which vary considerably among vector species due to variations in host-seeking behaviours. While monitoring and evaluation programmes of ITNs have focused on morbidity and all-cause mortality in humans, the local entomological context receives little attention. Without knowing the dynamics of local vector species and their responses to treated nets, it is difficult to predict clinical outcomes when ITN applications are scaled up across the African continent. Sound model frameworks incorporating the intricate interactions between mosquitoes and treated nets are needed to develop the predictive capacity for scale-up applications of ITNs. An established agent-based model was extended to incorporate the direct outcomes, e.g. killing and avoidance, of individual mosquitoes exposed to ITNs in a hypothetical village setting with 50 houses and 90 aquatic habitats. Individual mosquitoes were tracked throughout the life cycle across the landscape. Four levels of coverage, i.e. 40, 60, 80 and 100%, were applied at the household level with treated houses having only one bed net. By using a Latin hypercube sampling scheme, parameters governing killing, diverting and personal protection of net users were evaluated for their relative roles in containing mosquito populations, entomological inoculation rates (EIRs) and malaria incidence. There were substantial gaps in coverage between households and individual persons, and 100% household coverage resulted in circa 50% coverage of the population. The results show that applications of ITNs could give rise to varying impacts on population-level metrics depending on the values of parameters governing interactions of mosquitoes and treated nets at the individual level. The most significant factor in determining effectiveness was the killing capability of treated nets. A strong excito-repellent effect of impregnated nets might lead to higher risk exposure for non-bed net users. Given the variability of vector mosquitoes in host-seeking behaviours and their responses to treated nets, it is anticipated that scale-up applications of ITNs might produce varying degrees of success depending on local entomological and epidemiological contexts. This study highlights that increased ITN coverage led to significant reduction in risk exposure and malaria incidence only when treated nets yielded high killing effects. It is necessary to test the efficacy of treated nets on local dominant vector mosquitoes, at least in the laboratory, for monitoring and evaluation of ITN programmes.
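
    The Latin hypercube parameter sweep mentioned above can be sketched as follows; the parameter names, ranges, and the stubbed-out simulation call are assumptions for illustration, not the published agent-based model.

        # Latin hypercube sampling over killing, diverting and personal-protection parameters.
        from scipy.stats import qmc

        params = ["p_kill", "p_divert", "p_personal_protection"]   # illustrative names
        lower, upper = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]

        sampler = qmc.LatinHypercube(d=len(params), seed=42)
        samples = qmc.scale(sampler.random(n=100), lower, upper)

        for row in samples[:3]:
            scenario = dict(zip(params, row))
            # run_abm(scenario)  # hypothetical stand-in for the agent-based transmission model
            print(scenario)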

  7. Literacy is Just Reading and Writing, Isn't It? The Ontario Secondary School Literacy Test and Its Press Coverage

    ERIC Educational Resources Information Center

    Pinto, Laura; Boler, Megan; Norris, Trevor

    2007-01-01

    This article examines how the public discourse of print news media defines and shapes the representation of the Ontario Secondary School Literacy Test (OSSLT) based on coverage in three primary newspapers between 1998 and 2004. The data were analysed using qualitative and quantitative measures to identify types of coverage, themes, and…

  8. Spatial Coverage Planning and Optimization for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Gaines, Daniel M.; Estlin, Tara; Chouinard, Caroline

    2008-01-01

    We are developing onboard planning and scheduling technology to enable in situ robotic explorers, such as rovers and aerobots, to more effectively assist scientists in planetary exploration. In our current work, we are focusing on situations in which the robot is exploring large geographical features such as craters, channels or regional boundaries. In order to develop valid and high-quality plans, the robot must take into account a range of scientific and engineering constraints and preferences. We have developed a system that incorporates multiobjective optimization and planning allowing the robot to generate high quality mission operations plans that respect resource limitations and mission constraints while attempting to maximize science and engineering objectives. An important scientific objective for the exploration of geological features is selecting observations that spatially cover an area of interest. We have developed a metric to enable an in situ explorer to reason about and track the spatial coverage quality of a plan. We describe this technique and show how it is combined in the overall multiobjective optimization and planning algorithm.

  9. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
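
    The regression step described in Section 5.3 can be illustrated with a minimal logistic model relating metrics to error probability; the two metric columns and the toy data below are invented, and this is not the book's actual data or metric set.

        # Minimal logistic regression of error occurrence on model metrics.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # columns: e.g. model size, connector count (invented); y = 1 if the EPC contains an error
        X = np.array([[21, 4], [48, 11], [13, 2], [65, 17], [34, 6], [52, 14]])
        y = np.array([0, 1, 0, 1, 0, 1])

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba(np.array([[40, 9]]))[:, 1])  # predicted error probability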

  10. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
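
    A toy illustration of why the choice matters (numbers invented): the geometric mean rewards one extremely fast query far more strongly than the arithmetic mean does, which is one reason it was debated for the single-stream test.

        # Arithmetic vs. geometric mean of per-query times (lower is better).
        from math import prod

        times_uniform = [10, 10, 10, 10]      # four ordinary queries
        times_outlier = [0.1, 13, 13, 13]     # one extremely fast query

        for t in (times_uniform, times_outlier):
            arith = sum(t) / len(t)
            geo = prod(t) ** (1 / len(t))
            print(f"arithmetic={arith:.2f}  geometric={geo:.2f}")
        # arithmetic: 10.00 vs 9.78 (nearly unchanged); geometric: 10.00 vs ~3.85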

  11. Circular geodesics of naked singularities in the Kehagias-Sfetsos metric of Hořava's gravity

    NASA Astrophysics Data System (ADS)

    Vieira, Ronaldo S. S.; Schee, Jan; Kluźniak, Włodek; Stuchlík, Zdeněk; Abramowicz, Marek

    2014-07-01

    We discuss photon and test-particle orbits in the Kehagias-Sfetsos (KS) metric of Hořava's gravity. For any value of the Hořava parameter ω, there are values of the gravitational mass M for which the metric describes a naked singularity, and this is always accompanied by a vacuum "antigravity sphere" on whose surface a test particle can remain at rest (in a zero angular momentum geodesic), and inside which no circular geodesics exist. The observational appearance of an accreting KS naked singularity in a binary system would be that of a quasistatic spherical fluid shell surrounded by an accretion disk, whose properties depend on the value of M, but are always very different from accretion disks familiar from the Kerr-metric solutions. The properties of the corresponding circular orbits are qualitatively similar to those of the Reissner-Nordström naked singularities. When event horizons are present, the orbits outside the Kehagias-Sfetsos black hole are qualitatively similar to those of the Schwarzschild metric.
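
    For reference, the Kehagias-Sfetsos lapse function is commonly quoted in the form below, with horizons existing only when ωM² ≥ 1/2, so that smaller ωM² gives the naked-singularity case discussed above; this is included only as a reminder of the commonly cited form, not as a derivation from the paper.

        % Commonly quoted Kehagias-Sfetsos line element and lapse function
        ds^{2} = -f(r)\,dt^{2} + \frac{dr^{2}}{f(r)} + r^{2}\,d\Omega^{2},
        \qquad
        f(r) = 1 + \omega r^{2}\left(1 - \sqrt{1 + \frac{4M}{\omega r^{3}}}\right).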

  12. Determining optimal parameters of the self-referent encoding task: A large-scale examination of self-referent cognition and depression.

    PubMed

    Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G

    2018-06-07

    Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. The Application of Time-Frequency Methods to HUMS

    NASA Technical Reports Server (NTRS)

    Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper reports the study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods to be described and compared are the Short Time Frequency Transform (STFT), the Choi-Williams Distribution (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.
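
    Of the four transforms compared above, the STFT is the simplest to reproduce; the sketch below computes a time-frequency energy map for a synthetic vibration signal with scipy. The gear-fault metric built on top of it is not shown, and the signal parameters are invented.

        # Short-time Fourier transform of a synthetic vibration signal.
        import numpy as np
        from scipy.signal import stft

        fs = 20_000                                   # sample rate in Hz (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        vib = np.sin(2 * np.pi * 700 * t) + 0.1 * np.random.randn(t.size)

        freqs, seg_times, Z = stft(vib, fs=fs, nperseg=1024)
        power = np.abs(Z) ** 2                        # time-frequency energy map
        print(power.shape)                            # (frequency bins, time segments)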

  14. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of our proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.

  15. End-to-End Trade-space Analysis for Designing Constellation Missions

    NASA Astrophysics Data System (ADS)

    LeMoigne, J.; Dabney, P.; Foreman, V.; Grogan, P.; Hache, S.; Holland, M. P.; Hughes, S. P.; Nag, S.; Siddiqi, A.

    2017-12-01

    Multipoint measurement missions can provide a significant advancement in science return and this science interest coupled with many recent technological advances are driving a growing trend in exploring distributed architectures for future NASA missions. Distributed Spacecraft Missions (DSMs) leverage multiple spacecraft to achieve one or more common goals. In particular, a constellation is the most general form of DSM with two or more spacecraft placed into specific orbit(s) for the purpose of serving a common objective (e.g., CYGNSS). Because a DSM architectural trade-space includes both monolithic and distributed design variables, DSM optimization is a large and complex problem with multiple conflicting objectives. Over the last two years, our team has been developing a Trade-space Analysis Tool for Constellations (TAT-C), implemented in common programming languages for pre-Phase A constellation mission analysis. By evaluating alternative mission architectures, TAT-C seeks to minimize cost and maximize performance for pre-defined science goals. This presentation will describe the overall architecture of TAT-C including: a User Interface (UI) at several levels of details and user expertise; Trade-space Search Requests that are created from the Science requirements gathered by the UI and validated by a Knowledge Base; a Knowledge Base to compare the current requests to prior mission concepts to potentially prune the trade-space; a Trade-space Search Iterator which, with inputs from the Knowledge Base, and, in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost& Risk modules, generates multiple potential architectures and their associated characteristics. TAT-C leverages the use of the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, modeling orbits to balance accuracy and performance. The current version includes uniform and non-uniform Walker constellations as well as Ad-Hoc and precessing constellations, and its cost model represents an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The current GUI automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost. The end-to-end system will be demonstrated as part of the presentation.

  16. Transferring Error Characteristics of Satellite Rainfall Data from Ground Validation (gauged) into Non-ground Validation (ungauged)

    NASA Astrophysics Data System (ADS)

    Tang, L.; Hossain, F.

    2009-12-01

    Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially when the scheduled Global Precipitation Mission (GPM) plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors which need ground validation (GV) data for characterization, while satellite rainfall data will be most useful in the regions that are lacking in GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to Non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products of 3B41RT and 3B42RT (Huffman et al., 2007) over the US spanning a record of 6 years are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the reference for GV data. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to Non-GV regions, based on a six-year climatologic average of satellite rainfall data assuming only 50% of GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverage. In addition, it is also important to investigate if proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error at sparsely gauged regions. The specific question we ask in this study is, “what is the minimum coverage of GV data required for the error transfer scheme to be implemented at acceptable accuracy at a hydrologically relevant scale?” Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, RMSE, etc. The proper space-time scales at which these metrics can be reasonably transferred are also explored in this study. Keywords: Satellite rainfall, error transfer, spatial interpolation, kriging methods.
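
    Several of the error metrics listed above are standard categorical scores computed from a rain / no-rain contingency table; a small sketch follows (the counts are invented, and the kriging-based spatial transfer itself is not shown).

        # Categorical verification metrics from a contingency table.
        def categorical_metrics(hits, misses, false_alarms, correct_negatives):
            pod_rain = hits / (hits + misses)                     # probability of detection (rain)
            pod_norain = correct_negatives / (correct_negatives + false_alarms)
            far = false_alarms / (hits + false_alarms)            # false alarm ratio
            bias = (hits + false_alarms) / (hits + misses)        # frequency bias
            csi = hits / (hits + misses + false_alarms)           # critical success index
            return dict(POD=pod_rain, POD_norain=pod_norain, FAR=far, BIAS=bias, CSI=csi)

        print(categorical_metrics(hits=820, misses=140, false_alarms=210, correct_negatives=7830))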

  17. End-to-End Trade-Space Analysis for Designing Constellation

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Dabney, Philip; Foreman, Veronica; Grogan, Paul T.; Hache, Sigfried; Holland, Matthew; Hughes, Steven; Nag, Sreeja; Siddiqi, Afreen

    2017-01-01

    Multipoint measurement missions can provide a significant advancement in science return, and this science interest, coupled with many recent technological advances, is driving a growing trend in exploring distributed architectures for future NASA missions. Distributed Spacecraft Missions (DSMs) leverage multiple spacecraft to achieve one or more common goals. In particular, a constellation is the most general form of DSM, with two or more spacecraft placed into specific orbit(s) for the purpose of serving a common objective (e.g., CYGNSS). Because a DSM architectural trade-space includes both monolithic and distributed design variables, DSM optimization is a large and complex problem with multiple conflicting objectives. Over the last two years, our team has been developing a Trade-space Analysis Tool for Constellations (TAT-C), implemented in common programming languages for pre-Phase A constellation mission analysis. By evaluating alternative mission architectures, TAT-C seeks to minimize cost and maximize performance for pre-defined science goals. This presentation will describe the overall architecture of TAT-C, including: a User Interface (UI) at several levels of detail and user expertise; Trade-space Search Requests that are created from the Science requirements gathered by the UI and validated by a Knowledge Base; a Knowledge Base to compare the current requests to prior mission concepts to potentially prune the trade-space; and a Trade-space Search Iterator which, with inputs from the Knowledge Base and in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generates multiple potential architectures and their associated characteristics. TAT-C leverages the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, modeling orbits to balance accuracy and performance. The current version includes uniform and non-uniform Walker constellations as well as Ad-Hoc and precessing constellations, and its cost model is an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The current GUI automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost. The end-to-end system will be demonstrated as part of the presentation.

  18. How Important Is a Reproducible Breath Hold for Deep Inspiration Breath Hold Breast Radiation Therapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiant, David, E-mail: David.wiant@conehealth.com; Wentworth, Stacy; Liu, Han

    Purpose: Deep inspiration breath hold (DIBH) for left-sided breast cancer has been shown to reduce heart dose. Surface imaging helps to ensure accurate breast positioning, but it does not guarantee a reproducible breath hold (BH) at DIBH treatments. We examine the effects of variable BH positions for DIBH treatments. Methods and Materials: Twenty-five patients who underwent free breathing (FB) and DIBH scans were reviewed. Four plans were created for each patient: FB, DIBH, FB-DIBH (the DIBH plans were copied to the FB images and recalculated, and image registration was based on breast tissue), and P-DIBH (a partial BH with the heart shifted midway between the FB and DIBH positions). The FB-DIBH plans give a “worst-case” scenario for surface imaging DIBH, where the breast is aligned by surface imaging but the patient is not holding their breath. Kolmogorov-Smirnov tests were used to compare the dose metrics. Results: The DIBH plans gave lower heart dose and comparable breast coverage versus FB in all cases. The FB-DIBH plans showed no significant difference versus FB plans for breast coverage, mean heart dose, or maximum heart dose (P≥.10). The mean heart dose differed between FB-DIBH and FB by <2 Gy for all cases, and the maximum heart dose differed by <2 Gy for 21 cases. The P-DIBH plans showed significantly lower mean heart dose than FB (P<.01). The mean heart doses for the P-DIBH plans were ...

  19. Testing general relativity's no-hair theorem with x-ray observations of black holes

    NASA Astrophysics Data System (ADS)

    Hoormann, Janie K.; Beheshtipour, Banafsheh; Krawczynski, Henric

    2016-02-01

    Despite its success in the weak gravity regime, general relativity (GR) has yet to be verified in the regime of strong gravity. In this paper, we present the results of detailed ray-tracing simulations aimed at clarifying whether the combined information from x-ray spectroscopy, timing, and polarization observations of stellar mass and supermassive black holes can be used to test GR's no-hair theorem. The latter states that stationary astrophysical black holes are described by the Kerr family of metrics, with the black hole mass and spin being the only free parameters. We use four "non-Kerr metrics," some phenomenological in nature and others motivated by alternative theories of gravity, and study the observational signatures of deviations from the Kerr metric. Particular attention is given to the case when all the metrics are set to give the same innermost stable circular orbit in quasi-Boyer-Lindquist coordinates. We give a detailed discussion of similarities and differences of the observational signatures predicted for black holes in the Kerr metric and the non-Kerr metrics. We emphasize that although some regions of the parameter space remain nearly degenerate even when combining the information from all observational channels, x-ray observations of very rapidly spinning black holes, such as Cyg X-1, can be used to exclude large regions of the parameter space for several of the alternative metrics.

  20. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
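    The distinction between the metric and topological interaction rules referenced above can be made concrete with a small sketch. The visual-range cut-off, interaction radius, and k value below are illustrative assumptions, not the parameters fitted in the study.

```python
# Hedged sketch: metric vs. topological neighbour selection for one focal
# individual, with a crude visual-range cut-off standing in for the sensory
# constraints discussed above (all parameters are illustrative).
import numpy as np

rng = np.random.default_rng(3)
positions = rng.uniform(0, 20, size=(30, 2))   # 30 individuals in a 20 x 20 arena
focal = 0
dists = np.linalg.norm(positions - positions[focal], axis=1)
dists[focal] = np.inf                          # an individual is not its own neighbour

visual_range = 8.0                             # perceptual cut-off (assumed units)
perceivable = np.where(dists <= visual_range)[0]

# Metric rule: interact with every perceivable individual within a fixed radius.
metric_radius = 5.0
metric_neighbours = perceivable[dists[perceivable] <= metric_radius]

# Topological rule: interact with the k nearest perceivable individuals.
k = 6
topological_neighbours = perceivable[np.argsort(dists[perceivable])[:k]]

print("metric neighbours:     ", sorted(metric_neighbours.tolist()))
print("topological neighbours:", sorted(topological_neighbours.tolist()))
```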

  1. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
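    As an illustration of the statistical comparison described above, the sketch below runs a Friedman test (and one possible post-hoc pass) on hypothetical per-database correlation coefficients. The metric names are taken from the abstract, but the numbers and the choice of post-hoc procedure are assumptions.

```python
# Hedged sketch: compare per-database correlation coefficients of four
# perceptual metrics with a Friedman test (scipy), then a simple post-hoc pass.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Rows: six image databases; columns: correlation with MOS for four metrics
# (illustrative numbers only, not the paper's results).
scores = np.array([
    [0.91, 0.92, 0.90, 0.93],
    [0.88, 0.89, 0.87, 0.90],
    [0.93, 0.94, 0.92, 0.95],
    [0.90, 0.91, 0.89, 0.92],
    [0.89, 0.90, 0.88, 0.91],
    [0.92, 0.93, 0.91, 0.94],
])

stat, p = friedmanchisquare(*scores.T)  # one array of scores per metric
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")

# One possible post-hoc pass: pairwise Wilcoxon signed-rank tests with a
# Bonferroni correction.
names = ["DSCSI", "MDSIs", "MDSIm", "HPSI"]
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
for i, j in pairs:
    _, pw = wilcoxon(scores[:, i], scores[:, j])
    print(f"{names[i]} vs {names[j]}: corrected p = {min(pw * len(pairs), 1.0):.3f}")
```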

  2. EXAMINING EVIDENCE IN U.S. PAYER COVERAGE POLICIES FOR MULTI-GENE PANELS AND SEQUENCING TESTS

    PubMed Central

    Chambers, James D.; Saret, Cayla J.; Anderson, Jordan E.; Deverka, Patricia A.; Douglas, Michael P.; Phillips, Kathryn A.

    2017-01-01

    Objectives The aim of this study was to examine the evidence payers cited in their coverage policies for multi-gene panels and sequencing tests (panels), and to compare these findings with the evidence payers cited in their coverage policies for other types of medical interventions. Methods We used the University of California at San Francisco TRANSPERS Payer Coverage Registry to identify coverage policies for panels issued by five of the largest US private payers. We reviewed each policy and categorized the evidence cited within as: clinical studies, systematic reviews, technology assessments, cost-effectiveness analyses (CEAs), budget impact studies, and clinical guidelines. We compared the evidence cited in these coverage policies for panels with the evidence cited in policies for other intervention types (pharmaceuticals, medical devices, diagnostic tests and imaging, and surgical interventions) as reported in a previous study. Results Fifty-five coverage policies for panels were included. On average, payers cited clinical guidelines in 84 percent of their coverage policies (range, 73–100 percent), clinical studies in 69 percent (50–87 percent), technology assessments 47 percent (33–86 percent), systematic reviews or meta-analyses 31 percent (7–71 percent), and CEAs 5 percent (0–7 percent). No payers cited budget impact studies in their policies. Payers less often cited clinical studies, systematic reviews, technology assessments, and CEAs in their coverage policies for panels than in their policies for other intervention types. Payers cited clinical guidelines in a comparable proportion of policies for panels and other technology types. Conclusions Payers in our sample less often cited clinical studies and other evidence types in their coverage policies for panels than they did in their coverage policies for other types of medical interventions. PMID:29065945

  3. EXAMINING EVIDENCE IN U.S. PAYER COVERAGE POLICIES FOR MULTI-GENE PANELS AND SEQUENCING TESTS.

    PubMed

    Chambers, James D; Saret, Cayla J; Anderson, Jordan E; Deverka, Patricia A; Douglas, Michael P; Phillips, Kathryn A

    2017-01-01

    The aim of this study was to examine the evidence payers cited in their coverage policies for multi-gene panels and sequencing tests (panels), and to compare these findings with the evidence payers cited in their coverage policies for other types of medical interventions. We used the University of California at San Francisco TRANSPERS Payer Coverage Registry to identify coverage policies for panels issued by five of the largest US private payers. We reviewed each policy and categorized the evidence cited within as: clinical studies, systematic reviews, technology assessments, cost-effectiveness analyses (CEAs), budget impact studies, and clinical guidelines. We compared the evidence cited in these coverage policies for panels with the evidence cited in policies for other intervention types (pharmaceuticals, medical devices, diagnostic tests and imaging, and surgical interventions) as reported in a previous study. Fifty-five coverage policies for panels were included. On average, payers cited clinical guidelines in 84 percent of their coverage policies (range, 73-100 percent), clinical studies in 69 percent (50-87 percent), technology assessments 47 percent (33-86 percent), systematic reviews or meta-analyses 31 percent (7-71 percent), and CEAs 5 percent (0-7 percent). No payers cited budget impact studies in their policies. Payers less often cited clinical studies, systematic reviews, technology assessments, and CEAs in their coverage policies for panels than in their policies for other intervention types. Payers cited clinical guidelines in a comparable proportion of policies for panels and other technology types. Payers in our sample less often cited clinical studies and other evidence types in their coverage policies for panels than they did in their coverage policies for other types of medical interventions.

  4. Non-Intrusive Load Monitoring Assessment: Literature Review and Laboratory Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, R. Scott; Reid, Douglas J.; Hoffman, Michael G.

    2013-07-01

    To evaluate the accuracy of NILM technologies, a literature review was conducted to identify any test protocols or standardized testing approaches currently in use. The literature review indicated that no consistent conventions were currently in place for measuring the accuracy of these technologies. Consequently, PNNL developed a testing protocol and metrics to provide the basis for quantifying and analyzing the accuracy of commercially available NILM technologies. This report discusses the results of the literature review and the proposed test protocol and metrics in more detail.

  5. Monitoring spatial variations in soil organic carbon using remote sensing and geographic information systems

    NASA Astrophysics Data System (ADS)

    Jaber, Salahuddin M.

    Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that may be causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low cost, and spatially continuous information covering large areas on a repetitive basis. The main goal of this study is to evaluate the effects of using Hyperion hyperspectral data on improving the existing remote sensing and GIS-based methodologies for rapidly, efficiently, and accurately measuring SOC content on farmland. The study area is Big Creek Watershed (BCW) in Southern Illinois. The methodology consists of compiling a GIS database (consisting of remote sensing and soil variables) for 303 composite soil samples collected from representative pixels within the Hyperion coverage area of the watershed. Stepwise procedures were used to calibrate and validate linear multiple regression models in which SOC was the response and the remote sensing and soil variables were the predictors. Two models were selected: the best all-variables model and the best raster-variables-only model. Map algebra was implemented to extrapolate the best raster-variables-only model and produce a SOC map for the BCW. This study concluded that Hyperion data marginally improved the predictability of the existing SOC statistical models based on multispectral satellite remote sensing sensors, with a correlation coefficient of 0.37 and root mean square error of 3.19 metric tons/hectare to a 15-cm depth. The total SOC pool of the study area is about 225,232 metric tons to 15-cm depth. The nonforested wetlands contained the highest SOC density (34.3 metric tons/hectare/15cm) with total SOC content of about 2,003.5 metric tons to 15-cm depth, whereas croplands had the lowest SOC density (21.6 metric tons/hectare/15cm) with total SOC content of about 44,571.2 metric tons to 15-cm depth.

  6. SU-C-BRB-05: Determining the Adequacy of Auto-Contouring Via Probabilistic Assessment of Ensuing Treatment Plan Metrics in Comparison with Manual Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nourzadeh, H; Watkins, W; Siebers, J

    Purpose: To determine if auto-contour and manual-contour—based plans differ when evaluated with respect to probabilistic coverage metrics and biological model endpoints for prostate IMRT. Methods: Manual and auto-contours were created for 149 CT image sets acquired from 16 unique prostate patients. A single physician manually contoured all images. Auto-contouring was completed utilizing Pinnacle’s Smart Probabilistic Image Contouring Engine (SPICE). For each CT, three different 78 Gy/39 fraction 7-beam IMRT plans are created: PD with drawn ROIs, PAS with auto-contoured ROIs, and PM with auto-contoured OARs with the manually drawn target. For each plan, 1000 virtual treatment simulations, with a different sampled systematic error for each simulation and a different sampled random error for each fraction, were performed using our in-house GPU-accelerated robustness analyzer tool, which reports the statistical probability of achieving dose-volume metrics, NTCP, TCP, and the probability of achieving the optimization criteria for both auto-contoured (AS) and manually drawn (D) ROIs. Metrics are reported for all possible cross-evaluation pairs of ROI types (AS, D) and planning scenarios (PD, PAS, PM). The Bhattacharyya coefficient (BC) is calculated to measure the PDF similarities for the dose-volume metric, NTCP, TCP, and objectives with respect to the manually drawn contour evaluated on the base plan (D-PD). Results: We observe high BC values (BC≥0.94) for all OAR objectives. BC values for the CTV max-dose objective also signify high resemblance (BC≥0.93) between the distributions. On the other hand, BC values for CTV’s D95 and Dmin objectives are small for AS-PM and AS-PD. NTCP distributions are similar across all evaluation pairs, while TCP distributions for AS-PM and AS-PD sustain variations of up to 6% compared with the other evaluated pairs. Conclusion: No significant probabilistic differences are observed in the metrics when auto-contoured OARs are used. The prostate auto-contour needs improvement to achieve clinically equivalent plans.
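    For reference, the Bhattacharyya coefficient used above has a simple discrete form, BC = Σ√(p_i q_i). The sketch below estimates it from two hypothetical samples; the binning and data are illustrative, not the study's.

```python
# Hedged sketch: discrete Bhattacharyya coefficient between two probability
# distributions estimated from hypothetical samples of a dose metric.
import numpy as np

def bhattacharyya_coefficient(x, y, bins=50):
    """Discrete BC = sum_i sqrt(p_i * q_i) over a shared histogram binning."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
a = rng.normal(0.10, 0.02, 1000)   # e.g. an NTCP-like metric, scenario A (illustrative)
b = rng.normal(0.11, 0.02, 1000)   # the same metric, scenario B (illustrative)
print(f"BC = {bhattacharyya_coefficient(a, b):.3f}")  # 1.0 would mean identical PDFs
```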

  7. Flexible risk metrics for identifying and monitoring conservation-priority species

    USGS Publications Warehouse

    Stanton, Jessica C.; Semmens, Brice X.; McKann, Patrick C.; Will, Tom; Thogmartin, Wayne E.

    2016-01-01

    Region-specific conservation programs should have objective, reliable metrics for species prioritization and progress evaluation that are customizable to the goals of a program, easy to comprehend and communicate, and standardized across time. Regional programs may have vastly different goals, spatial coverage, or management agendas, and one-size-fits-all schemes may not always be the best approach. We propose a quantitative and objective framework for generating metrics for prioritizing species that is straightforward to implement and update, customizable to different spatial resolutions, and based on readily available time-series data. This framework is also well suited to handling missing data and observer error. We demonstrate this approach using North American Breeding Bird Survey (NABBS) data to identify conservation priority species from a list of over 300 landbirds across 33 bird conservation regions (BCRs). To highlight the flexibility of the framework for different management goals and timeframes, we calculate two different metrics. The first identifies species that may be inadequately monitored by NABBS protocols in the near future (TMT, time to monitoring threshold), and the other identifies species likely to decline significantly in the near future based on recent trends (TPD, time to percent decline). Within the individual BCRs we found up to 45% (mean 28%) of the species analyzed had overall declining population trajectories, which could result in up to 37 species declining below a minimum NABBS monitoring threshold in at least one currently occupied BCR within the next 50 years. Additionally, up to 26% (mean 8%) of the species analyzed within the individual BCRs may decline by 30% within the next decade. Conservation workers interested in conserving avian diversity and abundance within these BCRs can use these metrics to plan alternative monitoring schemes or highlight the urgency of those populations experiencing the fastest declines. Moreover, this framework is adaptable to many taxa besides birds for which abundance time-series data are available.

  8. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. © 2016 The Authors.
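    The ROI-based analysis described above can be sketched in a few lines. The ROI size, the nonuniformity definitions, and the synthetic flat-field image below are illustrative assumptions rather than the TG-150 prescriptions.

```python
# Hedged sketch: tile a flat-field image into non-overlapping ROIs and report
# simple signal/SNR nonuniformity figures (illustrative definitions only).
import numpy as np

def roi_stats(image, roi=128):
    """Mean and SNR (mean/std) for each non-overlapping ROI tile."""
    h, w = image.shape
    means, snrs = [], []
    for r in range(0, h - roi + 1, roi):
        for c in range(0, w - roi + 1, roi):
            patch = image[r:r + roi, c:c + roi]
            means.append(patch.mean())
            snrs.append(patch.mean() / patch.std())
    return np.array(means), np.array(snrs)

rng = np.random.default_rng(1)
flat = rng.normal(1000.0, 20.0, size=(1024, 1024))      # synthetic flat-field image
flat *= 1 + 0.02 * np.linspace(-1, 1, 1024)[None, :]    # mild left-right gradient

means, snrs = roi_stats(flat)
signal_nonuniformity = (means.max() - means.min()) / means.mean()
snr_nonuniformity = (snrs.max() - snrs.min()) / snrs.mean()
print(f"signal nonuniformity: {signal_nonuniformity:.3%}")
print(f"SNR nonuniformity:    {snr_nonuniformity:.3%}")
print(f"minimum ROI SNR:      {snrs.min():.1f}")
```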

  9. Evaluation of cassette‐based digital radiography detectors using standardized image quality metrics: AAPM TG‐150 Draft Image Detector Tests

    PubMed Central

    Greene, Travis C.; Nishino, Thomas K.; Willis, Charles E.

    2016-01-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region‐of‐interest (ROI)‐based techniques to measure nonuniformity, minimum signal‐to‐noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX‐1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG‐150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG‐150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG‐150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG‐150 tests can be used as an independent standardized procedure for detector performance assessment. PACS number(s): 87.57.‐s, 87.57.C PMID:27685102

  10. Comparative Simulation Study of Glucose Control Methods Designed for Use in the Intensive Care Unit Setting via a Novel Controller Scoring Metric.

    PubMed

    DeJournett, Jeremy; DeJournett, Leon

    2017-11-01

    Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585 604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), mild hypoglycemia (40-69 mg/dL), normoglycemia (70-140 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
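    The five individual metrics named above are straightforward to compute from a glucose trace. The sketch below uses the abstract's glycemic ranges but hypothetical data, and it does not reproduce the paper's composite score, whose weighting is not specified here.

```python
# Hedged sketch: the five commonly reported glucose-control metrics, computed
# from a hypothetical glucose trace in mg/dL (illustrative data only).
import numpy as np

def glucose_metrics(g):
    """Percent time in each glycemic band plus the coefficient of variation."""
    g = np.asarray(g, dtype=float)
    return {
        "% severe hypoglycemia (<40)":  100.0 * np.mean(g < 40),
        "% mild hypoglycemia (40-69)":  100.0 * np.mean((g >= 40) & (g < 70)),
        "% normoglycemia (70-140)":     100.0 * np.mean((g >= 70) & (g <= 140)),
        "% hyperglycemia (>140)":       100.0 * np.mean(g > 140),
        "coefficient of variation (%)": 100.0 * g.std(ddof=1) / g.mean(),
    }

rng = np.random.default_rng(42)
trace = rng.normal(120, 25, size=5 * 24 * 12)   # 5 simulated days at 5-minute samples
for name, value in glucose_metrics(trace).items():
    print(f"{name}: {value:.1f}")
```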

  11. Your Medicare Coverage: Durable Medical Equipment (DME) Coverage

    MedlinePlus

    ... test, item, or service covered? Durable medical equipment (DME) coverage. How often is it covered? Medicare ... B (Medical Insurance) covers medically necessary durable medical equipment (DME) that your doctor prescribes for use in ...

  12. Global spatially explicit CO2 emission metrics at 0.25° horizontal resolution for forest bioenergy

    NASA Astrophysics Data System (ADS)

    Cherubini, F.

    2015-12-01

    Bioenergy is the most important renewable energy option in studies designed to align with future RCP projections, reaching approximately 250 EJ/yr in RCP2.6, 145 EJ/yr in RCP4.5 and 180 EJ/yr in RCP8.5 by the end of the 21st century. However, many questions enveloping the direct carbon cycle and climate response to bioenergy remain partially unexplored. Bioenergy systems are largely assessed under the default climate neutrality assumption and the time lag between CO2 emissions from biomass combustion and CO2 uptake by vegetation is usually ignored. Emission metrics of CO2 from forest bioenergy are only available on a case-specific basis and their quantification requires processing of a wide spectrum of modelled or observed local climate and forest conditions. On the other hand, emission metrics are widely used to aggregate climate impacts of greenhouse gases to common units such as CO2-equivalents (CO2-eq.), but a spatially explicit analysis of emission metrics with global forest coverage is today lacking. Examples of emission metrics include the global warming potential (GWP), the global temperature change potential (GTP) and the absolute sustained emission temperature (aSET). Here, we couple a global forest model, a heterotrophic respiration model, and a global climate model to produce global spatially explicit emission metrics for CO2 emissions from forest bioenergy. We show their applications to global emissions in 2015 and until 2100 under the different RCP scenarios. We obtain global average values of 0.49 ± 0.03 kgCO2-eq. kgCO2^-1 (mean ± standard deviation) for GWP, 0.05 ± 0.05 kgCO2-eq. kgCO2^-1 for GTP, and 2.14·10^-14 ± 0.11·10^-14 °C (kg yr^-1)^-1 for aSET. We also present results aggregated at a grid, national and continental level. The metrics are found to correlate with the site-specific turnover times and local climate variables like annual mean temperature and precipitation. Simplified equations are derived to infer metric values from the turnover time of the biomass feedstock and the fraction of forest residues left on site after harvest. Our results provide a basis for assessing CO2 emissions from forest bioenergy under different indicators and across various spatial and temporal scales.
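    For reference, the standard (non-spatially-explicit) pulse-emission definitions of two of the metrics named above are sketched below; the paper's CO2-from-bioenergy variants build on these definitions but are not reproduced here.

```latex
% Standard pulse-emission metric definitions (sketch). RF_x is the radiative
% forcing and \Delta T_x the temperature response following a unit pulse of
% species x; H is the time horizon; AGWP/AGTP denote the absolute metrics.
\mathrm{GWP}_x(H) \;=\; \frac{\int_0^H \mathrm{RF}_x(t)\,\mathrm{d}t}{\int_0^H \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t}
                 \;=\; \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)},
\qquad
\mathrm{GTP}_x(H) \;=\; \frac{\Delta T_x(H)}{\Delta T_{\mathrm{CO_2}}(H)}
                 \;=\; \frac{\mathrm{AGTP}_x(H)}{\mathrm{AGTP}_{\mathrm{CO_2}}(H)}
```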

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angers, Crystal Plume; Bottema, Ryan; Buckley, Les

    Purpose: Treatment unit uptime statistics are typically used to monitor radiation equipment performance. The Ottawa Hospital Cancer Centre has introduced the use of Quality Control (QC) test success as a quality indicator for equipment performance and overall health of the equipment QC program. Methods: Implemented in 2012, QATrack+ is used to record and monitor over 1100 routine machine QC tests each month for 20 treatment and imaging units (http://qatrackplus.com/). Using an SQL (structured query language) script, automated queries of the QATrack+ database are used to generate program metrics such as the number of QC tests executed and the percentage of tests passing, at tolerance, or at action. These metrics are compared against machine uptime statistics already reported within the program. Results: Program metrics for 2015 show good correlation between the pass rate of QC tests and uptime for a given machine. For the nine conventional linacs, the QC test success rate was consistently greater than 97%. The corresponding uptimes for these units are better than 98%. Machines that consistently show higher failure or tolerance rates in the QC tests have lower uptimes. This points either to poor machine performance requiring corrective action or to problems with the QC program. Conclusions: QATrack+ significantly improves the organization of QC data but can also aid in overall equipment management. Complementing machine uptime statistics with QC test metrics provides a more complete picture of overall machine performance and can be used to identify areas of improvement in the machine service and QC programs.

  14. Two Birds With One Stone: Estimating Population Vaccination Coverage From a Test-negative Vaccine Effectiveness Case-control Study.

    PubMed

    Doll, Margaret K; Morrison, Kathryn T; Buckeridge, David L; Quach, Caroline

    2016-10-15

    Vaccination program evaluation includes assessment of vaccine uptake and direct vaccine effectiveness (VE). Often examined separately, we propose a design to estimate rotavirus vaccination coverage using controls from a rotavirus VE test-negative case-control study and to examine coverage following implementation of the Quebec, Canada, rotavirus vaccination program. We present our assumptions for using these data as a proxy for coverage in the general population, explore effects of diagnostic accuracy on coverage estimates via simulations, and validate estimates with an external source. We found 79.0% (95% confidence interval, 74.3%, 83.0%) ≥2-dose rotavirus coverage among participants eligible for publicly funded vaccination. No differences were detected between study and external coverage estimates. Simulations revealed minimal bias in estimates with high diagnostic sensitivity and specificity. We conclude that controls from a VE case-control study may be a valuable resource of coverage information when reasonable assumptions can be made for estimate generalizability; high rotavirus coverage demonstrates success of the Quebec program. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
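    The basic estimator here is simply the vaccinated proportion among test-negative controls. The sketch below adds a normal-approximation confidence interval; the counts are hypothetical, not the Quebec study's data.

```python
# Hedged sketch: estimating >=2-dose vaccination coverage from the test-negative
# controls of a case-control VE study (hypothetical counts).
import math

def coverage_with_ci(vaccinated, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = vaccinated / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

controls_vaccinated = 316   # hypothetical: controls with >=2 rotavirus doses
controls_total = 400        # hypothetical: all vaccine-eligible test-negative controls
p, lo, hi = coverage_with_ci(controls_vaccinated, controls_total)
print(f"estimated coverage: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```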

  15. Evaluation of Automatically Quantified Foveal Avascular Zone Metrics for Diagnosis of Diabetic Retinopathy Using Optical Coherence Tomography Angiography.

    PubMed

    Lu, Yansha; Simonett, Joseph M; Wang, Jie; Zhang, Miao; Hwang, Thomas; Hagag, Ahmed M; Huang, David; Li, Dengwang; Jia, Yali

    2018-05-01

    To describe an automated algorithm to quantify the foveal avascular zone (FAZ), using optical coherence tomography angiography (OCTA), and to compare its performance for diagnosis of diabetic retinopathy (DR) and association with best-corrected visual acuity (BCVA) to that of extrafoveal avascular area (EAA). We obtained 3 × 3-mm macular OCTA scans in diabetic patients with various levels of DR and healthy controls. An algorithm based on a generalized gradient vector flow (GGVF) snake model detected the FAZ, and metrics assessing FAZ size and irregularity were calculated. We compared the automated FAZ segmentation to manual delineation and tested the within-visit repeatability of FAZ metrics. The correlations of two conventional FAZ metrics, two novel FAZ metrics, and EAA with DR severity and BCVA, as determined by Early Treatment Diabetic Retinopathy Study (ETDRS) charts, were assessed. Sixty-six eyes from 66 diabetic patients and 19 control eyes from 19 healthy participants were included. The agreement between manual and automated FAZ delineation had a Jaccard index > 0.82, and the repeatability of automated FAZ detection was excellent in eyes at all levels of DR severity. FAZ metrics that incorporated both FAZ size and shape irregularity had the strongest correlation with clinical DR grade and BCVA. Of all the tested OCTA metrics, EAA had the greatest sensitivity in differentiating diabetic eyes without clinical evidence of retinopathy, mild to moderate nonproliferative DR (NPDR), and severe NPDR to proliferative DR from healthy controls. The GGVF snake algorithm tested in this study can accurately and reliably detect the FAZ, using OCTA data at all DR severity grades, and may be used to obtain clinically useful information from OCTA data regarding macular ischemia in patients with diabetes. While FAZ metrics can provide clinically useful information regarding macular ischemia, and possibly visual acuity potential, EAA measurements may be a better biomarker for DR.

  16. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.

  17. Initial Ada components evaluation

    NASA Technical Reports Server (NTRS)

    Moebes, Travis

    1989-01-01

    The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
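    A toy illustration of the two metric families described above follows. The tokenizer, operator list, and code fragment are simplifying assumptions; a tool such as SAVVAS would parse the language properly rather than work from tokens.

```python
# Hedged sketch: token-based Halstead size/difficulty and a decision-count
# estimate of cyclomatic complexity for a toy code fragment (illustrative only).
import math
import re
from collections import Counter

OPERATORS = {"+", "-", "*", "/", ":=", "=", "<", ">", ";", "if", "then",
             "else", "loop", "and", "or"}
DECISIONS = {"if", "loop", "and", "or"}   # predicates counted for cyclomatic complexity

def analyze(source):
    """Approximate Halstead and McCabe metrics from a crude token stream."""
    tokens = re.findall(r"[A-Za-z_]\w*|:=|[^\s\w]", source)
    ops = Counter(t for t in tokens if t in OPERATORS)
    operands = Counter(t for t in tokens if t not in OPERATORS)
    n1, n2 = len(ops), len(operands)              # distinct operators / operands
    N1, N2 = sum(ops.values()), sum(operands.values())
    length = N1 + N2                              # Halstead program length
    volume = length * math.log2(n1 + n2)          # Halstead volume
    difficulty = (n1 / 2) * (N2 / n2)             # Halstead difficulty
    cyclomatic = 1 + sum(ops[d] for d in DECISIONS)
    return length, volume, difficulty, cyclomatic

ada_like = "if x > 0 then y := y + x; else y := y - x;"
L, V, D, M = analyze(ada_like)
print(f"length={L}  volume={V:.1f}  difficulty={D:.1f}  cyclomatic={M}")
# The abstract's quality thresholds: length > 260 or difficulty > 190 flag poor quality.
```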

  18. 12 CFR Supplement I to Part 203 - Staff Commentary

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... residences elsewhere. 2(e) Financial institution. 1. General. An institution that met the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its assets fall below the... institution that did not meet the coverage test for a given year, and then meets the test in the succeeding...

  19. 12 CFR Supplement I to Part 203 - Staff Commentary

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... residences elsewhere. 2(e) Financial institution. 1. General. An institution that met the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its assets fall below the... institution that did not meet the coverage test for a given year, and then meets the test in the succeeding...

  20. 12 CFR Supplement I to Part 203 - Staff Commentary

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... residences elsewhere. 2(e) Financial institution. 1. General. An institution that met the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its assets fall below the... institution that did not meet the coverage test for a given year, and then meets the test in the succeeding...

  1. 12 CFR Supplement I to Part 1003 - Staff Commentary

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its... 3. Similarly, an institution that did not meet the coverage test for a given year, and then meets the test in the succeeding year, begins collecting HMDA data in the calendar year following the year...

  2. 12 CFR Supplement I to Part 1003 - Staff Commentary

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its... 3. Similarly, an institution that did not meet the coverage test for a given year, and then meets the test in the succeeding year, begins collecting HMDA data in the calendar year following the year...

  3. 12 CFR Supplement I to Part 203 - Staff Commentary

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... residences elsewhere. 2(e) Financial institution. 1. General. An institution that met the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its assets fall below the... institution that did not meet the coverage test for a given year, and then meets the test in the succeeding...

  4. 12 CFR Supplement I to Part 203 - Staff Commentary

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... residences elsewhere. 2(e) Financial institution. 1. General. An institution that met the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its assets fall below the... institution that did not meet the coverage test for a given year, and then meets the test in the succeeding...

  5. Gravitation theory - Empirical status from solar system experiments.

    NASA Technical Reports Server (NTRS)

    Nordtvedt, K. L., Jr.

    1972-01-01

    Review of historical and recent experiments which speak in favor of a post-Newtonian relativistic gravitational theory. The topics include the foundational experiments, metric theories of gravity, experiments designed to differentiate among the metric theories, and tests of Machian concepts of gravity. It is shown that the metric field for any metric theory can be specified by a series of potential terms with several parameters. It is pointed out that empirical results available up to date yield values of the parameters which are consistent with the prediction of Einstein's general relativity.
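    For reference, the lowest-order parametrized (Eddington-Robertson-Schiff) form of the metric alluded to above is sketched below; the full PPN formalism adds further potentials and parameters not shown here.

```latex
% Lowest-order parametrized metric sketch (geometrized units, U = Newtonian
% potential); general relativity corresponds to \beta = \gamma = 1.
g_{00} = -1 + 2U - 2\beta U^{2} + \dots, \qquad
g_{0i} = 0 + \dots, \qquad
g_{ij} = \bigl(1 + 2\gamma U\bigr)\,\delta_{ij} + \dots
```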

  6. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
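    The sigma metric referred to above is commonly computed from the allowable total error, the observed bias, and the imprecision of the method. A standard formulation is sketched below, hedged in case the article uses a variant.

```latex
% Common laboratory sigma-metric formulation, with all three terms expressed
% as percentages: TE_a = allowable total error, Bias = observed bias,
% CV = coefficient of variation (imprecision).
\sigma \;=\; \frac{\mathrm{TE}_a - \lvert \mathrm{Bias} \rvert}{\mathrm{CV}}
```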

  7. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M.; Rearden, Bradley T.

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  8. Expanded Enlistment Eligibility Metrics (EEEM): Recommendations on a Non-Cognitive Screen for New Soldier Selection

    DTIC Science & Technology

    2010-07-01

    applicants and is pursuing further research on the WPA. An operational test and evaluation (IOT&E) has been initiated to evaluate the new screen ... initial operational test and evaluation (IOT&E) starting in fall 2009.

  9. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE PAGES

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James; ...

    2017-09-21

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.

  10. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.

  11. Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    PubMed Central

    Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong

    2011-01-01

    Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that reliability of global network metrics was overall low, threshold-sensitive, and dependent on several factors: scanning time interval (TI, long-term>short-term), network membership (NM, networks excluding negative correlations>networks including negative correlations), and network type (NT, binarized networks>weighted networks). The dependence was modulated by a further factor, the node definition (ND) strategy. The local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the four factors mentioned above. Simulation analysis revealed that global network metrics were extremely sensitive (though to varying degrees) to noise in functional connectivity, and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics, in contrast, showed high resistance to noise in functional connectivity, and no NT-related differences were found in that resistance. These findings have important implications for how to choose reliable analytical schemes and network metrics of interest. PMID:21818285
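    A minimal sketch of the kind of network construction and metric computation described above follows. The synthetic time series, threshold choice, and node definition are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch: build a binarized functional network by thresholding a
# (synthetic) correlation matrix, then compute one nodal and one global metric.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 90))            # 200 time points x 90 regions (synthetic)
corr = np.corrcoef(ts.T)                   # functional connectivity matrix
np.fill_diagonal(corr, 0.0)

threshold = np.quantile(np.abs(corr), 0.90)          # keep the strongest ~10% of edges
adjacency = (np.abs(corr) >= threshold).astype(int)  # binarized network
G = nx.from_numpy_array(adjacency)

degrees = np.array([d for _, d in G.degree()])  # nodal degree: the most reliable metric above
clustering = nx.average_clustering(G)           # one example of a global metric
print(f"mean nodal degree: {degrees.mean():.2f}")
print(f"global clustering coefficient: {clustering:.3f}")
```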

  12. 40 CFR 63.606 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...

  13. 40 CFR 63.606 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...

  14. 40 CFR 63.626 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...

  15. 40 CFR 63.606 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g (453,600 mg/lb). (2) Method... fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi = concentration of total fluorides from... Where: Mp = total mass flow rate of phosphorus-bearing feed, metric ton/hr (ton/hr). Rp = P2O5 content...

  16. 40 CFR 63.626 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...

  17. 40 CFR 63.626 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... affected facility. P = equivalent P2O5 feed rate, metric ton/hr (ton/hr). K = conversion factor, 1000 mg/g... P2O5 stored, metric tons (tons). K = conversion factor, 1000 mg/g (453,600 mg/lb). (ii) Method 13A or... Where: E = emission rate of total fluorides, g/metric ton (lb/ton) of equivalent P2O5 feed. Csi...

  18. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  19. Which Species Are We Researching and Why? A Case Study of the Ecology of British Breeding Birds.

    PubMed

    McKenzie, Ailsa J; Robertson, Peter A

    2015-01-01

    Our ecological knowledge base is extensive, but the motivations for research are many and varied, leading to unequal species representation and coverage. As this evidence is used to support a wide range of conservation, management and policy actions, it is important that gaps and biases are identified and understood. In this paper we detail a method for quantifying research effort and impact at the individual species level, and go on to investigate the factors that best explain between-species differences in outputs. We do this using British breeding birds as a case study, producing a ranked list of species based on two scientific publication metrics: total number of papers (a measure of research quantity) and h-index (a measure of the number of highly cited papers on a topic--an indication of research quality). Widespread, populous species which are native, resident and in receipt of biodiversity action plans produced significantly higher publication metrics. Guild was also significant, birds of prey the most studied group, with pigeons and doves the least studied. The model outputs for both metrics were very similar, suggesting that, at least in this example, research quantity and quality were highly correlated. The results highlight three key gaps in the evidence base, with fewer citations and publications relating to migrant breeders, introduced species and species which have experienced contractions in distribution. We suggest that the use of publication metrics in this way provides a novel approach to understanding the scale and drivers of both research quantity and impact at a species level and could be widely applied, both taxonomically and geographically.
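    The two publication metrics used above are easy to compute from per-paper citation counts; a minimal sketch with hypothetical numbers follows.

```python
# Hedged sketch: total paper count and h-index for one species, from a
# hypothetical list of per-paper citation counts.
def publication_metrics(citations):
    """Return (number of papers, h-index) for a list of citation counts."""
    ranked = sorted(citations, reverse=True)
    h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return len(citations), h_index

cites = [52, 31, 30, 12, 9, 9, 4, 2, 0]   # hypothetical citation counts
papers, h = publication_metrics(cites)
print(f"papers = {papers}, h-index = {h}")   # papers = 9, h-index = 6
```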

  20. Which Species Are We Researching and Why? A Case Study of the Ecology of British Breeding Birds

    PubMed Central

    McKenzie, Ailsa J.; Robertson, Peter A.

    2015-01-01

    Our ecological knowledge base is extensive, but the motivations for research are many and varied, leading to unequal species representation and coverage. As this evidence is used to support a wide range of conservation, management and policy actions, it is important that gaps and biases are identified and understood. In this paper we detail a method for quantifying research effort and impact at the individual species level, and go on to investigate the factors that best explain between-species differences in outputs. We do this using British breeding birds as a case study, producing a ranked list of species based on two scientific publication metrics: total number of papers (a measure of research quantity) and h-index (a measure of the number of highly cited papers on a topic – an indication of research quality). Widespread, populous species which are native, resident and in receipt of biodiversity action plans produced significantly higher publication metrics. Guild was also significant, birds of prey the most studied group, with pigeons and doves the least studied. The model outputs for both metrics were very similar, suggesting that, at least in this example, research quantity and quality were highly correlated. The results highlight three key gaps in the evidence base, with fewer citations and publications relating to migrant breeders, introduced species and species which have experienced contractions in distribution. We suggest that the use of publication metrics in this way provides a novel approach to understanding the scale and drivers of both research quantity and impact at a species level and could be widely applied, both taxonomically and geographically. PMID:26154759

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, Sydney, NSW

    Purpose: The objective of this study was to investigate the dosimetric benefits of multi-leaf collimator (MLC) tracking for lung SABR treatments in end-to-end clinically realistic planning and delivery scenarios. Methods: The clinical benefits of MLC tracking were assessed using previously delivered treatment plans and physical experiments. The 10 most recent single lesion lung SABR patients were re-planned following a 4D-GTV-based real-time adaptive protocol (PTV defined as the end-of-exhalation GTV plus 5.0 mm margins). The plans were delivered on a Trilogy Varian linac. Electromagnetic transponders (Calypso, Varian Medical Systems, USA) were embedded into a programmable moving phantom (HexaMotion platform) tracked with the Varian Calypso system. For each physical experiment, the MLC positions were collected and used as input for dose reconstruction. For both planned and physical experiments, the OAR dose metrics from the conventional and real-time adaptive SABR plans (Mean Lung Dose (MLD), V20 for lung, and near-maximum dose (D2%) for spine and heart) were statistically compared. The Wilcoxon test was used to compare plan and physical experiment dose metrics. Results: While maintaining target coverage, percentage reductions in dose metrics to the OARs were observed for both planned and physical experiments. Comparing the two plans showed MLD percentage reduction (MLDr) of 25.4% (absolute differences of 1.41 Gy) and 28.9% (1.29%) for the V20r. D2% percentage reduction for spine and heart were respectively 27.9% (0.3 Gy) and 20.2% (0.3 Gy). For the physical experiments, MLDr was 23.9% (1.3 Gy), and V20r 37.4% (1.6%). D2% reduction for spine and heart were respectively 27.3% (0.3 Gy) and 19.6% (0.3 Gy). For both plans and physical experiments, significant OAR dose differences (p<0.05) were found between the conventional SABR and real-time adaptive plans. Conclusion: Application of MLC tracking for lung SABR patients has the potential to reduce the dose to OARs during radiation therapy.
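
    The paired comparison described above (OAR dose metrics from matched conventional and MLC-tracking plans, Wilcoxon test, percentage reductions) can be sketched as follows; the mean lung dose values are hypothetical and the snippet only illustrates the kind of analysis reported, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired mean lung dose (Gy) for 10 patients:
# conventional SABR plans vs. real-time adaptive (MLC-tracking) plans
mld_conventional = np.array([5.6, 4.9, 6.2, 5.1, 4.4, 5.8, 6.0, 4.7, 5.3, 5.5])
mld_tracking     = np.array([4.1, 3.8, 4.6, 3.9, 3.3, 4.4, 4.5, 3.6, 4.0, 4.2])

# Percentage reduction of the dose metric, analogous to the MLDr above
mldr = 100.0 * (mld_conventional - mld_tracking) / mld_conventional
print("mean MLD reduction: %.1f%%" % mldr.mean())

# Paired non-parametric comparison (Wilcoxon signed-rank test)
stat, p = wilcoxon(mld_conventional, mld_tracking)
print("Wilcoxon statistic = %.1f, p = %.4f" % (stat, p))
```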

  2. SU-F-J-123: CT-Based Determination of DIBH Variability and Its Dosimetric Impact On Post-Mastectomy Plus Regional Nodal Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malin, M; Kang, H; Tatebe, K

    Purpose: Breast cancer radiotherapy delivered using voluntary deep inspiration breath-hold (DIBH) requires reproducible breath holds, particularly when matching supraclavicular fields to tangential fields. We studied the impact of variation in DIBHs on CTV and OAR dose metrics by comparing the dose distribution computed on two DIBH CT scans taken at the time of simulation. Methods: Ten patients receiving 50Gy in 25 fractions to the left chestwall and regional lymph nodes were studied. Two simulation CT scans were taken during separate DIBHs along with a free-breathing (FB) scan. The treatment was planned using one DIBH CT. The dose was recomputed on the other two scans using adaptive planning (Pinnacle 9.10) in which the scans are registered using a cross-correlation algorithm. The chestwall, lymph nodes and OARs were contoured on the scans following the RTOG consensus guidelines. The overall translational and rotational variation between the DIBH scans was used to estimate positional variation between breath-holds. Dose metrics between plans were compared using paired t-tests (p < 0.05) and means and standard deviations were reported. Results: The registration parameters were sub-millimeter and sub-degree. Although DIBH significantly reduced mean heart dose by 2.4Gy compared to FB (p < 0.01), no significant changes in dose were observed for targets or OARs between the two DIBH scans. Nodal coverage as assessed by V90% was 90%±8% and 89%±8% for supraclavicular and 99%±2% and 97%±22% for IM nodes. Though a significant decrease (10.5%±12.4%) in lung volume in the second DIBH CT was observed, the lung V20Gy was unchanged (14±2% and 14±3%) between the two DIBH scans. Conclusion: While the lung volume often varied between DIBHs, the CTV and OAR dose metrics were largely unchanged. This indicates that manual DIBH has the potential to provide consistent dose delivery to the chestwall and regional nodes targets when using matched fields.
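
    A minimal sketch of the paired comparison reported above, using invented lung V20Gy values for the two DIBH scans; it illustrates only the paired t-test criterion (p < 0.05), not the authors' planning-system workflow.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical lung V20Gy (%) computed on the planning DIBH CT and on the
# second DIBH CT for ten patients (values are illustrative only)
v20_dibh1 = np.array([13.8, 14.5, 12.9, 15.1, 14.2, 13.5, 14.8, 12.7, 14.0, 13.9])
v20_dibh2 = np.array([14.1, 14.3, 13.2, 15.0, 14.6, 13.3, 14.9, 12.9, 13.8, 14.2])

t, p = ttest_rel(v20_dibh1, v20_dibh2)   # paired t-test between the two breath-holds
print("mean V20Gy: %.1f%% vs %.1f%%" % (v20_dibh1.mean(), v20_dibh2.mean()))
print("paired t = %.2f, p = %.3f" % (t, p))
```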

  3. STU black holes and SgrA*

    NASA Astrophysics Data System (ADS)

    Cvetič, M.; Gibbons, G. W.; Pope, C. N.

    2017-08-01

    The equations of null geodesics in the STU family of rotating black hole solutions of supergravity theory, which may be considered as deformations of the vacuum Kerr metric, are completely integrable. We propose that they be used as a foil to test, for example, with what precision the gravitational field external to the black hole at the centre of our galaxy is given by the Kerr metric. By contrast with some metrics proposed in the literature, the STU metrics satisfy by construction the dominant and strong energy conditions. Our considerations may be extended to include the effects of a cosmological term. We show that these metrics permit a straightforward calculation of the properties of black hole shadows.

  4. Seabird nest counts: A test of monitoring metrics using Red-tailed Tropicbirds

    USGS Publications Warehouse

    Seavy, N.E.; Reynolds, M.H.

    2009-01-01

    Counts of nesting birds are often used to monitor the abundance of breeding pairs at colonies. Mean incubation counts (MICs) are counts of nests with eggs at intervals that correspond to the mean incubation period of a species. The sum of all counts during the nesting season (MICtotal) and the highest single count during the season (MICmax) are metrics that can be generated from this method. However, the utility of these metrics as measures of the number of breeding pairs has not been well tested. We used two approaches to evaluate the bias and precision of MIC metrics for quantifying annual variation in the number of breeding Red-tailed Tropicbirds (Phaethon rubricauda) nesting on two islands in the Papahānaumokuākea Marine National Monument in the northwest Hawaiian Islands. First, we used data from nest plots with individually marked birds to generate simulated MIC metrics that we compared to the known number of nesting individuals. The MICtotal overestimated the number of pairs by about 5%, whereas the MICmax underestimated the number of pairs by about 60%. However, both metrics exhibited similar precision. Second, we used a 12-yr time series of island-wide MICs to compare estimates of temporal trend and annual variation using the MICmax and MICtotal. The 95% confidence intervals for the trend estimates were overlapping and the residual standard errors for the two metrics were similar. Our results suggest that both metrics offered similar precision for indices of breeding pairs of Red-tailed Tropicbirds, but that MICtotal was more accurate. © 2009 Association of Field Ornithologists.
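
    The two MIC metrics are straightforward to compute once a season's counts are in hand; a small sketch with hypothetical counts:

```python
# Hypothetical nest counts (nests with eggs) taken once per mean incubation
# period over a single breeding season at one colony
counts = [12, 35, 41, 38, 22, 9]

mic_total = sum(counts)   # MICtotal: sum of all counts in the season
mic_max   = max(counts)   # MICmax: highest single count in the season

print("MICtotal =", mic_total)   # tends to slightly overestimate breeding pairs
print("MICmax   =", mic_max)     # can substantially underestimate breeding pairs
```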

  5. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
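
    A rough sketch of how beam's-eye-view (BEV) scores can prioritize the poll step of a pattern search; the angle grid, the stand-in objective and the random BEV scores are all hypothetical, so this only illustrates the "try higher-scoring directions first" idea rather than the authors' framework.

```python
import random

random.seed(1)

# Hypothetical beam's-eye-view dosimetric score per candidate gantry angle
# (higher = a priori more promising); in practice these come from dose ray tracing.
bev_score = {a: random.random() for a in range(0, 360, 10)}

def fluence_objective(angles):
    """Stand-in for the expensive fluence-optimization objective; lower is better."""
    return sum((a % 97) * 0.01 for a in angles) - sum(bev_score[a] for a in angles)

def poll_step(current, mesh=10):
    """One poll step: try mesh neighbours of each beam, best BEV-scored trials first."""
    base = fluence_objective(current)
    candidates = []
    for i, a in enumerate(current):
        for na in ((a + mesh) % 360, (a - mesh) % 360):
            if na not in current:
                trial = list(current)
                trial[i] = na
                candidates.append((bev_score[na], trial))
    candidates.sort(key=lambda c: -c[0])   # larger dosimetric scores are tested first
    for _, trial in candidates:
        if fluence_objective(trial) < base:
            return trial, True             # improvement found: accept, keep mesh size
    return current, False                  # complete poll failed: mesh would be refined

plan = [0, 80, 160, 240, 320]
print(poll_step(plan))
```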

  6. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  7. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    PubMed

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.

  8. Sensitivity of selected landscape pattern metrics to land-cover misclassification and differences in land-cover composition

    Treesearch

    James D. Wickham; Robert V. O' Neill; Kurt H. Riitters; Timothy G. Wade; K. Bruce Jones

    1997-01-01

    Calculation of landscape metrics from land-cover data is becoming increasingly common. Some studies have shown that these measurements are sensitive to differences in land-cover composition, but none are known to have also tested their sensitivity to land-cover misclassification. An error simulation model was written to test the sensitivity of selected landscape...

  9. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
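
    The ROC-style evaluation can be illustrated with a short simulation; the metric deviations, dosimetric impacts and the 8% standard deviation are invented, and only the thresholding logic follows the abstract (flag a plan when the metric deviates by more than k standard deviations, count a true positive when the simulated dosimetric impact exceeds 25%).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plan-consistency metric deviations (% from the institutional mean)
# and the dosimetric impact (%) of the simulated error in each plan
metric_dev  = rng.normal(0.0, 8.0, size=200)              # assume an 8% site std dev
dose_impact = np.abs(metric_dev) * 3 + rng.normal(0.0, 5.0, size=200)

sigma = 8.0
true_error = dose_impact > 25.0                            # "catastrophic" ground truth

for k in (1.0, 2.0, 3.0):                                  # threshold in std deviations
    flagged = np.abs(metric_dev) > k * sigma
    tpr = (flagged & true_error).sum() / max(true_error.sum(), 1)
    fpr = (flagged & ~true_error).sum() / max((~true_error).sum(), 1)
    print("k = %.0f sigma: TPR = %.2f, FPR = %.2f" % (k, tpr, fpr))
```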

  10. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.

  11. Coronally advanced flap with and without a xenogenic collagen matrix in the treatment of multiple recessions: a randomized controlled clinical study.

    PubMed

    Cardaropoli, Daniele; Tamagnone, Lorenzo; Roffredo, Alessandro; Gaveglio, Lorena

    2014-01-01

    Multiple adjacent recession defects were treated in 32 patients using a coronally advanced flap (CAF) with or without a collagen matrix (CM). The percentage of root coverage was 81.49% ± 23.45% (58% complete root coverage) for CAF sites (control) and 93.25% ± 10.01% root coverage (72% complete root coverage) for CM plus CAF sites (test). The results achieved in the test group were significantly greater than in the control group, indicating that CM plus CAF is a suitable option for the treatment of multiple adjacent gingival recessions.

  12. Determinants of Network News Coverage of the Oil Industry during the Late 1970s.

    ERIC Educational Resources Information Center

    Erfle, Stephen; McMillan, Henry

    1989-01-01

    Examines which firms and products best predict media coverage of the oil industry. Reports that price variations in heating oil and gasoline correlate with the extent of news coverage provided by network television. (MM)

  13. Numerical distance effect size is a poor metric of approximate number system acuity.

    PubMed

    Chesney, Dana

    2018-04-12

    Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.

  14. Prevention for those who can pay: insurance reimbursement of genetic-based preventive interventions in the liminal state between health and disease

    PubMed Central

    Prince, Anya E.R.

    2015-01-01

    Clinical use of genetic testing to predict adult onset conditions allows individuals to minimize or circumvent disease when preventive medical interventions are available. Recent policy recommendations and changes expand patient access to information about asymptomatic genetic conditions and create mechanisms for expanded insurance coverage for genetic tests. The American College of Medical Genetics and Genomics (ACMG) recommends that laboratories provide incidental findings of medically actionable genetic variants after whole genome sequencing. The Patient Protection and Affordable Care Act (ACA) established mechanisms to mandate coverage for genetic tests, such as BRCA. The ACA and ACMG, however, do not address insurance coverage for preventive interventions. These policies equate access to testing as access to prevention, without exploring the accessibility and affordability of interventions. In reality, insurance coverage for preventive interventions in asymptomatic adults is variable given the US health insurance system's focus on treatment. Health disparities will be exacerbated if only privileged segments of society can access preventive interventions, such as prophylactic surgeries, screenings, or medication. To ensure equitable access to interventions, federal or state legislatures should mandate insurance coverage for both predictive genetic testing and recommended follow-up interventions included in a list established by an expert panel or regulatory body. PMID:26339500

  15. Prevention for those who can pay: insurance reimbursement of genetic-based preventive interventions in the liminal state between health and disease.

    PubMed

    Prince, Anya E R

    2015-07-01

    Clinical use of genetic testing to predict adult onset conditions allows individuals to minimize or circumvent disease when preventive medical interventions are available. Recent policy recommendations and changes expand patient access to information about asymptomatic genetic conditions and create mechanisms for expanded insurance coverage for genetic tests. The American College of Medical Genetics and Genomics (ACMG) recommends that laboratories provide incidental findings of medically actionable genetic variants after whole genome sequencing. The Patient Protection and Affordable Care Act (ACA) established mechanisms to mandate coverage for genetic tests, such as BRCA. The ACA and ACMG, however, do not address insurance coverage for preventive interventions. These policies equate access to testing as access to prevention, without exploring the accessibility and affordability of interventions. In reality, insurance coverage for preventive interventions in asymptomatic adults is variable given the US health insurance system's focus on treatment. Health disparities will be exacerbated if only privileged segments of society can access preventive interventions, such as prophylactic surgeries, screenings, or medication. To ensure equitable access to interventions, federal or state legislatures should mandate insurance coverage for both predictive genetic testing and recommended follow-up interventions included in a list established by an expert panel or regulatory body.

  16. Leaflets and continual educational offerings led to increased coverage rate of newborn hearing screening in Akita.

    PubMed

    Sato, Teruyuki; Nakazawa, Misao; Takahashi, Shin; Mizuno, Tomomi; Sato, Akira; Noguchi, Atsuko; Sato, Megumi; Katagiri, Sadako; Yamada, Takechiyo

    2018-08-01

    Newborn hearing screening (NHS) has been actively performed in Japan since 2001. The NHS coverage rate has increased each year in Akita Prefecture. We analyzed the details of the NHS program and how the Akita leaflets and the many educational offerings about the importance of NHS led to the high NHS coverage rate. A retrospective study was conducted in liveborn newborns in hospitals and in clinics where hearing screening was performed from the program's beginning in 2001 through the end of 2015. We describe the chronological history of NHS. The outcome data of NHS were collected from our department and analyzed. From the founding of the program in 2001 to 2015, the live birth rate in Akita continually declined. Nevertheless, the number of infants receiving NHS rose each year. Since 2012, the coverage rate of NHS has been over 90%. From 2001 to 2015, 75,331 newborns constituted the eligible population for the NHS program. Since 2012, the number of NHS tests has stabilized. We prepared educational leaflets for Akita Prefecture early in 2002. We also provided many educational classes about the importance of NHS for not only pregnant women but also professionals including obstetricians and gynecologists, pediatricians and municipal staff members. The NHS program received the complete endorsement of the Akita Association of Obstetricians and Gynecologists in 2010. The largest increase in the NHS coverage rate occurred from 2001 to 2002, and the second largest increase occurred from 2009 to 2010. The number of participating institutions increased the coverage rate. The coverage rate is strongly correlated with the number of participating institutions (rs=0.843, p<0.001, Spearman's rank correlation coefficient). Comparing the coverage rate for 5 years before and after the Akita Association of Obstetricians and Gynecologists reached their consensus on the importance of NHS, the coverage rate after 2010 was significantly higher than before 2010 (p<0.001, paired sample t-test). The NHS coverage rate ultimately reached 95.4% without need for legislation or subsidization. The number of participating institutions increased each year, and the number of NHS tests and the coverage rate increased proportionately. The number of participating institutions statistically has a strong correlation with the number of NHS tests and the coverage rate. Our research indicates that the Akita leaflets and the provision of educational sessions about the importance of NHS were the most significant factors in establishing the high NHS coverage rate. Copyright © 2017 Elsevier B.V. All rights reserved.
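
    The reported association between the number of participating institutions and the coverage rate is a Spearman rank correlation; a minimal sketch with hypothetical yearly values:

```python
from scipy.stats import spearmanr

# Hypothetical yearly data: participating institutions vs. NHS coverage rate (%)
institutions = [8, 10, 12, 15, 18, 21, 24, 26, 27, 28]
coverage_pct = [55, 58, 63, 70, 76, 82, 88, 91, 94, 95]

rho, p = spearmanr(institutions, coverage_pct)
print("Spearman rs = %.3f, p = %.3g" % (rho, p))
```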

  17. Coverage of neonatal screening: failure of coverage or failure of information system

    PubMed Central

    Ades, A; Walker, J; Jones, R; Smith, I

    2001-01-01

    OBJECTIVES—To evaluate neonatal screening coverage using data routinely collected on the laboratory computer.
SUBJECTS—90 850 births in 14 North East Thames community provider districts over a 21 month period.
METHODS—Births notified to local child health computers are electronically copied to the neonatal laboratory computer system, and incoming Guthrie cards are matched against these birth records before testing. The computer records for the study period were processed to estimate the coverage of the screening programme.
RESULTS—Out of an estimated 90 850 births notified to child health computers, all but 746 (0.82%) appeared to have been screened or could be otherwise accounted for (0.14% in non-metropolitan districts, 0.39% in suburban districts, and 1.68% in inner city districts). A further 893 resident infants had been tested, but could not be matched to the list of notified resident births. The calculated programme coverage already exceeds the 99.5% National Audit Programme standard in 7/14 districts. Elsewhere it is not clear whether it is coverage or recording of coverage that is low.
CONCLUSION—Previous reports of low coverage may have been exaggerated. High coverage can be shown using routine information systems. Design of information systems that deliver accurate measures of coverage would be more useful than comparison of inadequately measured coverage with a national standard. The new NHS number project will create an opportunity to achieve this.
 PMID:11369561

  18. Cost-effectiveness of increasing cervical cancer screening coverage in the Middle East: An example from Lebanon.

    PubMed

    Sharma, Monisha; Seoud, Muhieddine; Kim, Jane J

    2017-01-23

    Most cervical cancer (CC) cases in Lebanon are detected at later stages and associated with high mortality. There is no national organized CC screening program, so screening is opportunistic and limited to women who can pay out-of-pocket. Therefore, a small percentage of women receive repeated screenings while most are under- or never screened. We evaluated the cost-effectiveness of increasing screening coverage and extending intervals. We used an individual-based Monte Carlo model simulating HPV and CC natural history and screening. We calibrated the model to epidemiological data from Lebanon, including CC incidence and HPV type distribution. We evaluated cytology and HPV DNA screening for women aged 25-65 years, varying coverage from 20 to 70% and frequency from 1 to 5 years. At 20% coverage, annual cytologic screening reduced lifetime CC risk by 14% and had an incremental cost-effectiveness ratio of I$80,670/year of life saved (YLS), far exceeding Lebanon's gross domestic product (GDP) per capita (I$17,460), a commonly cited cost-effectiveness threshold. By comparison, increasing cytologic screening coverage to 50% and extending screening intervals to 3 and 5 years provided greater CC reduction (26.1% and 21.4%, respectively) at lower costs compared to 20% coverage with annual screening. Screening every 5 years with HPV DNA testing at 50% coverage provided greater CC reductions than cytology at the same frequency (23.4%) and was cost-effective assuming a cost of I$18 per HPV test administered (I$12,210/YLS); HPV DNA testing every 4 years at 50% coverage was also cost-effective at the same cost per test (I$16,340). Increasing coverage of annual cytology was not found to be cost-effective. Current practice of repeated cytology in a small percentage of women is inefficient. Increasing coverage to 50% with extended screening intervals provides greater health benefits at a reasonable cost and can more equitably distribute health gains. Novel HPV DNA strategies offer greater CC reductions and may be more cost-effective than cytology. Copyright © 2016 Elsevier Ltd. All rights reserved.
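
    The cost-effectiveness logic is an incremental cost-effectiveness ratio (ICER) compared against a GDP-per-capita threshold; the sketch below uses illustrative incremental costs and life-years chosen only to land near the ICERs quoted above, not the study's model outputs.

```python
gdp_per_capita = 17460            # I$, the cost-effectiveness threshold cited above

def icer(delta_cost, delta_yls):
    """Incremental cost-effectiveness ratio: extra cost per extra year of life saved."""
    return delta_cost / delta_yls

# Each strategy vs. its next-best comparator (hypothetical increments per woman)
strategies = {
    "annual cytology, 20% coverage":       icer(delta_cost=290.0, delta_yls=0.0036),
    "HPV DNA every 5 years, 50% coverage": icer(delta_cost=55.0,  delta_yls=0.0045),
}

for name, value in strategies.items():
    verdict = "cost-effective" if value < gdp_per_capita else "not cost-effective"
    print("%s: I$%.0f/YLS -> %s" % (name, value, verdict))
```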

  19. STU black holes and SgrA*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cvetič, M.; Gibbons, G.W.; Pope, C.N., E-mail: cvetic@physics.upenn.edu, E-mail: gwg1@cam.ac.uk, E-mail: pope@physics.tamu.edu

    The equations of null geodesics in the STU family of rotating black hole solutions of supergravity theory, which may be considered as deformations of the vacuum Kerr metric, are completely integrable. We propose that they be used as a foil to test, for example, with what precision the gravitational field external to the black hole at the centre of our galaxy is given by the Kerr metric. By contrast with some metrics proposed in the literature, the STU metrics satisfy by construction the dominant and strong energy conditions. Our considerations may be extended to include the effects of a cosmological term. We show that these metrics permit a straightforward calculation of the properties of black hole shadows.

  20. 12 CFR Appendix B to Part 1003 - Form and Instructions for Data Collection on Ethnicity, Race, and Sex

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the test for coverage under HMDA in year 1, and then ceases to meet the test (for example, because its... 3. Similarly, an institution that did not meet the coverage test for a given year, and then meets the test in the succeeding year, begins collecting HMDA data in the calendar year following the year...

  1. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    PubMed

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may result in unbalanced deployment; therefore, we need a more intelligent approach to sensor deployment. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast but also showed a significant improvement in solution quality.
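
    A minimal sketch of the Monte Carlo coverage evaluation that such a genetic algorithm could use as its fitness function; the field size, sensing radius and node count are arbitrary, and in the approach described above the number of random samples would start small and grow over the generations.

```python
import math
import random

random.seed(42)

FIELD = 100.0    # square sensor field, 100 x 100 (arbitrary units)
RADIUS = 12.0    # sensing radius of each node

def coverage(sensors, n_samples=2000):
    """Monte Carlo estimate of the fraction of the field covered by any sensor."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.uniform(0, FIELD), random.uniform(0, FIELD)
        if any(math.hypot(x - sx, y - sy) <= RADIUS for sx, sy in sensors):
            hits += 1
    return hits / n_samples

# Random deployment of 30 nodes, i.e. the baseline the GA is compared against
random_deploy = [(random.uniform(0, FIELD), random.uniform(0, FIELD)) for _ in range(30)]
print("random deployment coverage ~ %.2f" % coverage(random_deploy))
```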

  2. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

    We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by doing linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, with several standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.

  3. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to the in silico assessment of chemical liabilities; the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.

  4. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
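
    A small sketch of the event-metric idea, assuming an hourly temperature series and a single user-defined threshold (values invented); it extracts the frequency, duration, area and magnitude of exceedance events as described above.

```python
import numpy as np

# Hypothetical hourly stream temperatures (deg C) and a 20 deg C threshold
temp = np.array([15.1, 16.2, 17.8, 19.4, 21.0, 22.3, 21.5, 19.9, 18.2, 17.0,
                 16.4, 18.9, 20.6, 22.8, 23.1, 21.7, 19.3, 17.6, 16.0, 15.2])
threshold = 20.0

above = temp > threshold
edges = np.diff(above.astype(int))          # locate starts/ends of exceedance events
starts = np.where(edges == 1)[0] + 1
ends = np.where(edges == -1)[0] + 1
if above[0]:
    starts = np.r_[0, starts]
if above[-1]:
    ends = np.r_[ends, above.size]

events = list(zip(starts, ends))
print("frequency :", len(events))                                        # events
print("duration  :", [int(e - s) for s, e in events])                    # hours
print("area      :", [round(float((temp[s:e] - threshold).sum()), 2)     # deg C * h
                      for s, e in events])
print("magnitude :", [round(float(temp[s:e].max() - threshold), 2)       # peak deg C
                      for s, e in events])
```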

  5. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to the in silico assessment of chemical liabilities; the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.

  6. Applications of Logic Coverage Criteria and Logic Mutation to Software Testing

    ERIC Educational Resources Information Center

    Kaminski, Garrett K.

    2011-01-01

    Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…

  7. Important LiDAR metrics for discriminating forest tree species in Central Europe

    NASA Astrophysics Data System (ADS)

    Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco

    2018-03-01

    Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We firstly assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated to each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5% as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
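
    The workflow described above (correlation screening at |r| > 0.7 followed by Random Forest variable importance) can be sketched with stand-in data; the metric names and values are hypothetical and only illustrate the procedure, not the study's LiDAR processing.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300

# Hypothetical stand-ins for a few LiDAR metrics and six tree-species labels
df = pd.DataFrame({
    "mean_intensity_first": rng.normal(40.0, 8.0, n),
    "mean_echo_width":      rng.normal(4.0, 0.6, n),
    "height_p95":           rng.normal(28.0, 5.0, n),
    "canopy_cover":         rng.uniform(0.3, 1.0, n),
})
df["height_p90"] = df["height_p95"] * 0.95 + rng.normal(0.0, 0.3, n)  # deliberately correlated
species = rng.integers(0, 6, n)

# Flag metric pairs with |r| > 0.7, as in the correlation screening above
corr = df.corr().abs()
redundant = [(a, b) for i, a in enumerate(corr.columns)
             for b in corr.columns[i + 1:] if corr.loc[a, b] > 0.7]
print("highly correlated pairs:", redundant)

# Rank the metrics by Random Forest importance for species classification
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(df, species)
for name, imp in sorted(zip(df.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print("%-22s %.3f" % (name, imp))
```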

  8. Test of the FLRW Metric and Curvature with Strong Lens Time Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Kai; Li, Zhengxiang; Wang, Guo-Jian

    We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test only relies on geometric optics. It is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to ∼0.057 or ∼0.041 (1σ), depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool for both testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.

  9. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.

    PubMed

    Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong

    2017-05-18

    In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which uses the manifold structure of the symmetric positive-definite matrix manifold to ensure that the metric matrix remains strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets, and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.

  10. Study of the Ernst metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esteban, E.P.

    In this thesis some properties of the Ernst metric are studied. This metric could provide a model for a Schwarzschild black hole immersed in a magnetic field. In chapter I, some standard properties of the Ernst metric, such as the affine connections, the Riemann, the Ricci, and the Weyl conformal tensor, are calculated. In chapter II, the geodesics described by test particles in the Ernst space-time are studied. As an application a formula for the perihelion shift is derived. In the last chapter a null tetrad analysis of the Ernst metric is carried out and the resulting formalism applied to the study of three problems. First, the algebraic classification of the Ernst metric is determined to be of type I in the Petrov scheme. Secondly, an explicit formula for the Gaussian curvature for the event horizon is derived. Finally, the form of the electromagnetic field is evaluated.

  11. Cross-cultural differences in meter perception.

    PubMed

    Kalender, Beste; Trehub, Sandra E; Schellenberg, E Glenn

    2013-03-01

    We examined the influence of incidental exposure to varied metrical patterns from different musical cultures on the perception of complex metrical structures from an unfamiliar musical culture. Adults who were familiar with Western music only (i.e., simple meters) and those who also had limited familiarity with non-Western music were tested on their perception of metrical organization in unfamiliar (Turkish) music with simple and complex meters. Adults who were familiar with Western music detected meter-violating changes in Turkish music with simple meter but not in Turkish music with complex meter. Adults with some exposure to non-Western music that was unmetered or metrically complex detected meter-violating changes in Turkish music with both simple and complex meters, but they performed better on patterns with a simple meter. The implication is that familiarity with varied metrical structures, including those with a non-isochronous tactus, enhances sensitivity to the metrical organization of unfamiliar music.

  12. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
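
    A minimal sketch of the modelling approach, assuming hypothetical expert scores and 13 generic site variables; scikit-learn's BaggingRegressor (whose default base learner is a decision tree) stands in for the ensemble of 30 bagged regression trees described above.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(1)

# Hypothetical expert-evaluation data: 13 site variables (e.g. shrub cover,
# native forb richness) and a perceived-quality rating on a 0-100 scale
X = rng.uniform(0, 1, size=(400, 13))
quality = (100 * (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2:].mean(axis=1))
           + rng.normal(0, 5, 400))

# Ensemble of 30 bagged regression trees trained on the expert scores
model = BaggingRegressor(n_estimators=30, random_state=1).fit(X, quality)

new_site = rng.uniform(0, 1, size=(1, 13))   # the 13 variables measured at a field site
print("predicted woodland quality score: %.1f" % model.predict(new_site)[0])
```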

  13. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three spaef components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for spaef across environmental disciplines dealing with spatially distributed modelling.

  14. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
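
    A sketch of the SPAEF calculation as it is commonly published (Pearson correlation, ratio of coefficients of variation, and histogram intersection of the z-scored patterns, combined as one minus the Euclidean distance from the ideal point); the toy patterns are synthetic and the binning choice is an assumption.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAtial EFficiency: 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2)."""
    alpha = np.corrcoef(obs.ravel(), sim.ravel())[0, 1]                 # correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # CV ratio
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()                # overlap
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# Toy 2-D evapotranspiration patterns: a "remote sensing" field and a model field
rng = np.random.default_rng(0)
obs = rng.gamma(shape=4.0, scale=0.5, size=(50, 50))
sim = obs * 1.1 + rng.normal(0.0, 0.2, size=(50, 50))
print("SPAEF = %.3f" % spaef(obs, sim))
```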

  15. The Effect of Lexical Coverage and Dictionary Use on L2 Reading Comprehension

    ERIC Educational Resources Information Center

    Prichard, Caleb; Matsumoto, Yuko

    2011-01-01

    This study aims to further understand the role of lexical coverage on L2 reading comprehension. It examines test scores of learners at or near the 90-95% coverage level to determine if this coverage range allows for comprehension of authentic texts. The findings suggest that 92-93% may be a threshold mark at which understanding of a text…

  16. Maize flour fortification in Africa: markets, feasibility, coverage, and costs.

    PubMed

    Fiedler, John L; Afidra, Ronald; Mugambi, Gladys; Tehinse, John; Kabaghe, Gladys; Zulu, Rodah; Lividini, Keith; Smitz, Marc-Francois; Jallier, Vincent; Guyondet, Christophe; Bermudez, Odilia

    2014-04-01

    The economic feasibility of maize flour and maize meal fortification in Kenya, Uganda, and Zambia is assessed using information about the maize milling industry, households' purchases and consumption levels of maize flour, and the incremental cost and estimated price impacts of fortification. Premix costs comprise the overwhelming share of incremental fortification costs and vary by 50% in Kenya and by more than 100% across the three countries. The estimated incremental cost of maize flour fortification per metric ton varies from $3.19 in Zambia to $4.41 in Uganda. Assuming all incremental costs are passed onto the consumer, fortification in Zambia would result in at most a 0.9% increase in the price of maize flour, and would increase annual outlays of the average maize flour-consuming household by 0.2%. The increases for Kenyans and Ugandans would be even less. Although the coverage of maize flour fortification is not likely to be as high as some advocates have predicted, fortification is economically feasible, and would reduce deficiencies of multiple micronutrients, which are significant public health problems in each of these countries. © 2013 New York Academy of Sciences.

  17. Search strategy in a complex and dynamic environment (the Indian Ocean case)

    NASA Astrophysics Data System (ADS)

    Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team

    2014-11-01

    The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 has exposed the disconcerting lack of efficient methods for identifying where to look and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean. Searches, of the lawnmower type, have been unsuccessful so far. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without a precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage search algorithm is modified to search a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to this DSMC algorithm is a metric for ergodicity.

  18. Marketplace Plans Provide Risk Protection, But Actuarial Values Overstate Realized Coverage For Most Enrollees.

    PubMed

    Polyakova, Maria; Hua, Lynn Mei; Bundorf, M Kate

    2017-12-01

    The Affordable Care Act (ACA) has increased the number of Americans with health insurance. Yet many policy makers and consumers have questioned the value of Marketplace plan coverage because of the generally high levels of cost sharing. We simulated out-of-pocket spending for bronze, silver, or gold Marketplace plans (those having actuarial values of 60 percent, 70 percent, and 80 percent, respectively). We found that for the vast majority of consumers, the proportion of covered spending paid by the plans is likely to be far less than their actuarial values, the metric commonly used to convey plan generosity. Indeed, only when annual health care spending exceeds $16,500 for bronze plans, $19,500 for silver plans, and $21,500 for gold plans do plans in these metal tiers cover the proportion of costs matching their actuarial values. While Marketplace plans substantially reduce consumers' exposure to financial risk relative to being uninsured, the use of actuarial values to communicate plan generosity is likely to be misleading to consumers.
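
    The gap the authors describe between a plan's actuarial value and the share of spending it actually covers for an individual enrollee can be illustrated with a toy cost-sharing design; the deductible, coinsurance rate and out-of-pocket maximum below are hypothetical values chosen for illustration, not Marketplace parameters.

      def realized_coverage(total_spending, deductible=6000.0, coinsurance=0.3, oop_max=7500.0):
          # Share of covered spending paid by a toy plan with a deductible,
          # coinsurance above the deductible, and an out-of-pocket maximum.
          below = min(total_spending, deductible)             # enrollee pays all of this
          above = max(total_spending - deductible, 0.0)
          oop = min(below + coinsurance * above, oop_max)     # enrollee share is capped
          return 1.0 - oop / total_spending if total_spending > 0 else 0.0

      # A plan only approaches its nominal (actuarial) share at high annual spending;
      # at low spending it pays far less, which is the pattern the study reports.
      for spend in (2000, 10000, 20000, 50000):
          print(spend, round(realized_coverage(spend), 2))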

  19. Evolving provider payment models and patient access to innovative medical technology.

    PubMed

    Long, Genia; Mortimer, Richard; Sanzenbacher, Geoffrey

    2014-12-01

    Objective: To investigate the evolving use and expected impact of pay-for-performance (P4P) and risk-based provider reimbursement on patient access to innovative medical technology. The study draws on structured interviews with leading private payers representing over 110 million commercially-insured lives, exploring current and planned use of P4P provider payment models, evidence requirements for technology assessment and new technology coverage, and the evolving relationship between the two topics. Respondents reported rapid increases in the use of P4P and risk-sharing programs, with roughly half of commercial lives affected 3 years ago, just under two-thirds today, and an expected three-quarters in 3 years. All reported well-established systems for evaluating new technology coverage. Five of nine reported becoming more selective in the past 3 years in approving new technologies; four anticipated that in the next 3 years there will be a higher evidence requirement for new technology access. Similarly, four expected it will become more difficult for clinically appropriate but costly technologies to gain coverage. All reported planning to rely more on these types of provider payment incentives to control costs, but did not see them as a substitute for payer technology reviews and coverage limitations; they each have a role to play. The study is limited to interviews with nine leading payers with models in place and relies on self-reported data. Likely implications include a more uncertain payment environment for providers, and indirectly for innovative medical technology and future investment, greater reliance on quality and financial metrics, and increased evidence requirements for favorable coverage and utilization decisions. Increasing provider financial risk may challenge the traditional technology adoption paradigm, where payers assumed a 'gatekeeping' role and providers a countervailing patient advocacy role with regard to access to new technology. Increased provider financial risk may result in an additional hurdle to the adoption of new technology, rather than substitution of provider- for payer-based gatekeeping.

  20. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  1. Evaluation of BLAST-based edge-weighting metrics used for homology inference with the Markov Clustering algorithm.

    PubMed

    Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F

    2015-07-10

    Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with more than 25 million edges connecting the more than 60,000 KOG sequences in half a minute using less than half a gigabyte of memory.
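
    A sketch of the four edge-weighting alternatives compared in the study, written for a single BLAST hit; treating the anchored length as the shorter of the two sequence lengths and capping the log-transformed E-value at 300 when BLAST reports E = 0 are assumptions made here for illustration.

      import math

      def edge_weights(bit_score, evalue, query_self_bit, len_query, len_subject):
          # Four candidate weights for the edge between a query and a subject sequence.
          bsr = bit_score / query_self_bit                    # bit score ratio (normalised by the query self-hit)
          anchored = min(len_query, len_subject)              # assumed definition of the anchored length
          bal = bit_score / anchored                          # bit score over anchored length
          nle = -math.log10(evalue) if evalue > 0 else 300.0  # negative common log of the E-value
          return {"bit": bit_score, "BSR": bsr, "BAL": bal, "NLE": nle}

    Whichever weight is chosen becomes the edge attribute handed to MCL; the study's conclusion is that the plain bit score is the safest default.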

  2. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    PubMed

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems, where testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decisions point. Then the Sigma-metric was estimated for each assay and was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric=(%TEa-|%bias|)/%CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at least at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at Five and Six Sigma. None performed below Three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5 and thus laboratories can expect excellent or world class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
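
    The Sigma-metric equation quoted above is simple enough to encode directly; the example figures in the comment are hypothetical, not Alinity assay results.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          # Sigma-metric = (%TEa - |%bias|) / %CV, with all terms expressed in percent
          return (tea_pct - abs(bias_pct)) / cv_pct

      # e.g. a hypothetical assay with an allowable total error of 10%,
      # a bias of 1.5% and a CV of 1.2% performs at (10 - 1.5) / 1.2 = 7.1 sigma.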

  3. Geographical Inequalities in Use of Improved Drinking Water Supply and Sanitation across Sub-Saharan Africa: Mapping and Spatial Analysis of Cross-sectional Survey Data

    PubMed Central

    Pullan, Rachel L.; Freeman, Matthew C.; Gething, Peter W.; Brooker, Simon J.

    2014-01-01

    Background Understanding geographic inequalities in coverage of drinking-water supply and sanitation (WSS) will help track progress towards universal coverage of water and sanitation by identifying marginalized populations, thus helping to control a large number of infectious diseases. This paper uses household survey data to develop comprehensive maps of WSS coverage at high spatial resolution for sub-Saharan Africa (SSA). Analysis is extended to investigate geographic heterogeneity and relative geographic inequality within countries. Methods and Findings Cluster-level data on household reported use of improved drinking-water supply, sanitation, and open defecation were abstracted from 138 national surveys undertaken from 1991–2012 in 41 countries. Spatially explicit logistic regression models were developed and fitted within a Bayesian framework, and used to predict coverage at the second administrative level (admin2, e.g., district) across SSA for 2012. Results reveal substantial geographical inequalities in predicted use of water and sanitation that exceed urban-rural disparities. The average range in coverage seen between admin2 within countries was 55% for improved drinking water, 54% for use of improved sanitation, and 59% for dependence upon open defecation. There was also some evidence that countries with higher levels of inequality relative to coverage in use of an improved drinking-water source also experienced higher levels of inequality in use of improved sanitation (rural populations r = 0.47, p = 0.002; urban populations r = 0.39, p = 0.01). Results are limited by the quantity of WSS data available, which varies considerably by country, and by the reliability and utility of available indicators. Conclusions This study identifies important geographic inequalities in use of WSS previously hidden within national statistics, confirming the necessity for targeted policies and metrics that reach the most marginalized populations. The presented maps and analysis approach can provide a mechanism for monitoring future reductions in inequality within countries, reflecting priorities of the post-2015 development agenda. Please see later in the article for the Editors' Summary PMID:24714528

  4. State of inequality in malaria intervention coverage in sub-Saharan African countries.

    PubMed

    Galactionova, Katya; Smith, Thomas A; de Savigny, Don; Penny, Melissa A

    2017-10-18

    Scale-up of malaria interventions over the last decade have yielded a significant reduction in malaria transmission and disease burden in sub-Saharan Africa. We estimated economic gradients in the distribution of these efforts and of their impacts within and across endemic countries. Using Demographic and Health Surveys we computed equity metrics to characterize the distribution of malaria interventions in 30 endemic countries proxying economic position with an asset-wealth index. Gradients were summarized in a concentration index, tabulated against level of coverage, and compared among interventions, across countries, and against respective trends over the period 2005-2015. There remain broad differences in coverage of malaria interventions and their distribution by wealth within and across countries. In most, economic gradients are lacking or favor the poorest for vector control; malaria services delivered through the formal healthcare sector are much less equitable. Scale-up of interventions in many countries improved access across the wealth continuum; in some, these efforts consistently prioritized the poorest. Expansions in control programs generally narrowed coverage gaps between economic strata; gradients persist in countries where growth was slower in the poorest quintile or where baseline inequality was large. Despite progress, malaria is consistently concentrated in the poorest, with the degree of inequality in burden far surpassing that expected given gradients in the distribution of interventions. Economic gradients in the distribution of interventions persist over time, limiting progress toward equity in malaria control. We found that, in countries with large baseline inequality in the distribution of interventions, even a small bias in expansion favoring the least poor yielded large gradients in intervention coverage while pro-poor growth failed to close the gap between the poorest and least poor. We demonstrated that dimensions of disadvantage compound for the poor; a lack of economic gradients in the distribution of malaria services does not translate to equity in coverage nor can it be interpreted to imply equity in distribution of risk or disease burden. Our analysis testifies to the progress made by countries in narrowing economic gradients in malaria interventions and highlights the scope for continued monitoring of programs with respect to equity.
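
    The economic gradients summarised here are conventionally expressed as a concentration index: twice the covariance between the coverage indicator and the household's fractional wealth rank, divided by mean coverage. The sketch below uses that standard, unweighted formulation with an assumed asset-wealth score; the authors' survey-weighted computation may differ in detail.

      import numpy as np

      def concentration_index(coverage, wealth_score):
          # C = 2 * cov(h, r) / mean(h), where r is the fractional rank by wealth (poorest first)
          coverage = np.asarray(coverage, float)
          order = np.argsort(np.asarray(wealth_score, float))
          n = len(coverage)
          rank = np.empty(n)
          rank[order] = (np.arange(1, n + 1) - 0.5) / n
          return 2.0 * np.cov(coverage, rank, bias=True)[0, 1] / coverage.mean()

      # Negative values indicate a pro-poor distribution (e.g. vector control reaching the
      # poorest), positive values a pro-rich one (e.g. formal-sector malaria services).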

  5. Geographical inequalities in use of improved drinking water supply and sanitation across Sub-Saharan Africa: mapping and spatial analysis of cross-sectional survey data.

    PubMed

    Pullan, Rachel L; Freeman, Matthew C; Gething, Peter W; Brooker, Simon J

    2014-04-01

    Understanding geographic inequalities in coverage of drinking-water supply and sanitation (WSS) will help track progress towards universal coverage of water and sanitation by identifying marginalized populations, thus helping to control a large number of infectious diseases. This paper uses household survey data to develop comprehensive maps of WSS coverage at high spatial resolution for sub-Saharan Africa (SSA). Analysis is extended to investigate geographic heterogeneity and relative geographic inequality within countries. Cluster-level data on household reported use of improved drinking-water supply, sanitation, and open defecation were abstracted from 138 national surveys undertaken from 1991-2012 in 41 countries. Spatially explicit logistic regression models were developed and fitted within a Bayesian framework, and used to predict coverage at the second administrative level (admin2, e.g., district) across SSA for 2012. Results reveal substantial geographical inequalities in predicted use of water and sanitation that exceed urban-rural disparities. The average range in coverage seen between admin2 within countries was 55% for improved drinking water, 54% for use of improved sanitation, and 59% for dependence upon open defecation. There was also some evidence that countries with higher levels of inequality relative to coverage in use of an improved drinking-water source also experienced higher levels of inequality in use of improved sanitation (rural populations r = 0.47, p = 0.002; urban populations r = 0.39, p = 0.01). Results are limited by the quantity of WSS data available, which varies considerably by country, and by the reliability and utility of available indicators. This study identifies important geographic inequalities in use of WSS previously hidden within national statistics, confirming the necessity for targeted policies and metrics that reach the most marginalized populations. The presented maps and analysis approach can provide a mechanism for monitoring future reductions in inequality within countries, reflecting priorities of the post-2015 development agenda. Please see later in the article for the Editors' Summary.

  6. Estimating the impact of test-and-treat strategies on hepatitis B virus infection in China by using an age-structured mathematical model.

    PubMed

    Zu, Jian; Li, Miaolei; Zhuang, Guihua; Liang, Peifeng; Cui, Fuqiang; Wang, Fuzhen; Zheng, Hui; Liang, Xiaofeng

    2018-04-01

    The potential impact of increasing test-and-treat coverage on hepatitis B virus (HBV) infection remains unclear in China. The objective of this study was to develop a dynamic compartmental model at a population level to estimate the long-term effect of this strategy. Based on the natural history of HBV infection and 3 serosurveys of hepatitis B in China, we proposed an age- and time-dependent discrete model to predict the number of new HBV infections, the number of chronic HBV infections, and the number of HBV-related deaths for the time from 2018 to 2050 under 5 different test-and-treat coverage levels and compared them with the current intervention policy. Compared with current policy, if the test-and-treat coverage was increased to 100% since 2018, the numbers of chronic HBV infections, new HBV infections, and HBV-related deaths in 2035 would be reduced by 26.60%, 24.88%, 26.55%, respectively, and in 2050 they would be reduced by 44.93%, 43.29%, 43.67%, respectively. In contrast, if the test-and-treat coverage was increased by 10% every year since 2018, then the numbers of chronic HBV infections, new HBV infections, and HBV-related deaths in 2035 would be reduced by 21.81%, 20.10%, 21.40%, respectively, and in 2050 they would be reduced by 41.53%, 39.89%, 40.32%, respectively. In particular, if the test-and-treat coverage was increased to 75% since 2018, then the annual number of HBV-related deaths would begin to decrease from 2018. If the test-and-treat coverage was increased to above 25% since 2018, then the hepatitis B surface antigen (HBsAg) prevalence for the population aged 1 to 59 years in China would be reduced to below 2% in 2035. Our model also showed that in 2035, the numbers of chronic HBV infections and HBV-related deaths in the 65 to 69 age group would be reduced the most (about 1.6 million and 13 thousand, respectively). Increasing test-and-treat coverage would significantly reduce HBV infection in China, especially in middle-aged and older people. The earlier the treatment and the longer the time, the more significant the reduction. Implementation of a test-and-treat strategy is highly effective in controlling hepatitis B in China.

  7. The potential impact and cost of focusing HIV prevention on young women and men: A modeling analysis in western Kenya.

    PubMed

    Alsallaq, Ramzi A; Buttolph, Jasmine; Cleland, Charles M; Hallett, Timothy; Inwani, Irene; Agot, Kawango; Kurth, Ann E

    2017-01-01

    We compared the impact and costs of HIV prevention strategies focusing on youth (15-24 year-old persons) versus on adults (15+ year-old persons), in a high-HIV burden context of a large generalized epidemic. Compartmental age-structured mathematical model of HIV transmission in Nyanza, Kenya. The interventions focused on youth were high coverage HIV testing (80% of youth), treatment at diagnosis (TasP, i.e., immediate start of antiretroviral therapy [ART]) and 10% increased condom usage for HIV-positive diagnosed youth, male circumcision for HIV-negative young men, pre-exposure prophylaxis (PrEP) for high-risk HIV-negative females (ages 20-24 years), and cash transfer for in-school HIV-negative girls (ages 15-19 years). Permutations of these were compared to adult-focused HIV testing coverage with condoms and TasP. The youth-focused strategy with ART treatment at diagnosis and condom use without adding interventions for HIV-negative youth performed better than the adult-focused strategy with adult testing reaching 50-60% coverage and TasP/condoms. Over the long term, the youth-focused strategy approached the performance of 70% adult testing and TasP/condoms. When high coverage male circumcision also is added to the youth-focused strategy, the combined intervention outperformed the adult-focused strategy with 70% testing, for at least 35 years by averting 94,000 more infections, averting 5.0 million more disability-adjusted life years (DALYs), and saving US$46.0 million over this period. The addition of prevention interventions beyond circumcision to the youth-focused strategy would be more beneficial if HIV care costs are high, or when program delivery costs are relatively high for programs encompassing HIV testing coverage exceeding 70%, TasP and condoms to HIV-infected adults compared to combination prevention programs among youth. For at least the next three decades, focusing in high burden settings on high coverage HIV testing, ART treatment upon diagnosis, condoms and male circumcision among youth may outperform adult-focused ART treatment upon diagnosis programs, unless the adult testing coverage in these programs reaches very high levels (>70% of all adults reached) at similar program costs. Our results indicate the potential importance of age-targeting for HIV prevention in the current era of 'test and start, ending AIDS' goals to ameliorate the HIV epidemic globally.

  8. Estimating the impact of test-and-treat strategies on hepatitis B virus infection in China by using an age-structured mathematical model

    PubMed Central

    Zu, Jian; Li, Miaolei; Zhuang, Guihua; Liang, Peifeng; Cui, Fuqiang; Wang, Fuzhen; Zheng, Hui; Liang, Xiaofeng

    2018-01-01

    The potential impact of increasing test-and-treat coverage on hepatitis B virus (HBV) infection remains unclear in China. The objective of this study was to develop a dynamic compartmental model at a population level to estimate the long-term effect of this strategy. Based on the natural history of HBV infection and 3 serosurveys of hepatitis B in China, we proposed an age- and time-dependent discrete model to predict the number of new HBV infections, the number of chronic HBV infections, and the number of HBV-related deaths for the time from 2018 to 2050 under 5 different test-and-treat coverage levels and compared them with the current intervention policy. Compared with current policy, if the test-and-treat coverage was increased to 100% since 2018, the numbers of chronic HBV infections, new HBV infections, and HBV-related deaths in 2035 would be reduced by 26.60%, 24.88%, 26.55%, respectively, and in 2050 they would be reduced by 44.93%, 43.29%, 43.67%, respectively. In contrast, if the test-and-treat coverage was increased by 10% every year since 2018, then the numbers of chronic HBV infections, new HBV infections, and HBV-related deaths in 2035 would be reduced by 21.81%, 20.10%, 21.40%, respectively, and in 2050 they would be reduced by 41.53%, 39.89%, 40.32%, respectively. In particular, if the test-and-treat coverage was increased to 75% since 2018, then the annual number of HBV-related deaths would begin to decrease from 2018. If the test-and-treat coverage was increased to above 25% since 2018, then the hepatitis B surface antigen (HBsAg) prevalence for the population aged 1 to 59 years in China would be reduced to below 2% in 2035. Our model also showed that in 2035, the numbers of chronic HBV infections and HBV-related deaths in the 65 to 69 age group would be reduced the most (about 1.6 million and 13 thousand, respectively). Increasing test-and-treat coverage would significantly reduce HBV infection in China, especially in middle-aged and older people. The earlier the treatment and the longer the time, the more significant the reduction. Implementation of a test-and-treat strategy is highly effective in controlling hepatitis B in China. PMID:29668627

  9. Repeatability of quantitative 18F-FLT uptake measurements in solid tumors: an individual patient data multi-center meta-analysis.

    PubMed

    Kramer, G M; Liu, Y; de Langen, A J; Jansma, E P; Trigonis, I; Asselin, M-C; Jackson, A; Kenny, L; Aboagye, E O; Hoekstra, O S; Boellaard, R

    2018-06-01

    3'-deoxy-3'-[18F]fluorothymidine (18F-FLT) positron emission tomography (PET) provides a non-invasive method to assess cellular proliferation and response to antitumor therapy. Quantitative 18F-FLT uptake metrics are being used for evaluation of proliferative response in an investigational setting; however, multi-center repeatability needs to be established. The aim of this study was to determine the repeatability of 18F-FLT tumor uptake metrics by re-analyzing individual patient data from previously published reports using the same tumor segmentation method and repeatability metrics across cohorts. A systematic search in PubMed, EMBASE.com and the Cochrane Library from inception to October 2016 yielded five 18F-FLT repeatability cohorts in solid tumors. 18F-FLT avid lesions were delineated using a 50% isocontour adapted for local background on test and retest scans. SUVmax, SUVmean, SUVpeak, proliferative volume and total lesion uptake (TLU) were calculated. Repeatability was assessed using the repeatability coefficient (RC = 1.96 × SD of test-retest differences), linear regression analysis, and the intra-class correlation coefficient (ICC). The impact of different lesion selection criteria was also evaluated. Images from four cohorts containing 30 patients with 52 lesions were obtained and analyzed (ten in breast cancer, nine in head and neck squamous cell carcinoma, and 33 in non-small cell lung cancer patients). A good correlation was found between test-retest data for all 18F-FLT uptake metrics (R² ≥ 0.93; ICC ≥ 0.96). Best repeatability was found for SUVpeak (RC: 23.1%), without significant differences in RC between different SUV metrics. Repeatability of proliferative volume (RC: 36.0%) and TLU (RC: 36.4%) was worse than SUV. Lesion selection methods based on SUVmax ≥ 4.0 improved the repeatability of volumetric metrics (RC: 26-28%), but did not affect the repeatability of SUV metrics. In multi-center studies, differences ≥ 25% in 18F-FLT SUV metrics likely represent a true change in tumor uptake. Larger differences are required for FLT metrics comprising volume estimates when no lesion selection criteria are applied.
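
    The headline repeatability statistic can be reproduced directly from paired test-retest values; expressing differences as a percentage of the pair mean is a common convention assumed here, and the function name is illustrative.

      import numpy as np

      def repeatability_coefficient(test, retest):
          # RC = 1.96 x SD of the test-retest differences, here in percent of the pair mean
          test = np.asarray(test, float)
          retest = np.asarray(retest, float)
          diff_pct = 100.0 * (retest - test) / ((retest + test) / 2.0)
          return 1.96 * np.std(diff_pct, ddof=1)

    A measured change larger than the RC (about 23% for SUVpeak in this meta-analysis) is unlikely to be explained by test-retest variability alone.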

  10. Improving Exposure Science and Dose Metrics for Toxicity Testing, Screening, Prioritizing, and Risk Assessment

    EPA Science Inventory

    Advance the characterization of exposure and dose metrics required to translate advances and findings in computational toxicology to information that can be directly used to support exposure and risk assessment for decision making and improved public health.

  11. Beyond Impervious: Urban Land-Cover Pattern Variation and Implications for Watershed Management

    NASA Astrophysics Data System (ADS)

    Beck, Scott M.; McHale, Melissa R.; Hess, George R.

    2016-07-01

    Impervious surfaces degrade urban water quality, but their coverage alone has not explained the persistent water quality variation observed among catchments with similar rates of imperviousness. Land-cover patterns likely explain much of this variation, although little is known about how they vary among watersheds. Our goal was to analyze a series of urban catchments within a range of impervious cover to evaluate how land-cover varies among them. We then highlight examples from the literature to explore the potential effects of land-cover pattern variability for urban watershed management. High-resolution (1 m²) land-cover data were used to quantify 23 land-cover pattern and stormwater infrastructure metrics within 32 catchments across the Triangle Region of North Carolina. These metrics were used to analyze variability in land-cover patterns among the study catchments. We used hierarchical clustering to organize the catchments into four groups, each with a distinct landscape pattern. Among these groups, the connectivity of combined land-cover patches accounted for 40%, and the size and shape of lawns and buildings accounted for 20%, of the overall variation in land-cover patterns among catchments. Stormwater infrastructure metrics accounted for 8% of the remaining variation. Our analysis demonstrates that land-cover patterns do vary among urban catchments, and that trees and grass (lawns) are divergent cover types in urban systems. The complex interactions among land-covers have several direct implications for the ongoing management of urban watersheds.
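
    A sketch of the grouping step described above: standardise the per-catchment metrics, build an agglomerative hierarchy and cut it into four landscape-pattern groups. The placeholder data and the choice of Ward linkage are illustrative assumptions, not necessarily the authors' settings.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.stats import zscore

      # rows = catchments, columns = land-cover pattern and stormwater infrastructure metrics
      metrics = np.random.rand(32, 23)             # placeholder for the 32 x 23 metric table

      standardized = zscore(metrics, axis=0)       # put all metrics on a common scale
      tree = linkage(standardized, method="ward")  # agglomerative hierarchical clustering
      groups = fcluster(tree, t=4, criterion="maxclust")   # cut the tree into four groups
      print(np.bincount(groups)[1:])               # number of catchments per group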

  12. Examination of ceramic restoration adhesive coverage in cusp-replacement premolar using acoustic emission under fatigue testing.

    PubMed

    Chang, Yen-Hsiang; Yu, Jin-Jie; Lin, Chun-Li

    2014-12-13

    This study investigates CAD/CAM ceramic cusp-replacing restoration resistance with and without buccal cusp replacement under static and dynamic cyclic loads, monitored using the acoustic emission (AE) technique. The cavity was designed in a typical MODP (mesial-occlusal-distal-palatal) restoration failure shape when the palatal cusp has been lost. Two ceramic restorations [without coverage (WOC) and with (WC) buccal cuspal coverage with 2.0 mm reduction in cuspal height] were prepared to perform the fracture and fatigue tests with normal (200 N) and high (600 N) occlusal forces. The load versus AE signals in the fracture and fatigue tests were recorded to evaluate the restored tooth failure resistance. The results showed that non-significant differences in load value in the fracture test and the accumulated number of AE signals under normal occlusal force (200 N) in the fatigue test were found between with and without buccal cuspal coverage restorations. The first AE activity occurring for the WOC restoration was lower than that for the WC restoration in the fracture test. The number of AE signals increased with the cyclic load number. The accumulated number of AE signals for the WOC restoration was 187, higher than that (85) for the WC restoration under 600 N in the fatigue test. The AE technique and fatigue tests employed in this study were used as an assessment tool to evaluate the resistances in large CAD/CAM ceramic restorations. Non-significant differences in the tested fracture loads and accumulated number of AE signals under normal occlusal force (200 N) between different restorations indicated that aggressive treatment (with coverage preparation) in palatal cusp-replacing ceramic premolars require more attention for preserving and protecting the remaining tooth.

  13. Business and Breakthrough: Framing (Expanded) Genetic Carrier Screening for the Public.

    PubMed

    Holton, Avery E; Canary, Heather E; Wong, Bob

    2017-09-01

    A growing body of research has given attention to issues surrounding genetic testing, including expanded carrier screening (ECS), an elective medical test that allows planning or expecting parents to consider the potential occurrence of genetic diseases and disorders in their children. These studies have noted the role of the mass media in driving public perceptions about such testing, giving particular attention to ways in which coverage of genetics and genetic testing broadly may drive public attitudes and choices concerning the morality, legality, ethics, and parental well-being involved in genetic technologies. However, few studies have explored how mass media are covering the newer test, ECS. Drawing on health-related framing studies that have shown in varying degrees the impact particular frames such as gain/loss and thematic/episodic can have on the public, this study examines the frame selection employed by online media in its coverage of ECS. This analysis, combined with an analysis of the sources and topics used in such coverage and how they relate to selected frames, helps to clarify how mass media are covering an increasingly important medical test and offers considerations of how such coverage may inform mass media scholarship as well as health-related practices.

  14. Business and Breakthrough: Framing (Expanded) Genetic Carrier Screening for the Public

    PubMed Central

    Holton, Avery E.; Canary, Heather E.; Wong, Bob

    2018-01-01

    A growing body of research has given attention to issues surrounding genetic testing, including expanded carrier screening (ECS), an elective medical test that allows planning or expecting parents to consider the potential occurrence of genetic diseases and disorders in their children. These studies have noted the role of the mass media in driving public perceptions about such testing, giving particular attention to ways in which coverage of genetics and genetic testing broadly may drive public attitudes and choices concerning the morality, legality, ethics, and parental well-being involved in genetic technologies. However, few studies have explored how mass media are covering the newer test, ECS. Drawing on health-related framing studies that have shown in varying degrees the impact particular frames such as gain/loss and thematic/episodic can have on the public, this study examines the frame selection employed by online media in its coverage of ECS. This analysis—combined with an analysis of the sources and topics used in such coverage and how they relate to selected frames—helps to clarify how mass media are covering an increasingly important medical test and offers considerations of how such coverage may inform mass media scholarship as well as health-related practices. PMID:27483980

  15. Testing the Kerr Black Hole Hypothesis Using X-Ray Reflection Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambi, Cosimo; Nampalliwar, Sourabh; Cárdenas-Avendaño, Alejandro

    We present the first X-ray reflection model for testing the assumption that the metric of astrophysical black holes is described by the Kerr solution. We employ the formalism of the transfer function proposed by Cunningham. The calculations of the reflection spectrum of a thin accretion disk are split into two parts: the calculation of the transfer function and the calculation of the local spectrum at any emission point in the disk. The transfer function only depends on the background metric and takes into account all the relativistic effects (gravitational redshift, Doppler boosting, and light bending). Our code computes the transfer function for a spacetime described by the Johannsen metric and can easily be extended to any stationary, axisymmetric, and asymptotically flat spacetime. Transfer functions and single line shapes in the Kerr metric are compared to those calculated from existing codes to check that we reach the necessary accuracy. We also simulate some observations with NuSTAR and LAD/eXTP and fit the data with our new model to show the potential capabilities of current and future observations to constrain possible deviations from the Kerr metric.

  16. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  17. Examination of the properties of IMRT and VMAT beams and evaluation against pre-treatment quality assurance results

    NASA Astrophysics Data System (ADS)

    Crowe, S. B.; Kairn, T.; Middlebrook, N.; Sutherland, B.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2015-03-01

    This study aimed to provide a detailed evaluation and comparison of a range of modulated beam evaluation metrics, in terms of their correlation with QA testing results and their variation between treatment sites, for a large number of treatments. Ten metrics including the modulation index (MI), fluence map complexity, modulation complexity score (MCS), mean aperture displacement (MAD) and small aperture score (SAS) were evaluated for 546 beams from 122 intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) treatment plans targeting the anus, rectum, endometrium, brain, head and neck and prostate. The calculated sets of metrics were evaluated in terms of their relationships to each other and their correlation with the results of electronic portal imaging based quality assurance (QA) evaluations of the treatment beams. Evaluation of the MI, MAD and SAS suggested that beams used in treatments of the anus, rectum, head and neck were more complex than the prostate and brain treatment beams. Seven of the ten beam complexity metrics were found to be strongly correlated with the results from QA testing of the IMRT beams (p < 0.00008). For example, values of SAS (with multileaf collimator apertures narrower than 10 mm defined as ‘small’) less than 0.2 also identified QA passing IMRT beams with 100% specificity. However, few of the metrics are correlated with the results from QA testing of the VMAT beams, whether they were evaluated as whole 360° arcs or as 60° sub-arcs. Select evaluation of beam complexity metrics (at least MI, MCS and SAS) is therefore recommended, as an intermediate step in the IMRT QA chain. Such evaluation may also be useful as a means of periodically reviewing VMAT planning or optimiser performance.
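
    Of the metrics listed, the small aperture score is the most direct to reproduce: the fraction of open MLC leaf pairs whose gap is narrower than a chosen threshold (10 mm above), averaged over a beam's control points. The sketch assumes leaf-bank positions are supplied in millimetres per leaf pair per control point; it illustrates the definition rather than the authors' implementation.

      import numpy as np

      def small_aperture_score(bank_a, bank_b, threshold_mm=10.0):
          # bank_a, bank_b: arrays of shape (control_points, leaf_pairs), leaf positions in mm
          gaps = np.asarray(bank_b, float) - np.asarray(bank_a, float)   # aperture width per leaf pair
          open_pairs = gaps > 0.0                                        # closed leaf pairs are ignored
          small = (gaps < threshold_mm) & open_pairs
          per_cp = small.sum(axis=1) / np.maximum(open_pairs.sum(axis=1), 1)
          return per_cp.mean()

    In the study, SAS values below 0.2 identified QA-passing IMRT beams with 100% specificity, which is what makes such metrics attractive as a cheap pre-QA screen.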

  18. A support vector machine for predicting defibrillation outcomes from waveform metrics.

    PubMed

    Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J

    2014-03-01

    Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. To investigate robust methods of predicting defibrillation success in VF cardiac arrest patients, by using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was assessed, and AUC>0.6 was used as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. AMSA, slope and RMS waveform metrics performed test validation with AUC>0.6 for predicting termination of VF and return-to-organised rhythm. Predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were respectively 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD); sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD) respectively. AMSA, slope and RMS were the best VF waveform frequency- and time-domain predictors of termination of VF according to test validity assessment. This a priori knowledge can be used in a simplified SVM-optimised design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance without requiring the definition of any threshold value on waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
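
    A sketch of the classification set-up described above, written with scikit-learn: an RBF-kernel SVM with the penalty factor C fixed to 1, trained on per-shock waveform features (AMSA, slope, RMS amplitude) and assessed with two-fold cross-validation. The feature scaling step, the placeholder arrays and their names are assumptions added to make the example runnable.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # X: one row per defibrillation attempt, columns = [AMSA, slope, RMS amplitude]
      # y: 1 if the shock terminated VF, 0 otherwise (random placeholders below)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(115, 3))
      y = rng.integers(0, 2, size=115)

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      accuracy = cross_val_score(model, X, y, cv=2, scoring="accuracy")   # two-fold resampling
      print(accuracy.mean())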

  19. Quantifying esophagogastric junction contractility with a novel HRM topographic metric, the EGJ-Contractile Integral: normative values and preliminary evaluation in PPI non-responders.

    PubMed

    Nicodème, F; Pipa-Muniz, M; Khanna, K; Kahrilas, P J; Pandolfino, J E

    2014-03-01

    Despite its obvious pathophysiological relevance, the clinical utility of measures of esophagogastric junction (EGJ) contractility is unsubstantiated. High-resolution manometry (HRM) may improve upon this with its inherent ability to integrate the magnitude of contractility over time and length of the EGJ. This study aimed to develop a novel HRM metric summarizing EGJ contractility and test its ability to distinguish among subgroups of proton pump inhibitor non-responders (PPI-NRs). 75 normal controls and 88 PPI-NRs were studied. All underwent HRM. PPI-NRs underwent pH-impedance monitoring on PPI therapy scored in terms of acid exposure, number of reflux events, and reflux-symptom correlation and grouped as meeting all criteria, some criteria, or no criteria of abnormality. Control HRM studies were used to establish normal values for candidate EGJ contractility metrics, which were then compared in their ability to differentiate among PPI-NR subgroups. The EGJ contractile integral (EGJ-CI), a metric integrating contractility across the EGJ for three respiratory cycles, best distinguished the All Criteria PPI-NR subgroup from controls and other PPI-NR subgroups. Normal values (median, [IQR]) for this measure were 39 mmHg-cm [25-55 mmHg-cm]. The correlation between the EGJ-CI and a previously proposed metric, the lower esophageal sphincter pressure integral, which used a fixed 10 s time frame and an atmospheric as opposed to gastric pressure reference, was weak. Among HRM metrics tested, the EGJ-CI was best in distinguishing PPI-NRs meeting all criteria of abnormality on pH-impedance testing. Future prospective studies are required to explore its utility in management of broader groups of gastroesophageal reflux disease patients. © 2013 John Wiley & Sons Ltd.

  20. Metrics that differentiate the origins of osmolyte effects on protein stability: a test of the surface tension proposal.

    PubMed

    Auton, Matthew; Ferreon, Allan Chris M; Bolen, D Wayne

    2006-09-01

    Osmolytes that are naturally selected to protect organisms against environmental stresses are known to confer stability to proteins via preferential exclusion from protein surfaces. Solvophobicity, surface tension, excluded volume, water structure changes and electrostatic repulsion are all examples of forces proposed to account for preferential exclusion and the ramifications exclusion has on protein properties. What has been lacking is a systematic way of determining which force(s) is(are) responsible for osmolyte effects. Here, we propose the use of two experimental metrics for assessing the abilities of various proposed forces to account for osmolyte-mediated effects on protein properties. Metric 1 requires prediction of the experimentally determined ability of the osmolyte to bring about folding/unfolding resulting from the application of the force in question (i.e. prediction of the m-value of the protein in osmolyte). Metric 2 requires prediction of the experimentally determined ability of the osmolyte to contract or expand the Stokes radius of the denatured state resulting from the application of the force. These metrics are applied to test separate claims that solvophobicity/solvophilicity and surface tension are driving forces for osmolyte-induced effects on protein stability. The results show clearly that solvophobic/solvophilic forces readily account for protein stability and denatured state dimensional effects, while surface tension alone fails to do so. The agreement between experimental and predicted m-values involves both positive and negative m-values for three different proteins, and as many as six different osmolytes, illustrating that the tests are robust and discriminating. The ability of the two metrics to distinguish which forces account for the effects of osmolytes on protein properties and which do not, provides a powerful means of investigating the origins of osmolyte-protein effects.

  1. Improving Climate Projections Using "Intelligent" Ensembles

    NASA Technical Reports Server (NTRS)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, or the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
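
    The "intelligent" ensemble idea amounts to replacing the equal-weight multi-model mean with a performance-weighted one. The sketch below turns a per-model error metric into weights with a simple inverse-error rule, which is one of several possible weighting schemes and is assumed here for illustration.

      import numpy as np

      def weighted_ensemble_mean(projections, errors):
          # projections: array of shape (models, ...); errors: per-model metric, lower is better
          errors = np.asarray(errors, float)
          weights = 1.0 / errors                  # inverse-error weighting (assumed scheme)
          weights /= weights.sum()
          return np.tensordot(weights, np.asarray(projections, float), axes=1)

      # Passing equal errors recovers the conventional ensemble mean:
      # weighted_ensemble_mean(projections, np.ones(len(projections)))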

  2. How much energy is locked in the USA? Alternative metrics for characterising the magnitude of overweight and obesity derived from BRFSS 2010 data.

    PubMed

    Reidpath, Daniel D; Masood, Mohd; Allotey, Pascale

    2014-06-01

    Four metrics to characterise population overweight are described. Behavioral Risk Factor Surveillance System (BRFSS) data were used to estimate the weight the US population needed to lose to achieve a BMI < 25. The metrics for population-level overweight were total weight, total volume, total energy, and energy value. About 144 million people in the US need to lose 2.4 million metric tonnes. The volume of fat is 2.6 billion litres, equivalent to 1,038 Olympic-size swimming pools. The energy in the fat would power 90,000 households for a year and is worth around 162 million dollars. Four confronting ways of talking about national overweight and obesity are described. The value of the metrics remains to be tested.
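
    The volume figure can be approximately reproduced from the total excess weight with two assumed physical constants (a fat density of roughly 0.9 kg/L and an energy density of roughly 37 MJ/kg); these constants, and the simplification that all excess weight is fat, are assumptions for illustration and may differ from those the authors used.

      excess_weight_kg = 2.4e9              # 2.4 million metric tonnes of excess weight
      fat_density_kg_per_l = 0.9            # assumed density of adipose tissue
      energy_mj_per_kg = 37.0               # assumed energy density of fat (~9 kcal/g)

      volume_l = excess_weight_kg / fat_density_kg_per_l
      energy_mj = excess_weight_kg * energy_mj_per_kg

      print(f"{volume_l / 1e9:.1f} billion litres")    # ~2.7 billion litres if all excess weight were fat
      print(f"{energy_mj:.2e} MJ of stored energy")    # household and dollar equivalents need further assumptions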

  3. Lack of insurance coverage for testing supplies is associated with poorer glycemic control in patients with type 2 diabetes

    PubMed Central

    Bowker, Samantha L.; Mitchell, Chad G.; Majumdar, Sumit R.; Toth, Ellen L.; Johnson, Jeffrey A.

    2004-01-01

    Background Public insurance for testing supplies for self-monitoring of blood glucose is highly variable across Canada. We sought to determine if insured patients were more likely than uninsured patients to use self-monitoring and whether they had better glycemic control. Methods We used baseline survey and laboratory data from patients enrolled in a randomized controlled trial examining the effect of paying for testing supplies on glycemic control. We recruited patients through community pharmacies in Alberta and Saskatchewan from Nov. 2001 to June 2003. To avoid concerns regarding differences in provincial coverage of self-monitoring and medications, we report the analysis of Alberta patients only. Results Among our sample of 405 patients, 41% had private or public insurance coverage for self-monitoring testing supplies. Patients with insurance had significantly lower hemoglobin A1c concentrations than those without insurance coverage (7.1% v. 7.4%, p = 0.03). Patients with insurance were younger, had a higher income, were less likely to have a high school education and were less likely to be married or living with a partner. In multivariate analyses that controlled for these and other potential confounders, lack of insurance coverage for self-monitoring testing supplies was still significantly associated with higher hemoglobin A1c concentrations (adjusted difference 0.5%, p = 0.006). Interpretation Patients without insurance for self-monitoring test strips had poorer glycemic control. PMID:15238494

  4. Prevalence of syphilis in pregnancy and prenatal syphilis testing in Brazil: birth in Brazil study.

    PubMed

    Domingues, Rosa Maria Soares Madeira; Szwarcwald, Celia Landmann; Souza Junior, Paulo Roberto Borges; Leal, Maria do Carmo

    2014-10-01

    Determine the coverage rate of syphilis testing during prenatal care and the prevalence of syphilis in pregnant women in Brazil. This is a national hospital-based cohort study conducted in Brazil with 23,894 postpartum women between 2011 and 2012. Data were obtained using interviews with postpartum women, hospital records, and prenatal care cards. All postpartum women with a reactive serological test result recorded in the prenatal care card or syphilis diagnosis during hospitalization for childbirth were considered cases of syphilis in pregnancy. The Chi-square test was used for determining the disease prevalence and testing coverage rate by region of residence, self-reported skin color, maternal age, and type of prenatal and child delivery care units. Prenatal care covered 98.7% postpartum women. Syphilis testing coverage rate was 89.1% (one test) and 41.2% (two tests), and syphilis prevalence in pregnancy was 1.02% (95% CI 0.84; 1.25). A lower prenatal coverage rate was observed among women in the North region, indigenous women, those with less education, and those who received prenatal care in public health care units. A lower testing coverage rate was observed among residents in the North, Northeast, and Midwest regions, among younger and non-white skin-color women, among those with lower education, and those who received prenatal care in public health care units. An increased prevalence of syphilis was observed among women with < 8 years of education (1.74%), who self-reported as black (1.8%) or mixed (1.2%), those who did not receive prenatal care (2.5%), and those attending public (1.37%) or mixed (0.93%) health care units. The estimated prevalence of syphilis in pregnancy was similar to that reported in the last sentinel surveillance study conducted in 2006. There was an improvement in prenatal care and testing coverage rate, and the goals suggested by the World Health Organization were achieved in two regions. Regional and social inequalities in access to health care units, coupled with other gaps in health assistance, have led to the persistence of congenital syphilis as a major public health problem in Brazil.

  5. Prevalence of syphilis in pregnancy and prenatal syphilis testing in Brazil: Birth in Brazil study

    PubMed Central

    Domingues, Rosa Maria Soares Madeira; Szwarcwald, Celia Landmann; Souza, Paulo Roberto Borges; Leal, Maria do Carmo

    2014-01-01

    OBJECTIVE Determine the coverage rate of syphilis testing during prenatal care and the prevalence of syphilis in pregnant women in Brazil. METHODS This is a national hospital-based cohort study conducted in Brazil with 23,894 postpartum women between 2011 and 2012. Data were obtained using interviews with postpartum women, hospital records, and prenatal care cards. All postpartum women with a reactive serological test result recorded in the prenatal care card or syphilis diagnosis during hospitalization for childbirth were considered cases of syphilis in pregnancy. The Chi-square test was used for determining the disease prevalence and testing coverage rate by region of residence, self-reported skin color, maternal age, and type of prenatal and child delivery care units. RESULTS Prenatal care covered 98.7% postpartum women. Syphilis testing coverage rate was 89.1% (one test) and 41.2% (two tests), and syphilis prevalence in pregnancy was 1.02% (95%CI 0.84;1.25). A lower prenatal coverage rate was observed among women in the North region, indigenous women, those with less education, and those who received prenatal care in public health care units. A lower testing coverage rate was observed among residents in the North, Northeast, and Midwest regions, among younger and non-white skin-color women, among those with lower education, and those who received prenatal care in public health care units. An increased prevalence of syphilis was observed among women with < 8 years of education (1.74%), who self-reported as black (1.8%) or mixed (1.2%), those who did not receive prenatal care (2.5%), and those attending public (1.37%) or mixed (0.93%) health care units. CONCLUSIONS The estimated prevalence of syphilis in pregnancy was similar to that reported in the last sentinel surveillance study conducted in 2006. There was an improvement in prenatal care and testing coverage rate, and the goals suggested by the World Health Organization were achieved in two regions. Regional and social inequalities in access to health care units, coupled with other gaps in health assistance, have led to the persistence of congenital syphilis as a major public health problem in Brazil. PMID:25372167

  6. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  7. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    PubMed

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal levels.
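
    As a rough illustration of the stabilized inverse probability weighting compared in this record, the sketch below fits a propensity model and estimates a weighted risk difference on simulated data; the variable names and data-generating assumptions are illustrative only, not the study's code.

        # Illustrative sketch (not the study's code): stabilized inverse probability
        # weighting (IPW) for a binary treatment, using simulated confounded data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.normal(size=(n, 3))                        # hypothetical confounders
        t = rng.binomial(1, 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3]))))             # treatment
        y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * t + x @ [0.5, 0.5, -0.4]))))  # outcome

        ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]       # propensity score
        sw = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))  # stabilized weights

        # Weighted outcome risks in the treated and untreated pseudo-populations.
        risk1 = np.average(y[t == 1], weights=sw[t == 1])
        risk0 = np.average(y[t == 0], weights=sw[t == 0])
        print(f"IPW risk difference: {risk1 - risk0:.3f}")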

  8. Simulating the Effect of Spectroscopic MRI as a Metric for Radiation Therapy Planning in Patients with Glioblastoma

    PubMed Central

    Cordova, J. Scott; Kandula, Shravan; Gurbani, Saumya; Zhong, Jim; Tejani, Mital; Kayode, Oluwatosin; Patel, Kirtesh; Prabhu, Roshan; Schreibmann, Eduard; Crocker, Ian; Holder, Chad A.; Shim, Hyunsuk; Shu, Hui-Kuo

    2017-01-01

    Due to glioblastoma’s infiltrative nature, an optimal radiation therapy (RT) plan requires targeting infiltration not identified by anatomical magnetic resonance imaging (MRI). Here, high-resolution, whole-brain spectroscopic MRI (sMRI) is used to describe tumor infiltration alongside anatomical MRI and simulate the degree to which it modifies RT target planning. In 11 patients with glioblastoma, data from preRT sMRI scans were processed to give high-resolution, whole-brain metabolite maps normalized by contralateral white matter. Maps depicting choline to N-Acetylaspartate (Cho/NAA) ratios were registered to contrast-enhanced T1-weighted RT planning MRI for each patient. Volumes depicting metabolic abnormalities (1.5−, 1.75−, and 2.0-fold increases in Cho/NAA ratios) were compared with conventional target volumes and contrast-enhancing tumor at recurrence. sMRI-modified RT plans were generated to evaluate target volume coverage and organ-at-risk dose constraints. Conventional clinical target volumes and Cho/NAA abnormalities identified significantly different regions of microscopic infiltration with substantial Cho/NAA abnormalities falling outside of the conventional 60 Gy isodose line (41.1, 22.2, and 12.7 cm3, respectively). Clinical target volumes using Cho/NAA thresholds exhibited significantly higher coverage of contrast enhancement at recurrence on average (92.4%, 90.5%, and 88.6%, respectively) than conventional plans (82.5%). sMRI-based plans targeting tumor infiltration met planning objectives in all cases with no significant change in target coverage. In 2 cases, the sMRI-modified plan exhibited better coverage of contrast-enhancing tumor at recurrence than the original plan. Integration of the high-resolution, whole-brain sMRI into RT planning is feasible, resulting in RT target volumes that can effectively target tumor infiltration while adhering to conventional constraints. PMID:28105468

  9. Sensitivity, Specificity, and Predictive Values: Foundations, Pliabilities, and Pitfalls in Research and Practice.

    PubMed

    Trevethan, Robert

    2017-01-01

    Within the context of screening tests, it is important to avoid misconceptions about sensitivity, specificity, and predictive values. In this article, therefore, foundations are first established concerning these metrics along with the first of several aspects of pliability that should be recognized in relation to those metrics. Clarification is then provided about the definitions of sensitivity, specificity, and predictive values and why researchers and clinicians can misunderstand and misrepresent them. Arguments are made that sensitivity and specificity should usually be applied only in the context of describing a screening test's attributes relative to a reference standard; that predictive values are more appropriate and informative in actual screening contexts, but that sensitivity and specificity can be used for screening decisions about individual people if they are extremely high; that predictive values need not always be high and might be used to advantage by adjusting the sensitivity and specificity of screening tests; that, in screening contexts, researchers should provide information about all four metrics and how they were derived; and that, where necessary, consumers of health research should have the skills to interpret those metrics effectively for maximum benefit to clients and the healthcare system.
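
    To make the distinction drawn above concrete, the minimal sketch below computes sensitivity, specificity, and predictive values from a hypothetical 2x2 screening table, then shows via Bayes' rule how predictive values shift with prevalence while sensitivity and specificity do not; all numbers are assumed for illustration.

        # Illustrative only: screening-test metrics from hypothetical counts.
        def screening_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)   # proportion of diseased who test positive
            spec = tn / (tn + fp)   # proportion of non-diseased who test negative
            ppv = tp / (tp + fp)    # probability of disease given a positive test
            npv = tn / (tn + fn)    # probability of no disease given a negative test
            return sens, spec, ppv, npv

        def predictive_values(sens, spec, prevalence):
            # Bayes' rule: predictive values depend on prevalence, unlike sens/spec.
            ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
            npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
            return ppv, npv

        sens, spec, ppv, npv = screening_metrics(tp=90, fp=40, fn=10, tn=860)
        print(f"sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
        print(predictive_values(sens, spec, prevalence=0.01))   # same test, rarer condition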

  10. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination.

    PubMed

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-05-01

    The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.
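
    The group comparisons reported above rest on standard two-sample tests; the hedged sketch below runs an independent t-test and a one-way ANOVA on invented simulator scores purely to illustrate the statistical machinery, not to reproduce the study's data.

        # Illustrative only: comparing a simulator performance metric between two
        # expertise groups (independent t-test) and across experience bands (ANOVA).
        from scipy import stats

        consultants = [0.82, 0.91, 0.78, 0.88, 0.85, 0.90, 0.84, 0.87]   # assumed scores
        trainees    = [0.70, 0.75, 0.68, 0.80, 0.72, 0.77, 0.74, 0.69, 0.73, 0.71]
        t_stat, p_val = stats.ttest_ind(consultants, trainees)
        print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

        # One-way ANOVA across hypothetical experience bands (0-1, 1-2, 3+ years).
        band_0_1 = [0.66, 0.70, 0.68, 0.72]
        band_1_2 = [0.74, 0.76, 0.73, 0.78]
        band_3p  = [0.84, 0.88, 0.86, 0.90]
        f_stat, p_anova = stats.f_oneway(band_0_1, band_1_2, band_3p)
        print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")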

  11. Is heart rate variability better than routine vital signs for prehospital identification of major hemorrhage?

    PubMed

    Edla, Shwetha; Reisner, Andrew T; Liu, Jianbo; Convertino, Victor A; Carter, Robert; Reifman, Jaques

    2015-02-01

    During initial assessment of trauma patients, metrics of heart rate variability (HRV) have been associated with high-risk clinical conditions. Yet, despite numerous studies, the potential of HRV to improve clinical outcomes remains unclear. Our objective was to evaluate whether HRV metrics provide additional diagnostic information, beyond routine vital signs, for making a specific clinical assessment: identification of hemorrhaging patients who receive packed red blood cell (PRBC) transfusion. Adult prehospital trauma patients were analyzed retrospectively, excluding those who lacked a complete set of reliable vital signs and a clean electrocardiogram for computation of HRV metrics. We also excluded patients who did not survive to admission. The primary outcome was hemorrhagic injury plus different PRBC transfusion volumes. We performed multivariate regression analysis using HRV metrics and routine vital signs to test the hypothesis that HRV metrics could improve the diagnosis of hemorrhagic injury plus PRBC transfusion vs routine vital signs alone. As univariate predictors, HRV metrics in a data set of 402 subjects had comparable areas under receiver operating characteristic curves compared with routine vital signs. In multivariate regression models containing routine vital signs, HRV parameters were significant (P<.05) but yielded areas under receiver operating characteristic curves with minimal, nonsignificant improvements (+0.00 to +0.05). A novel diagnostic test should improve diagnostic thinking and allow for better decision making in a significant fraction of cases. Our findings do not support that HRV metrics add value over routine vital signs in terms of prehospital identification of hemorrhaging patients who receive PRBC transfusion. Published by Elsevier Inc.
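
    The question posed above, whether extra predictors add diagnostic value beyond a baseline model, is commonly checked by comparing ROC AUCs of nested models; the sketch below does this on simulated data, with all variable names and effect sizes assumed for illustration.

        # Illustrative sketch: does adding extra predictors (e.g., HRV metrics) to a
        # vital-signs model improve out-of-sample ROC AUC? Data are simulated.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 800
        vitals = rng.normal(size=(n, 3))                  # hypothetical routine vital signs
        hrv = rng.normal(size=(n, 2))                     # hypothetical HRV metrics
        logit = -1.5 + vitals @ [0.9, -0.6, 0.4] + hrv @ [0.1, 0.05]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # outcome (e.g., transfusion)

        idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)
        for name, X in [("vitals only", vitals), ("vitals + HRV", np.hstack([vitals, hrv]))]:
            model = LogisticRegression().fit(X[idx_tr], y[idx_tr])
            auc = roc_auc_score(y[idx_te], model.predict_proba(X[idx_te])[:, 1])
            print(f"{name}: AUC = {auc:.3f}")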

  12. Comparison of administrative and survey data for estimating vitamin A supplementation and deworming coverage of children under five years of age in Sub-Saharan Africa.

    PubMed

    Janmohamed, Amynah; Doledec, David

    2017-07-01

    To compare administrative coverage data with results from household coverage surveys for vitamin A supplementation (VAS) and deworming campaigns conducted during 2010-2015 in 12 African countries. Paired t-tests examined differences between administrative and survey coverage for 52 VAS and 34 deworming dyads. Independent t-tests measured VAS and deworming coverage differences between data sources for door-to-door and fixed-site delivery strategies and VAS coverage differences between 6- to 11-month and 12- to 59-month age groups. For VAS, administrative coverage was higher than survey estimates in 47 of 52 (90%) campaign rounds, with a mean difference of 16.1% (95% CI: 9.5-22.7; P < 0.001). For deworming, administrative coverage exceeded survey estimates in 31 of 34 (91%) comparisons, with a mean difference of 29.8% (95% CI: 16.9-42.6; P < 0.001). Mean ± SD differences in coverage between administrative and survey data were 12.2% ± 22.5% for the door-to-door delivery strategy and 25.9% ± 24.7% for the fixed-site model (P = 0.06). For deworming, mean ± SD differences in coverage between data sources were 28.1% ± 43.5% and 33.1% ± 17.9% for door-to-door and fixed-site distribution, respectively (P = 0.64). VAS administrative coverage was higher than survey estimates in 37 of 49 (76%) comparisons for the 6- to 11-month age group and 45 of 48 (94%) comparisons for the 12- to 59-month age group. Reliance on health facility data alone for calculating VAS and deworming coverage may mask low coverage and prevent measures to improve programmes. Countries should periodically validate administrative coverage estimates with population-based methods. © 2017 John Wiley & Sons Ltd.
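
    The paired t-tests described above compare two coverage estimates for the same campaign round; the minimal sketch below applies the same test to invented per-round coverage figures, purely to show the calculation.

        # Illustrative only: paired t-test of administrative vs. survey coverage
        # across hypothetical campaign rounds (percent coverage per round).
        from scipy import stats

        administrative = [98, 95, 102, 88, 97, 110, 93, 99]   # assumed values
        survey         = [81, 79,  90, 72, 85,  88, 80, 84]   # assumed values
        t_stat, p_value = stats.ttest_rel(administrative, survey)
        diffs = [a - s for a, s in zip(administrative, survey)]
        print(f"mean difference = {sum(diffs) / len(diffs):.1f} points, "
              f"t = {t_stat:.2f}, p = {p_value:.4f}")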

  13. What did the Go4Health policy research project contribute to the policy discourse on the sustainable development goals? A reflexive review.

    PubMed

    Te, Vannarath; Floden, Nadia; Hussain, Sameera; Brolan, Claire E; Hill, Peter S

    2018-05-16

    In 2012, the European Commission funded Go4Health (Goals and Governance for Global Health), a consortium of 13 academic research and human rights institutions from both the Global North and South, to track the evolution of the Sustainable Development Goals (SDGs) and provide ongoing policy advice. This paper reviews the research outputs published between 2012 and 2016, analyzing the thematic content of the publications and their influence on global health and development discourse through citation metrics. Analysis of the 54 published papers showed 6 dominant themes related to the SDGs: the formulation process for the SDG health goal; the right to health; Universal Health Coverage; voices of marginalized peoples; global health governance; and the integration of health across the other SDGs. The papers combined advocacy, particularly for the right to health and its potential embodiment in Universal Health Coverage, with qualitative research and analysis of policy and stakeholders. Go4Health's publications on the right to health, global health governance and the voices of marginalized peoples in relation to the SDGs represented a substantial proportion of papers published for these topics. Go4Health analysis of the right to health clarified its elements and their application to Universal Health Coverage, global health governance, financing the SDGs and access to medicines. Qualitative research identified correspondence between perceptions of marginalized peoples and right to health principles, and reluctance among multilateral organizations to explicitly represent the right to health in the goals, despite their acknowledgement of its importance. Citation metrics analysis confirmed an average of 5.5 citations per paper, with a field-weighted citation impact of 2.24 for the 43 peer-reviewed publications. Citations in the academic literature and UN policy documents confirmed the impact of Go4Health on the global discourse around the SDGs, but within the Go4Health consortium there was also evidence of two epistemological frames of analysis, normative legal analysis and empirical research, that created productive synergies in unpacking the health SDG and the right to health. The analysis offers clear evidence for the contribution of funded programmatic research, such as the Go4Health project, to the global health discourse.

  14. Launch Vehicle Production and Operations Cost Metrics

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Neeley, James R.; Blackburn, Ruby F.

    2014-01-01

    Traditionally, launch vehicle cost has been evaluated based on $/kg to orbit. This metric is calculated based on assumptions not typically met by a specific mission. These assumptions include the specified orbit, whether Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), or both. The metric also assumes the payload utilizes the full lift mass of the launch vehicle, which is rarely true even with secondary payloads.1,2,3 Other approaches for cost metrics have been evaluated, including unit cost of the launch vehicle and an approach that considers the full program production and operations costs.4 Unit cost considers the variable cost of the vehicle, and the definition of variable costs is discussed. The full program production and operations costs include both the variable costs and the manufacturing base. This metric also distinguishes operations costs from production costs, including pre-flight operational testing. Operations costs also consider the costs of flight operations, including control center operation and maintenance. Each of these three cost metrics shows different sensitivities to various aspects of launch vehicle cost drivers. The comparison of these metrics provides the strengths and weaknesses of each, yielding an assessment useful for cost metric selection for launch vehicle programs.

  15. Persuasive communication: A theoretical model for changing the attitude of preservice elementary teachers toward metric conversion

    NASA Astrophysics Data System (ADS)

    Shrigley, Robert L.

    This study was based on Hovland's four-part statement, Who says what to whom with what effect, the rationale for persuasive communication, a theoretical model for modifying attitudes. Part I was a survey of 139 preservice elementary teachers from which were generated the more credible characteristics of metric instructors, a central element in the who component of Hovland's model. They were: (1) background in mathematics and science, (2) fluency in metrics, (3) capability of thinking metrically, (4) a record of excellent teaching, (5) previous teaching of metric measurement to children, (6) responsibility for teaching metric content in methods courses, and (7) an open enthusiasm for metric conversion. Part II was a survey of 45 mathematics educators where belief statements were synthesized for the what component of Hovland's model. It found that math educators support metric measurement because: (1) it is consistent with our monetary system; (2) the conversion of units is easier in metric than in English; (3) it is easier to teach and easier to learn than English measurement; there is less need for common fractions; (4) most nations use metric measurement; scientists have used it for decades; (5) American industry has begun to use it; (6) metric measurement will facilitate world trade and communication; and (7) American children will need it as adults; educational agencies are mandating it. With the who and what of Hovland's four-part statement defined, educational researchers now have baseline data to use in testing experimentally the effect of persuasive communication on the attitude of preservice teachers toward metrication.

  16. Sensitivity of the lane change test as a measure of in-vehicle system demand.

    PubMed

    Young, Kristie L; Lenné, Michael G; Williamson, Amy R

    2011-05-01

    The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. Maximizing sensitivity of the psychomotor vigilance test (PVT) to sleep loss.

    PubMed

    Basner, Mathias; Dinges, David F

    2011-05-01

    The psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects. Repeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD). Controlled laboratory environment. 74 healthy subjects (34 female), aged 22-45 years. TSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects). In a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59-1.94) and PSD (0.88-1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD. Due to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses.
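
    The effect sizes quoted above rank metrics by how strongly a paired manipulation (rested vs. sleep deprived) separates them; a common paired-sample effect size is sketched below on simulated lapse counts, which are assumptions rather than the study's data.

        # Illustrative sketch: paired-sample effect size (Cohen's d computed on the
        # within-subject differences), used to rank metrics by sensitivity.
        import numpy as np

        def paired_cohens_d(baseline, manipulated):
            diffs = np.asarray(manipulated) - np.asarray(baseline)
            return diffs.mean() / diffs.std(ddof=1)

        rng = np.random.default_rng(2)
        lapses_rested   = rng.poisson(1.5, size=30)                 # assumed counts
        lapses_deprived = lapses_rested + rng.poisson(4.0, size=30)
        print(f"effect size (lapses): {paired_cohens_d(lapses_rested, lapses_deprived):.2f}")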

  18. Metric analysis and data validation across FORTRAN projects

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.; Phillips, Tsai-Yun

    1983-01-01

    The desire to predict the effort in developing or explaining the quality of software has led to the proposal of several metrics. As a step toward validating these metrics, the Software Engineering Laboratory (SEL) has analyzed the software science metrics, cyclomatic complexity, and various standard program measures for their relation to effort (including design through acceptance testing), development errors (both discrete and weighted according to the amount of time to locate and fix), and one another. The data investigated are collected from a project FORTRAN environment and examined across several projects at once, within individual projects and by reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither software science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.
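
    Metric-validation studies of this kind typically correlate candidate metrics with recorded effort module by module; the sketch below does so with rank correlation on invented module data, only to illustrate the analysis style.

        # Illustrative sketch: correlating candidate size/complexity metrics with
        # development effort across modules; the module data are invented.
        import numpy as np
        from scipy import stats

        lines_of_code = np.array([120, 340, 560, 90, 410, 780, 220, 150])
        cyclomatic    = np.array([  8,  21,  35,  6,  25,  48,  14,   9])
        effort_hours  = np.array([ 10,  30,  55,  7,  38,  80,  18,  12])

        for name, metric in [("SLOC", lines_of_code), ("cyclomatic", cyclomatic)]:
            rho, p = stats.spearmanr(metric, effort_hours)
            print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")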

  19. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  20. USING BROAD-SCALE METRICS TO DEVELOP INDICATORS OF WATERSHED VULNERABILITY IN THE OZARK MOUNTAINS (USA)

    EPA Science Inventory

    Multiple broad-scale landscape metrics were tested as potential indicators of total phosphorus (TP) concentration, total ammonia (TA) concentration, and Escherichia coli (E. coli) bacteria count, among 244 sub-watersheds in the Ozark Mountains (USA). Indicator models were develop...

  1. Immunization coverage among Hispanic ancestry, 2003 National Immunization Survey.

    PubMed

    Darling, Natalie J; Barker, Lawrence E; Shefer, Abigail M; Chu, Susan Y

    2005-12-01

    The Hispanic population is increasing and heterogeneous (Hispanic refers to persons of Spanish, Hispanic, or Latino descent). The objective was to examine immunization rates among Hispanic ancestry groups for the 4:3:1:3:3 series (≥4 doses diphtheria, tetanus toxoids, and pertussis vaccine; ≥3 doses poliovirus vaccine; ≥1 dose measles-containing vaccine; ≥3 doses Haemophilus influenzae type b vaccine; and ≥3 doses hepatitis B vaccine). The National Immunization Survey measures immunization coverage among 19- to 35-month-old U.S. children. Coverage was compared from combined 2001-2003 data among Hispanics and non-Hispanic whites using t-tests, and among Hispanic ancestry groups using a chi-square test. Hispanics were categorized as Mexican, Mexican American, Central American, South American, Puerto Rican, Cuban, Spanish Caribbean (primarily Dominican Republic), other, and multiple ancestry. Children of Hispanic ancestry increased from 21% in 1999 to 25% in 2003. These Hispanic children were less well immunized than non-Hispanic whites (77.0% ±2.1% [95% confidence interval] compared with 82.5% ±1.1% [95% CI] in 2003). Immunization coverage did not vary significantly among Hispanics of varying ancestries (p=0.26); however, there was substantial geographic variability. In some areas, immunization coverage among Hispanics was significantly higher than among non-Hispanic whites. Hispanic children were less well immunized than non-Hispanic whites; however, coverage varied notably by geographic area. Although a chi-square test found no significant differences in coverage among Hispanic ancestries, the range of coverage, from 79.2% ±5.1% for Cuban Americans to 72.1% ±2.4% for those of Mexican descent, may suggest a need for improved and more localized monitoring among Hispanic communities.
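
    A chi-square test of coverage across ancestry groups, as used above, compares counts of immunized and non-immunized children in a contingency table; the sketch below shows the calculation on invented counts.

        # Illustrative only: chi-square test of homogeneity for immunization
        # coverage across hypothetical ancestry groups.
        from scipy import stats

        # rows = groups, columns = [up to date, not up to date]; counts are assumptions
        table = [
            [310,  90],
            [280, 120],
            [150,  55],
            [ 95,  30],
        ]
        chi2, p, dof, expected = stats.chi2_contingency(table)
        print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")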

  2. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
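
    To illustrate the kind of coverage-driven test generation described above (though not the Uppaal-based tool itself), the sketch below derives event sequences that achieve edge coverage of a small, hypothetical GUI state machine.

        # Illustrative sketch: edge-coverage test generation for a toy state machine.
        from collections import deque

        # Hypothetical GUI model: state -> {event: next_state}
        model = {
            "Home":     {"open_menu": "Menu", "start": "Dosing"},
            "Menu":     {"back": "Home", "settings": "Settings"},
            "Settings": {"back": "Menu"},
            "Dosing":   {"confirm": "Home", "cancel": "Home"},
        }

        def edge_covering_tests(model, start):
            """Return event sequences from `start` that together exercise every transition."""
            uncovered = {(s, e) for s, trans in model.items() for e in trans}
            tests = []
            while uncovered:
                # Breadth-first search for the shortest sequence ending on an uncovered edge.
                queue, seen, path = deque([(start, [])]), {start}, None
                while queue and path is None:
                    state, events = queue.popleft()
                    for event, nxt in model[state].items():
                        if (state, event) in uncovered:
                            path = events + [event]
                            break
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, events + [event]))
                if path is None:          # remaining edges unreachable from start
                    break
                tests.append(path)
                state = start             # mark every edge along the chosen path as covered
                for event in path:
                    uncovered.discard((state, event))
                    state = model[state][event]
            return tests

        for test in edge_covering_tests(model, "Home"):
            print(" -> ".join(test))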

  3. Evaluation of solid particle number and black carbon for very low particulate matter emissions standards in light-duty vehicles.

    PubMed

    Chang, M-C Oliver; Shields, J Erin

    2017-06-01

    To reliably measure at the low particulate matter (PM) levels needed to meet California's Low Emission Vehicle (LEV III) 3- and 1-mg/mile PM standards, various approaches other than gravimetric measurement have been suggested for testing purposes. In this work, a feasibility study of solid particle number (SPN, d50 = 23 nm) and black carbon (BC) as alternatives to gravimetric PM mass was conducted, based on the relationship of these two metrics to gravimetric PM mass, as well as the variability of each of these metrics. More than 150 Federal Test Procedure (FTP-75) or Supplemental Federal Test Procedure (US06) tests were conducted on 46 light-duty vehicles, including port-fuel-injected and direct-injected gasoline vehicles, as well as several light-duty diesel vehicles equipped with diesel particle filters (LDD/DPF). For FTP tests, emission variability of gravimetric PM mass was found to be slightly less than that of either SPN or BC, whereas the opposite was observed for US06 tests. Emission variability of PM mass for LDD/DPF was higher than that of both SPN and BC, primarily because of higher PM mass measurement uncertainties (background and precision) near or below 0.1 mg/mile. While strong correlations were observed from both SPN and BC to PM mass, the slopes are dependent on engine technologies and driving cycles, and the proportionality between the metrics can vary over the course of the test. Replacement of the LEV III PM mass emission standard with another measurement metric may imperil the effectiveness of emission reduction, as a correlation-based relationship may evolve over future technologies for meeting stringent greenhouse standards. Solid particle number and black carbon were suggested in place of PM mass for the California LEV III 1-mg/mile FTP standard. Their equivalence, proportionality, and emission variability in comparison to PM mass, based on the large light-duty vehicle fleet examined, are dependent on engine technologies and driving cycles. Such empirically derived correlations highlight the limitation of using these metrics for enforcement and certification standards as vehicle combustion and after-treatment technologies advance.

  4. Quality of Service Metrics in Wireless Sensor Networks: A Survey

    NASA Astrophysics Data System (ADS)

    Snigdh, Itu; Gupta, Nisha

    2016-03-01

    Wireless ad hoc networks are characterized by autonomous nodes communicating with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. This paper presents a systematic approach to the interdependencies among the various factors that affect and constrain wireless sensor networks. It elaborates the quality-of-service parameters, in terms of deployment methods, coverage, and connectivity, that affect the lifetime of the network and that have been addressed in the literature to date. It also discusses essential elements that are important in determining the varied quality of service achieved but that have not yet been duly focused upon.

  5. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  6. Assessing Sustainability When Data Availability Limits Real-Time Estimates: Using Near-Time Indicators to Extend Sustainability Metrics

    EPA Science Inventory

    We produced a scientifically defensible methodology to assess whether a regional system is on a sustainable path. The approach required readily available data, metrics applicable to the relevant scale, and results useful to decision makers. We initiated a pilot project to test ...

  7. Determination of selection criteria for spray drift reduction from atomization data

    USDA-ARS?s Scientific Manuscript database

    When testing and evaluating drift reduction technologies (DRT), there are different metrics that can be used to determine if the technology reduces drift as compared to a reference system. These metrics can include reduction in percent of fine drops, measured spray drift from a field trial, or comp...

  8. The Consequences of Using One Assessment System to Pursue Two Objectives

    ERIC Educational Resources Information Center

    Neal, Derek

    2013-01-01

    Education officials often use one assessment system both to create measures of student achievement and to create performance metrics for educators. However, modern standardized testing systems are not designed to produce performance metrics for teachers or principals. They are designed to produce reliable measures of individual student achievement…

  9. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class naturally comprises the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify if a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems, and a qualitative rating of each method's performance is provided separately for each aspect: accuracy/efficiency, resilience against negative fluxes, and possession of the thick diffusion limit. The choice of the most efficient method depends on the utilized error norm: in Lp error norms higher order methods such as the AHOTN method of order three perform best, while for computing integral quantities the linear nodal (LN) method is most efficient. The most resilient method against occurrence of negative fluxes is the simple corner balance (SCB) method. A validation of the quantitative decision metric is performed based on the NEA box-in-box suite of test problems. The validation exercise comprises two stages: first, prediction of the contending methods' performance via the decision metric and, second, computation of the actual scores based on data obtained from the NEA benchmark problem. The comparison of predicted and actual scores via a penalty function (ratio of the predicted best performer's score to the actual best score) completes the validation exercise. It is found that the decision metric is capable of very accurate predictions (penalty < 10%) in more than 83% of the considered cases and features penalties up to 20% for the remaining cases. An exception to this rule is the third test case, NEA-III, intentionally set up to incorporate a poor match of the benchmark with the "data" problems. However, even under these worst-case conditions the decision metric's suggestions are never detrimental. Suggestions for improving the decision metric's accuracy are to increase the pool of employed data, to refine the mapping of a given configuration to a case in the database, and to better characterize the desired target quantities.
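
    The aggregation rule stated above (a weighted geometric mean of single performance indicators) can be written compactly; the sketch below uses hypothetical indicator names, normalized scores, and weights purely to show the arithmetic.

        # Illustrative only: fitness score as a weighted geometric mean,
        # score = exp( sum_i w_i * ln(x_i) / sum_i w_i ).
        import math

        def fitness_score(indicators, weights):
            total_w = sum(weights.values())
            log_sum = sum(w * math.log(indicators[name]) for name, w in weights.items())
            return math.exp(log_sum / total_w)

        # Hypothetical normalized scores (higher is better) for one candidate method.
        indicators = {"accuracy": 0.92, "runtime": 0.60, "positivity": 1.00, "diffusion_limit": 0.80}
        weights    = {"accuracy": 3.0,  "runtime": 2.0,  "positivity": 1.0,  "diffusion_limit": 1.0}
        print(f"fitness = {fitness_score(indicators, weights):.3f}")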

  10. The Profile Envision and Splicing Tool (PRESTO): Developing an Atmospheric Wind Analysis Tool for Space Launch Vehicles Using Python

    NASA Technical Reports Server (NTRS)

    Orcutt, John M.; Barbre, Robert E., Jr.; Brenton, James C.; Decker, Ryan K.

    2017-01-01

    Launch vehicle programs require vertically complete atmospheric profiles. Many systems at the ER make the necessary measurements, but all have different EVR, vertical coverage, and temporal coverage. The MSFC Natural Environments Branch developed a tool to create a vertically complete profile from multiple inputs using Python. Forward work: finish formal testing (acceptance testing and end-to-end testing) and formal release.

  11. Field Test of Expedient Pavement Repairs (Test Items 16-35).

    DTIC Science & Technology

    1980-11-01

    [Scanned-report front matter only: list-of-figures entries ("Surface Profiles After Repairs, Item 34"; "Cracking of Bond, Item 34"), a list-of-tables entry ("Summary of Test Results"), and an abbreviations section; no abstract text is available for this record.]

  12. Maximizing Sensitivity of the Psychomotor Vigilance Test (PVT) to Sleep Loss

    PubMed Central

    Basner, Mathias; Dinges, David F.

    2011-01-01

    Study Objectives: The psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects. Design: Repeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD). Setting: Controlled laboratory environment. Participants: 74 healthy subjects (34 female), aged 22–45 years. Interventions: TSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects). Measurements and Results: In a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59–1.94) and PSD (0.88–1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD. Conclusions: Due to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses. Citation: Basner M; Dinges DF. Maximizing sensitivity of the psychomotor vigilance test (PVT) to sleep loss. SLEEP 2011;34(5):581-591. PMID:21532951

  13. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slide and standard deviation are considered. The metrics provide a means of evaluating accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (frame positioning and frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
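
    The true-positive and false-negative style metrics described above reduce to counting pixels of a predicted mask against a manual ground-truth mask; the sketch below shows one such per-image calculation on synthetic arrays (the masks are invented, and the framework's actual implementation lives in 3D Slicer).

        # Illustrative sketch: per-image rates for a binary segmentation vs. ground truth.
        import numpy as np

        def segmentation_rates(predicted, ground_truth):
            predicted, ground_truth = predicted.astype(bool), ground_truth.astype(bool)
            tp = np.logical_and(predicted, ground_truth).sum()
            fn = np.logical_and(~predicted, ground_truth).sum()
            fp = np.logical_and(predicted, ~ground_truth).sum()
            tn = np.logical_and(~predicted, ~ground_truth).sum()
            tpr = tp / (tp + fn)    # correctly segmented bone
            tnr = tn / (tn + fp)    # correctly segmented boneless region
            return tpr, tnr

        rng = np.random.default_rng(3)
        truth = np.zeros((64, 64), dtype=bool)
        truth[20:30, 10:50] = True                       # hypothetical bone surface
        pred = truth.copy()
        pred[20:22, :] = False                           # simulated under-segmentation
        pred |= rng.random((64, 64)) < 0.01              # simulated false positives
        tpr, tnr = segmentation_rates(pred, truth)
        print(f"TPR = {tpr:.3f}, TNR = {tnr:.3f}")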

  14. Seasonal climate signals from multiple tree ring metrics: A case study of Pinus ponderosa in the upper Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Dannenberg, Matthew P.; Wise, Erika K.

    2016-04-01

    Projected changes in the seasonality of hydroclimatic regimes are likely to have important implications for water resources and terrestrial ecosystems in the U.S. Pacific Northwest. The tree ring record, which has frequently been used to position recent changes in a longer-term context, typically relies on signals embedded in the total ring width of tree rings. Additional climatic inferences at a subannual temporal scale can be made using alternative tree ring metrics such as earlywood and latewood widths and the density of tree ring latewood. Here we examine seasonal precipitation and temperature signals embedded in total ring width, earlywood width, adjusted latewood width, and blue intensity chronologies from a network of six Pinus ponderosa sites in and surrounding the upper Columbia River Basin of the U.S. Pacific Northwest. We also evaluate the potential for combining multiple tree ring metrics together in reconstructions of past cool- and warm-season precipitation. The common signal among all metrics and sites is related to warm-season precipitation. Earlywood and latewood widths differ primarily in their sensitivity to conditions in the year prior to growth. Total and earlywood widths from the lowest elevation sites also reflect cool-season moisture. Effective correlation analyses and composite-plus-scale tests suggest that combining multiple tree ring metrics together may improve reconstructions of warm-season precipitation. For cool-season precipitation, total ring width alone explains more variance than any other individual metric or combination of metrics. The composite-plus-scale tests show that variance-scaled precipitation reconstructions in the upper Columbia River Basin may be asymmetric in their ability to capture extreme events.

  15. Repeatability of FDG PET/CT metrics assessed in free breathing and deep inspiration breath hold in lung cancer patients.

    PubMed

    Nygård, Lotte; Aznar, Marianne C; Fischer, Barbara M; Persson, Gitte F; Christensen, Charlotte B; Andersen, Flemming L; Josipovic, Mirjana; Langer, Seppo W; Kjær, Andreas; Vogelius, Ivan R; Bentzen, Søren M

    2018-01-01

    We measured the repeatability of FDG PET/CT uptake metrics when acquiring scans in free breathing (FB) conditions compared with deep inspiration breath hold (DIBH) for locally advanced lung cancer. Twenty patients were enrolled in this prospective study. Two FDG PET/CT scans per patient were conducted a few days apart and in two breathing conditions (FB and DIBH), resulting in four scans per patient. Up to four FDG PET avid lesions per patient were contoured. The following FDG metrics were measured in all lesions and in all four scans: standardized uptake value (SUV) metrics SUVpeak, SUVmax, and SUVmean, metabolic tumor volume (MTV), and total lesion glycolysis (TLG), based on an isocontour of 50% of SUVmax. FDG PET avid volumes were delineated by a nuclear medicine physician. The gross tumor volumes (GTV) were contoured on the corresponding CT scans. Nineteen patients were available for analysis. Test-retest standard deviations of FDG uptake metrics in FB/DIBH were: SUVpeak 16.2%/16.5%; SUVmax 18.2%/22.1%; SUVmean 18.3%/22.1%; TLG 32.4%/40.5%. DIBH compared with FB resulted in higher values, with mean differences of 12.6% in SUVmax, 4.4% in SUVpeak, and 11.9% in SUVmean. MTV, TLG, and GTV were all significantly smaller on day 1 in DIBH compared with FB. However, the differences between metrics under FB and DIBH were in all cases smaller than 1 SD of the day-to-day repeatability. FDG acquisition in DIBH does not have a clinically relevant impact on the uptake metrics and does not improve the test-retest repeatability of FDG uptake metrics in lung cancer patients.

  16. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    PubMed Central

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005

  17. Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.

    PubMed

    Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B

    2017-12-01

    In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus their efforts; chest pain, Kawasaki Disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics were able to be developed for use in the ACC's quality efforts for ambulatory practice. © 2017 Wiley Periodicals, Inc.

  18. A direct-gradient multivariate index of biotic condition

    USGS Publications Warehouse

    Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.

    2012-01-01

    Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.

  19. Combining ground-based measurements and satellite-based spectral vegetation indices to track biomass accumulation in post-fire chaparral

    NASA Astrophysics Data System (ADS)

    Uyeda, K. A.; Stow, D. A.; Roberts, D. A.; Riggan, P. J.

    2015-12-01

    Multi-temporal satellite imagery can provide valuable information on patterns of vegetation growth over large spatial extents and long time periods, but corresponding ground-referenced biomass information is often difficult to acquire, especially at an annual scale. In this study, I test the relationship between annual biomass estimated using shrub growth rings and metrics of seasonal growth derived from Moderate Resolution Imaging Spectroradiometer (MODIS) spectral vegetation indices (SVIs) for a small area of southern California chaparral to evaluate the potential for mapping biomass at larger spatial extents. The site had most recently burned in 2002, and annual biomass accumulation measurements were available from years 5 - 11 post-fire. I tested metrics of seasonal growth using six SVIs (Normalized Difference Vegetation Index, Enhanced Vegetation Index, Soil Adjusted Vegetation Index, Normalized Difference Water Index, Normalized Difference Infrared Index 6, and Vegetation Atmospherically Resistant Index). While additional research would be required to determine which of these metrics and SVIs are most promising over larger spatial extents, several of the seasonal growth metrics/ SVI combinations have a very strong relationship with annual biomass, and all SVIs have a strong relationship with annual biomass for at least one of the seasonal growth metrics.
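
    As a simple illustration of the spectral vegetation indices mentioned above, the sketch below computes NDVI from red and near-infrared reflectance for one pixel across a season and sums it as a crude seasonal-growth metric; the reflectance values are assumptions, not MODIS data.

        # Illustrative only: NDVI = (NIR - red) / (NIR + red) and a simple seasonal metric.
        import numpy as np

        def ndvi(nir, red):
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red)

        # Hypothetical 16-day composites across one growing season for a single pixel.
        red = np.array([0.08, 0.07, 0.06, 0.05, 0.05, 0.06, 0.07, 0.08])
        nir = np.array([0.25, 0.30, 0.38, 0.45, 0.44, 0.40, 0.33, 0.28])
        series = ndvi(nir, red)
        print(series.round(2), f"summed NDVI over season = {series.sum():.2f}")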

  20. An index of ecological integrity for the Mississippi alluvial plain ecoregion: index development and relations to selected landscape variables

    USGS Publications Warehouse

    Justus, B.G.

    2003-01-01

    Macroinvertebrate community, fish community, water-quality, and habitat data collected from 36 sites in the Mississippi Alluvial Plain Ecoregion during 1996-98 by the U.S. Geological Survey were considered for a multimetric test of ecological integrity. Test metrics were correlated to site scores of a Detrended Correspondence Analysis of the fish community (the biological community that was the most statistically significant for indicating ecological conditions in the ecoregion), and six metrics having the highest correlations (four fish metrics, one chemical metric [total ammonia plus organic nitrogen], and one physical metric [turbidity]) were selected for the index. Index results indicate that sites in the northern half of the study unit (in Arkansas and Missouri) were less degraded than sites in the southern half of the study unit (in Louisiana and Mississippi). Of 148 landscape variables evaluated, the percentage of Holocene deposits and cotton insecticide rates had the highest correlations with index of ecological integrity results. Sites having the highest (best) index scores had the lowest percentages of Holocene deposits and the lowest cotton insecticide use rates, indicating that factors relating to the amount of Holocene deposits and cotton insecticide use rates partially explain differences in ecological conditions throughout the Mississippi Alluvial Plain Ecoregion.

  1. A comparison of color fidelity metrics for light sources using simulation of color samples under lighting conditions

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo

    2017-09-01

    Color fidelity has been used as one of the indices to evaluate the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on a comparison of color fidelity metrics in terms of their agreement with human visual assessments. To visually evaluate the color fidelity of light sources, we built a simulator that reproduces color samples under given lighting conditions. Eighteen color samples of the Macbeth color checker under each test light source and its corresponding reference illuminant are simulated and displayed on a well-characterized monitor. With only the spectrum of a test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics: CRI and IES TM-30-15 (Rf), proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with the visual assessment.

  2. "Publish or Perish" as citation metrics used to analyze scientific output in the humanities: International case studies in economics, geography, social sciences, philosophy, and history.

    PubMed

    Baneyx, Audrey

    2008-01-01

    Traditionally, the most commonly used source of bibliometric data is the Thomson ISI Web of Knowledge, in particular the (Social) Science Citation Index and the Journal Citation Reports, which provide the yearly Journal Impact Factors. This database, used for the evaluation of researchers, is not advantageous in the humanities, mainly because books, conference papers, and non-English journals, which are an important part of scientific activity, are not (well) covered. This paper presents the use of an alternative source of data, Google Scholar, and its benefits in calculating citation metrics in the humanities. Because of its broader range of data sources, the use of Google Scholar generally results in more comprehensive citation coverage in the humanities. This presentation compares and analyzes international case studies drawn from ISI Web of Knowledge and Google Scholar. The fields of economics, geography, social sciences, philosophy, and history are used to illustrate the differences in results between these two databases. To search for relevant publications in the Google Scholar database, the use of "Publish or Perish" and of CleanPoP, which the author developed to clean the results, is compared.

  3. Evaluation of satellite-retrieved extreme precipitation using gauge observations

    NASA Astrophysics Data System (ADS)

    Lockhoff, M.; Zolina, O.; Simmer, C.; Schulz, J.

    2012-04-01

    Precipitation extremes have already been intensively studied using rain gauge datasets. Their main advantage is that they represent a direct measurement with relatively high temporal coverage. Their main limitation, however, is their poor spatial coverage and thus low representativeness in many parts of the world. In contrast, satellites provide global coverage, and there are now data sets available that are, on one hand, long enough to be used for extreme value analysis and, on the other hand, of the necessary spatial and temporal resolution to capture extremes. However, satellite observations provide only an indirect means of determining precipitation, and there are many potential observational and methodological weaknesses, in particular over land surfaces, that may cast doubt on their usability for the analysis of precipitation extremes. By comparing basic climatological metrics of precipitation (totals, intensities, number of wet days) as well as the corresponding characteristics of PDFs, and absolute and relative extremes of satellite and observational data, this paper aims at assessing to what extent satellite products are suitable for analysing extreme precipitation events. In a first step the assessment focuses on Europe, taking into consideration the various satellite products available, e.g. data sets provided by the Global Precipitation Climatology Project (GPCP). First results indicate that satellite-based estimates not only represent monthly averaged precipitation very similarly to rain gauge estimates but also capture the day-to-day occurrence fairly well. Larger differences can be found, though, when looking at the corresponding intensities.

  4. The influence of soil properties and nutrients on conifer forest growth in Sweden, and the first steps in developing a nutrient availability metric

    NASA Astrophysics Data System (ADS)

    Van Sundert, Kevin; Horemans, Joanna A.; Stendahl, Johan; Vicca, Sara

    2018-06-01

    The availability of nutrients is one of the factors that regulate terrestrial carbon cycling and modify ecosystem responses to environmental changes. Nonetheless, nutrient availability is often overlooked in climate-carbon cycle studies because it depends on the interplay of various soil factors that would ideally be comprised into metrics applicable at large spatial scales. Such metrics do not currently exist. Here, we use a Swedish forest inventory database that contains soil data and tree growth data for > 2500 forests across Sweden to (i) test which combination of soil factors best explains variation in tree growth, (ii) evaluate an existing metric of constraints on nutrient availability, and (iii) adjust this metric for boreal forest data. With (iii), we thus aimed to provide an adjustable nutrient metric, applicable for Sweden and with potential for elaboration to other regions. While taking into account confounding factors such as climate, N deposition, and soil oxygen availability, our analyses revealed that the soil organic carbon concentration (SOC) and the ratio of soil carbon to nitrogen (C : N) were the most important factors explaining variation in normalized (climate-independent) productivity (mean annual volume increment - m3 ha-1 yr-1) across Sweden. Normalized forest productivity was significantly negatively related to the soil C : N ratio (R2 = 0.02-0.13), while SOC exhibited an empirical optimum (R2 = 0.05-0.15). For the metric, we started from a (yet unvalidated) metric for constraints on nutrient availability that was previously developed by the International Institute for Applied Systems Analysis (IIASA - Laxenburg, Austria) for evaluating potential productivity of arable land. This IIASA metric requires information on soil properties that are indicative of nutrient availability (SOC, soil texture, total exchangeable bases - TEB, and pH) and is based on theoretical considerations that are also generally valid for nonagricultural ecosystems. However, the IIASA metric was unrelated to normalized forest productivity across Sweden (R2 = 0.00-0.01) because the soil factors under consideration were not optimally implemented according to the Swedish data, and because the soil C : N ratio was not included. Using two methods (each one based on a different way of normalizing productivity for climate), we adjusted this metric by incorporating soil C : N and modifying the relationship between SOC and nutrient availability in view of the observed relationships across our database. In contrast to the IIASA metric, the adjusted metrics explained some variation in normalized productivity in the database (R2 = 0.03-0.21; depending on the applied method). A test for five manually selected local fertility gradients in our database revealed a significant and stronger relationship between the adjusted metrics and productivity for each of the gradients (R2 = 0.09-0.38). This study thus shows for the first time how nutrient availability metrics can be evaluated and adjusted for a particular ecosystem type, using a large-scale database.

  5. Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis

    DTIC Science & Technology

    2014-10-01

    test events, quality of service and other key metrics of military systems and networks are evaluated. Network data captured in standard flow formats...mentioned here. The Ozone Widget Framework (Next Century, n.d.) has proven to be very useful. Also, an extensive, clean, and optimized JavaScript ...library for visualizing many types of data can be found in D3–Data Driven Documents (Bostock, 2013). Quality of Service from Flow Two essential metrics of

  6. Treatment of gingival recession defects with a coronally advanced flap and a xenogeneic collagen matrix: a multicenter randomized clinical trial.

    PubMed

    Jepsen, Karin; Jepsen, Søren; Zucchelli, Giovanni; Stefanini, Martina; de Sanctis, Massimo; Baldini, Nicola; Greven, Björn; Heinz, Bernd; Wennström, Jan; Cassel, Björn; Vignoletti, Fabio; Sanz, Mariano

    2013-01-01

    To evaluate the clinical outcomes of the use of a xenogeneic collagen matrix (CM) in combination with the coronally advanced flap (CAF) in the treatment of localized recession defects. In a multicentre single-blinded, randomized, controlled, split-mouth trial, 90 recessions (Miller I, II) in 45 patients received either CAF + CM or CAF alone. At 6 months, root coverage (primary outcome) was 75.29% for test and 72.66% for control defects (p = 0.169), with 36% of test and 31% of control defects exhibiting complete coverage. The increase in mean width of keratinized tissue (KT) was higher in test (from 1.97 to 2.90 mm) than in control defects (from 2.00 to 2.57 mm) (p = 0.036). Likewise, test sites had more gain in gingival thickness (GT) (0.59 mm) than control sites (0.34 mm) (p = 0.003). Larger (≥3 mm) recessions (n = 35 patients) treated with CM showed higher root coverage (72.03% versus 66.16%, p = 0.043), as well as more gain in KT and GT. CAF + CM was not superior with regard to root coverage, but enhanced gingival thickness and width of keratinized tissue when compared with CAF alone. For the coverage of larger defects, CAF + CM was more effective. © 2012 John Wiley & Sons A/S.

  7. A Geometric Approach to Modeling Microstructurally Small Fatigue Crack Formation. 2; Simulation and Prediction of Crack Nucleation in AA 7075-T651

    NASA Technical Reports Server (NTRS)

    Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.

    2010-01-01

    The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.

  8. Testing the Kerr metric with the iron line and the KRZ parametrization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ni, Yueying; Jiang, Jiachen; Bambi, Cosimo, E-mail: yyni13@fudan.edu.cn, E-mail: jcjiang12@fudan.edu.cn, E-mail: bambi@fudan.edu.cn

    The spacetime geometry around astrophysical black holes is supposed to be well approximated by the Kerr metric, but deviations from the Kerr solution are predicted in a number of scenarios involving new physics. Broad iron Kα lines are commonly observed in the X-ray spectrum of black holes and originate from X-ray fluorescence of the inner accretion disk. The profile of the iron line is sensitively affected by the spacetime geometry in the strong gravity region and can be used to test the Kerr black hole hypothesis. In this paper, we extend previous work in the literature. In particular: i) as test-metric, we employ the parametrization recently proposed by Konoplya, Rezzolla, and Zhidenko, which has a number of subtle advantages with respect to the existing approaches; ii) we perform simulations with specific X-ray missions, and we consider NuSTAR as a prototype of current observational facilities and eXTP as an example of the next generation of X-ray observatories. We find a significant difference between the constraining power of NuSTAR and eXTP. With NuSTAR, it is difficult or impossible to constrain deviations from the Kerr metric. With eXTP, in most cases we can obtain quite stringent constraints (modulo we have the correct astrophysical model).

  9. It's All Relative: A Validation of Radiation Quality Comparison Metrics

    NASA Technical Reports Server (NTRS)

    Chappell, Lori J.; Milder, Caitlin M.; Elgart, S. Robin; Semones, Edward J.

    2017-01-01

    The difference between high-LET and low-LET radiation is quantified by a measure called relative biological effectiveness (RBE). RBE is defined as the ratio of the dose of a reference radiation to that of a test radiation required to achieve the same effect level, and thus is described as an iso-effect, or dose-to-dose, ratio. A single dose point is not sufficient to calculate an RBE value; therefore, studies with only one dose point usually calculate an effect-to-effect ratio. While not formally used in radiation protection, these iso-dose values may still be informative. Shuryak, et al 2017 investigated the use of an iso-dose metric termed "radiation effects ratio" (RER) and used both RBE and RER to estimate high-LET risks. To apply RBE or RER to risk prediction, the selected metric must be uniquely defined. That is, the calculated value must be consistent within a model given a constant set of constraints and assumptions, regardless of how effects are defined using statistical transformations from raw endpoint data. We first test the RBE and the RER to determine whether they are uniquely defined using transformations applied to raw data. Then, we test whether both metrics can predict heavy ion response data after simulated effect size scaling between human populations or when converting animal to human endpoints.
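    For reference, the two ratios can be written compactly as below, where E(D) denotes the dose-response of each radiation quality; this notation is an assumption for illustration rather than the exact formulation used by the authors or by Shuryak et al. (2017).

      \[
        \mathrm{RBE} \;=\; \left.\frac{D_{\mathrm{ref}}}{D_{\mathrm{test}}}\right|_{\text{equal effect}}
        \qquad \text{(iso-effect, dose-to-dose ratio)}
      \]
      \[
        \mathrm{RER} \;=\; \left.\frac{E_{\mathrm{test}}(D)}{E_{\mathrm{ref}}(D)}\right|_{\text{equal dose } D}
        \qquad \text{(iso-dose, effect-to-effect ratio)}
      \]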

  10. Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.

    2013-01-01

    NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based counter-measures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss-of-aircraft state awareness - specifically, the loss of attitude awareness (i.e., Spatial Disorientation, SD) or the loss-of-energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological - are also discussed as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.

  11. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Huijun; Gordon, J. James; Siebers, Jeffrey V.

    2011-02-15

    Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder, and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. Results: The accuracy of coverage estimates depends on angular and radial DMD sampling parameters ω or ω_eff and δ, as well as the employed sampling technique. Target |ΔQ| < 1% and OAR |ΔQ| < 3% can be achieved with sampling parameters ω or ω_eff = 20°, δ = 1 mm. Better accuracy (target |ΔQ| < 0.5% and OAR |ΔQ| < ∼1%) can be achieved with ω or ω_eff = 10°, δ = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Conclusions: Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with ω or ω_eff = 10° and δ = 0.5 mm should be adequate for planning purposes.
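    A minimal sketch of the dosimetric-margin computation described above: directions are distributed roughly uniformly over the sphere and, along each direction, the margin is found by stepping radially in increments of δ until the specified isodose is crossed. The dose model, the structure point, and all names are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def isotropic_directions(n):
          """Return n unit vectors approximately uniform on the sphere (Fibonacci lattice)."""
          i = np.arange(n) + 0.5
          phi = np.arccos(1.0 - 2.0 * i / n)          # polar angle
          theta = np.pi * (1.0 + 5.0 ** 0.5) * i      # golden-angle azimuth
          return np.stack([np.sin(phi) * np.cos(theta),
                           np.sin(phi) * np.sin(theta),
                           np.cos(phi)], axis=1)

      def dosimetric_margin(surface_point, direction, dose_at, d_iso, delta=0.5, r_max=50.0):
          """Step outward in radial increments of delta (mm) until the isodose d_iso is crossed."""
          inside = dose_at(surface_point) >= d_iso
          r = 0.0
          while r < r_max:
              r += delta
              p = surface_point + r * direction
              if (dose_at(p) >= d_iso) != inside:     # isodose surface crossed
                  return r
          return r_max                                # no crossing found within the search range

      # Example DMD: margins from one structure point over ~400 directions (omega_eff ~ 10 deg).
      dose_at = lambda p: 79.2 * np.exp(-np.linalg.norm(p) / 60.0)   # toy dose model (assumption)
      dirs = isotropic_directions(400)
      dmd = [dosimetric_margin(np.zeros(3), d, dose_at, d_iso=75.2) for d in dirs]
      print(min(dmd), max(dmd))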

  12. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    PubMed

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric Dv exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals omega (e.g., omega = 1 degree, 2 degrees, 5 degrees, 10 degrees, 20 degrees). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment omega eff. In each direction, the DM was calculated by moving the structure in radial steps of size delta (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy deltaQ was quantified as a function of the sampling parameters omega or omega eff and delta. The accuracy of coverage estimates depends on angular and radial DMD sampling parameters omega or omega eff and delta, as well as the employed sampling technique. Target |deltaQ| < 1% and OAR |deltaQ| < 3% can be achieved with sampling parameters omega or omega eff = 20 degrees, delta = 1 mm. Better accuracy (target |deltaQ| < 0.5% and OAR |deltaQ| < approximately 1%) can be achieved with omega or omega eff = 10 degrees, delta = 0.5 mm. As the number of sampling points decreases, the isotropic sampling method maintains better accuracy than fixed angular sampling. Coverage estimates for post-planning evaluation are essential since coverage values of targets and OARs often differ from the values implied by the static margin-based plans. Finer sampling of the DMD enables more accurate assessment of the effect of geometric uncertainties on coverage estimates prior to treatment. DMD sampling with omega or omega eff = 10 degrees and delta = 0.5 mm should be adequate for planning purposes.

  13. Electrochemical and pitting corrosion resistance of AISI 4145 steel subjected to massive laser shock peening treatment with different coverage layers

    NASA Astrophysics Data System (ADS)

    Lu, J. Z.; Han, B.; Cui, C. Y.; Li, C. J.; Luo, K. Y.

    2017-02-01

    The effects of massive laser shock peening (LSP) treatment with different coverage layers on residual stress, pitting morphologies in a standard corrosive solution, and electrochemical corrosion resistance of AISI 4145 steel were investigated by pitting corrosion tests, potentiodynamic polarisation tests, and SEM observations. Results showed that massive LSP treatment markedly improves the pitting corrosion resistance of AISI 4145 steel, and that increasing the number of coverage layers gradually improves the corrosion resistance further. Massive LSP treatment with multiple layers was shown to influence pitting corrosion behaviour in a standard corrosive solution.

  14. Studying the Post-Fire Response of Vegetation in California Protected Areas with NDVI-based Pheno-Metrics

    NASA Astrophysics Data System (ADS)

    Jia, S.; Gillespie, T. W.

    2016-12-01

    Post-fire response from vegetation is determined by the intensity and timing of fires as well as the nature of local biomes. Though the field-based studies focusing on selected study sites helped to understand the mechanisms of post-fire response, there is a need to extend the analysis to a broader spatial extent with the assistance of remotely sensed imagery of fires and vegetation. Pheno-metrics, a series of variables on the growing cycle extracted from basic satellite measurements of vegetation coverage, translate the basic remote sensing measurements such as NDVI to the language of phenology and fire ecology in a quantitative form. In this study, we analyzed the rate of biomass removal after ignition and the speed of post-fire recovery in California protected areas from 2000 to 2014 with USGS MTBS fire data and USGS eMODIS pheno-metrics. NDVI drop caused by fire showed the aboveground biomass of evergreen forest was removed much slower than shrubland because of higher moisture level and greater density of fuel. In addition, the above two major land cover types experienced a greatly weakened immediate post-fire growing season, featuring a later start and peak of season, a shorter length of season, and a lower start and peak of NDVI. Such weakening was highly correlated with burn severity, and also influenced by the season of fire and the land cover type, according to our modeling between the anomalies of pheno-metrics and the difference of normalized burn ratio (dNBR). The influence generally decayed over time, but can remain high within the first 5 years after fire, mostly because of the introduction of exotic species when the native species were missing. Local-specific variables are necessary to better address the variance within the same fire and improve the outcomes of models. This study can help ecologists in validating the theories of post-fire vegetation response mechanisms and assist local fire managers in post-fire vegetation recovery.

  15. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.

    2016-02-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent data set for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total data set of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regionally representative locations that are appropriate for use in global model evaluation. There is generally good data volume since the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe with sparse coverage over the rest of the globe. This data set is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, maximum daily 8-hour average (MDA8), sum of means over 35 ppb (daily maximum 8-h; SOMO35), accumulated ozone exposure above a threshold of 40 ppbv (AOT40), and metrics related to air quality regulatory thresholds. Gridded data sets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi: 10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.
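    As an illustration of one of the gridded metrics, below is a minimal sketch of computing the maximum daily 8-hour average (MDA8) from a single site's hourly ozone series. The exact screening and averaging rules applied to this data set (e.g., minimum data-capture requirements, handling of windows that run into the next day) are not given in the record, so only the basic calculation is shown.

      import numpy as np

      def mda8(hourly_ozone):
          """Maximum daily 8-h average from a 24-element array of hourly ozone (ppb).

          Real implementations also apply data-capture rules (e.g., at least 6 of 8 hours
          valid) and handle 8-h windows that extend into the next day; both are omitted here.
          """
          o3 = np.asarray(hourly_ozone, dtype=float)
          windows = [o3[i:i + 8] for i in range(len(o3) - 7)]    # 17 windows within one day
          return max(np.nanmean(w) for w in windows)

      day = 30 + 20 * np.sin(np.linspace(0, np.pi, 24))          # toy diurnal cycle (ppb)
      print(round(mda8(day), 1))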

  16. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.; and all other contributors to the WMO GAW, EPA AQS, EPA CASTNET, CAPMoN, NAPS, AirBase, EMEP, and EANET ozone datasets

    2015-07-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent dataset for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total dataset of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regional background locations that are appropriate for use in global model evaluation. There is generally good data volume since the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe with sparse coverage over the rest of the globe. This dataset is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, maximum daily eight-hour average (MDA8), SOMO35, AOT40, and metrics related to air quality regulatory thresholds. Gridded datasets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi:10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.

  17. Power mulchers can apply hardwood bark mulch

    Treesearch

    David M. Emanuel

    1971-01-01

    Two makes of power mulchers were evaluated for their ability to apply raw or processed hardwood bark mulch for use in revegetating disturbed soils. Tests were made to determine the uniformity of bark coverage and distance to which coverage was obtained. Moisture content and particle-size distribution of the barks used were also tested to determine whether or not these...

  18. Quality assessment for color reproduction using a blind metric

    NASA Astrophysics Data System (ADS)

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2007-01-01

    This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established from psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, introducing a blind metric based on human visual system modeling. The objective results are validated by single-media and cross-media subjective tests.

  19. Evaluation of Dry Electrodes in Canine Heart Rate Monitoring.

    PubMed

    Virtanen, Juhani; Somppi, Sanni; Törnqvist, Heini; Jeyhani, Vala; Fiedler, Patrique; Gizatdinova, Yulia; Majaranta, Päivi; Väätäjä, Heli; Valldeoriola Cardó, Anna; Lekkala, Jukka; Tuukkanen, Sampo; Surakka, Veikko; Vainio, Outi; Vehkaoja, Antti

    2018-05-30

    The functionality of three dry electrocardiogram electrode constructions was evaluated by measuring canine heart rate during four different behaviors: standing, sitting, lying, and walking. The testing was repeated (n = 9) in each of the 36 scenarios with three dogs. Two of the electrodes were constructed with spring-loaded test pins, while the third electrode was a molded polymer electrode with Ag/AgCl coating. During the measurements, a specifically designed harness was used to attach the electrodes to the dogs. The performance of the electrodes was evaluated and compared in terms of heartbeat detection coverage. Coverage was computed from the measured electrocardiogram signal using a pattern-matching algorithm to extract the R-peaks and, from these, the beat-to-beat heart rate. The results show that the overall coverage ratios of the electrodes varied between 45-95% across the four activity modes. The lowest coverage was obtained for lying and walking and the highest for standing and sitting.
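    A minimal sketch of one way to score heartbeat detection coverage, assuming reference beat times (for example, from a simultaneously recorded reference ECG) and detected R-peak times are both available. The matching tolerance and this particular definition of coverage are assumptions for illustration, not the scoring procedure used in the study.

      import numpy as np

      def beat_coverage(reference_beats, detected_peaks, tol=0.15):
          """Fraction of reference beats matched by a detected R-peak within tol seconds."""
          detected = np.sort(np.asarray(detected_peaks, dtype=float))
          matched = 0
          for t in reference_beats:
              idx = np.searchsorted(detected, t)                  # nearest detected peaks
              candidates = detected[max(idx - 1, 0):idx + 1]
              if candidates.size and np.min(np.abs(candidates - t)) <= tol:
                  matched += 1
          return matched / len(reference_beats)

      ref = np.arange(0, 60, 0.6)                                  # 100 bpm reference rhythm
      det = ref + np.random.normal(0, 0.02, ref.size)              # detections with timing jitter
      det = np.delete(det, np.s_[10:20])                           # simulate ten missed beats
      print(f"coverage = {beat_coverage(ref, det):.2f}")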

  20. [Estimated mammogram coverage in Goiás State, Brazil].

    PubMed

    Corrêa, Rosangela da Silveira; Freitas-Júnior, Ruffo; Peixoto, João Emílio; Rodrigues, Danielle Cristina Netto; Lemos, Maria Eugênia da Fonseca; Marins, Lucy Aparecida Parreira; Silveira, Erika Aparecida da

    2011-09-01

    This cross-sectional study aimed to estimate mammogram coverage in the State of Goiás, Brazil, describing the supply, demand, and variations in different age groups, evaluating 98 mammography services as observational units. We estimated the mammogram rates by age group and type of health service, as well as the number of tests required to cover 70% and 100% of the target population. We assessed the association between mammograms, geographical distribution of mammography machines, type of service, and age group. Full coverage estimates, considering 100% of women in the 40-69 and 50-69-year age brackets, were 61% and 66%, of which the Brazilian Unified National Health System provided 13% and 14%, respectively. To achieve 70% coverage, 43,424 additional mammograms would be needed. All the associations showed statistically significant differences (p < 0.001). We conclude that mammogram coverage is unevenly distributed in the State of Goiás and that fewer tests are performed than required.

  1. Determination of Anaerobic Threshold by Heart Rate or Heart Rate Variability using Discontinuous Cycle Ergometry.

    PubMed

    Park, Sung Wook; Brenneman, Michael; Cooke, William H; Cordova, Alberto; Fogt, Donovan

    The purpose was to determine whether heart rate (HR) and heart rate variability (HRV) responses would reflect anaerobic threshold (AT) during a discontinuous, incremental cycle test. AT was determined by ventilatory threshold (VT). Cyclists (30.6±5.9 y; 7 males, 8 females) completed a discontinuous cycle test consisting of 7 stages (6 min each, with 3 min of rest between). Three stages were performed at power outputs (W) below those corresponding to a previously established AT, one at the W corresponding to AT, and 3 at W above those corresponding to AT. The averaged stage data for Ve, HR, and time- and frequency-domain HRV metrics were plotted versus W, and the W at the intersection of the fitted trend lines was considered each metric's "threshold" (e.g., MRRTW). The threshold W for the metrics of interest were compared with the AT W using correlation analysis and paired-sample t-tests. Several heart rate-related parameters accurately reflected AT: significant correlations (p≤0.05) were observed between the AT W and the threshold W of HR, mean RR interval (MRR), low- and high-frequency spectral energy (LF and HF, respectively), high-frequency peak (fHF), and HF×fHF. Differences between the HR or HRV metric threshold W and the AT W were less than 14 W for all subjects. The steady-state data from discontinuous protocols may allow a truer indication of steady-state physiologic stress responses and the corresponding W at AT than continuous protocols using 1-2 min exercise stages.
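    A minimal sketch of the "intersection of trend lines" step, assuming the stage data are split into below-threshold and above-threshold segments that are each fitted with a straight line. The split point, the toy data, and the use of simple least-squares fits are assumptions for illustration, not details taken from the record.

      import numpy as np

      def threshold_from_trendlines(power, metric, split_index):
          """Fit lines to the lower and upper stages and return the power at their intersection."""
          m1, b1 = np.polyfit(power[:split_index], metric[:split_index], 1)
          m2, b2 = np.polyfit(power[split_index:], metric[split_index:], 1)
          if np.isclose(m1, m2):
              raise ValueError("trend lines are parallel; no unique intersection")
          return (b2 - b1) / (m1 - m2)

      power = np.array([100, 130, 160, 190, 220, 250, 280], dtype=float)   # 7 stage power outputs (W)
      hr = np.array([110, 120, 130, 142, 158, 175, 192], dtype=float)      # toy stage-averaged HR
      print(round(threshold_from_trendlines(power, hr, split_index=4), 1)) # estimated threshold W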

  2. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovements, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  3. Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    2015-05-01

    Fusion of broadband panchromatic data with narrow-band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of four commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (qualitative measures) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
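    A minimal sketch of a tile-based high-frequency content measure in the spirit of the spatial metric described above. The tile size, the normalized frequency cutoff, and the simple averaging are assumptions for illustration; the record does not give the exact formulation.

      import numpy as np

      def high_freq_energy(tile, cutoff=0.25):
          """Fraction of spectral energy above a normalized radial frequency cutoff."""
          spec = np.abs(np.fft.fftshift(np.fft.fft2(tile))) ** 2
          h, w = tile.shape
          fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                               np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
          radius = np.sqrt(fx ** 2 + fy ** 2)
          return spec[radius > cutoff].sum() / spec.sum()

      def spatial_improvement(fused, reference, tile=32):
          """Average per-tile increase in high-frequency content of the fused vs. reference image."""
          gains = []
          for i in range(0, fused.shape[0] - tile + 1, tile):
              for j in range(0, fused.shape[1] - tile + 1, tile):
                  gains.append(high_freq_energy(fused[i:i + tile, j:j + tile]) -
                               high_freq_energy(reference[i:i + tile, j:j + tile]))
          return float(np.mean(gains))

      ref = np.random.rand(128, 128)
      fused = ref + 0.3 * np.random.rand(128, 128)      # toy "sharpened" image with extra detail
      print(round(spatial_improvement(fused, ref), 4))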

  4. Revisiting measurement invariance in intelligence testing in aging research: Evidence for almost complete metric invariance across age groups.

    PubMed

    Sprague, Briana N; Hyun, Jinshil; Molenaar, Peter C M

    2017-01-01

    Invariance of intelligence across age is often assumed but infrequently explicitly tested. Horn and McArdle (1992) tested measurement invariance of intelligence and reported adequate model fit, but their approach might not consider all relevant aspects, such as subtest differences. The goal of the current paper is to explore age-related invariance of the WAIS-R using an alternative model that allows direct tests of age on WAIS-R subtests. Cross-sectional data on 940 participants aged 16-75 from the WAIS-R normative values were used. Subtests examined were information, comprehension, similarities, vocabulary, picture completion, block design, picture arrangement, and object assembly. The two intelligence factors considered were fluid and crystallized intelligence. Self-reported ages were divided into young (16-22, n = 300), adult (29-39, n = 275), middle (40-60, n = 205), and older (61-75, n = 160) adult groups. Results suggested that partial metric invariance holds. Although most of the subtests reflected fluid and crystallized intelligence similarly across different ages, invariance did not hold for block design on fluid intelligence and picture arrangement on crystallized intelligence for older adults. Additionally, there was evidence of a correlated residual between information and vocabulary for the young adults only. This partial metric invariance model yielded acceptable model fit compared to previously proposed invariance models of Horn and McArdle (1992). Almost complete metric invariance holds for a two-factor model of intelligence. Most of the subtests were invariant across age groups, suggesting little evidence for age-related bias in the WAIS-R. However, we did find unique relationships between two subtests and intelligence. Future studies should examine age-related differences in subtests when testing measurement invariance in intelligence.

  5. Addressable-Matrix Integrated-Circuit Test Structure

    NASA Technical Reports Server (NTRS)

    Sayah, Hoshyar R.; Buehler, Martin G.

    1991-01-01

    Method of quality control based on use of row- and column-addressable test structure speeds collection of data on widths of resistor lines and coverage of steps in integrated circuits. By use of straightforward mathematical model, line widths and step coverages deduced from measurements of electrical resistances in each of various combinations of lines, steps, and bridges addressable in test structure. Intended for use in evaluating processes and equipment used in manufacture of application-specific integrated circuits.

  6. Longitudinal analysis of change in individual-level needle and syringe coverage amongst a cohort of people who inject drugs in Melbourne, Australia.

    PubMed

    O'Keefe, Daniel; Scott, Nick; Aitken, Campbell; Dietze, Paul

    2017-07-01

    Needle and syringe program (NSP) coverage is often calculated at the individual level. This method relates sterile needle and syringe acquisition to injecting frequency, resulting in a percentage of injecting episodes that utilise a sterile syringe. Most previous research using this method was restricted by their cross-sectional design, calling for longitudinal exploration of coverage. We used the data of 518 participants from an ongoing cohort of people who inject drugs in Melbourne, Australia. We calculated individual-level syringe coverage for the two weeks prior to each interview, then dichotomised the outcome as either "sufficient" (≥100% of injecting episodes covered by at least one reported sterile syringe) or "insufficient" (<100%). Time-variant predictors of change in recent coverage (from sufficient to insufficient coverage) were estimated longitudinally using logistic regression with fixed effects for each participant. Transitioning to methamphetamine injection (AOR:2.16, p=0.004) and a newly positive HCV RNA test result (AOR:4.93, p=0.001) were both associated with increased odds of change to insufficient coverage, whilst change to utilising NSPs as the primary source of syringe acquisition (AOR: 0.41, p=0.003) and opioid substitution therapy (OST) enrolment (AOR:0.51, p=0.013) were protective against a change to insufficient coverage. We statistically tested the transitions between time-variant exposure sub-groups and transitions in individual-level syringe coverage. Our results give important insights into means of improving coverage at the individual level, suggesting that methamphetamine injectors should be targeted, whilst both OST prescription and NSP should be expanded. Copyright © 2017 Elsevier B.V. All rights reserved.
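    A minimal sketch of the individual-level coverage calculation described above, assuming each participant reports the number of sterile syringes acquired and the number of injecting episodes in the two-week recall window; the variable names and the handling of zero-episode reports are illustrative assumptions.

      def syringe_coverage(sterile_syringes_acquired, injecting_episodes):
          """Percent of injecting episodes that could be covered by a sterile syringe."""
          if injecting_episodes == 0:
              return float("inf")          # no episodes to cover; treated as sufficient here
          return 100.0 * sterile_syringes_acquired / injecting_episodes

      def coverage_category(coverage_percent):
          """Dichotomize as in the study: >=100% is 'sufficient', <100% is 'insufficient'."""
          return "sufficient" if coverage_percent >= 100.0 else "insufficient"

      print(coverage_category(syringe_coverage(20, 28)))   # -> insufficient (~71% coverage)
      print(coverage_category(syringe_coverage(30, 28)))   # -> sufficient (~107% coverage)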

  7. Test-Retest Reliability of Graph Metrics in Functional Brain Networks: A Resting-State fNIRS Study

    PubMed Central

    Niu, Haijing; Li, Zhen; Liao, Xuhong; Wang, Jinhui; Zhao, Tengda; Shu, Ni; Zhao, Xiaohu; He, Yong

    2013-01-01

    Recent research has demonstrated the feasibility of combining functional near-infrared spectroscopy (fNIRS) and graph theory approaches to explore the topological attributes of human brain networks. However, the test-retest (TRT) reliability of the application of graph metrics to these networks remains to be elucidated. Here, we used resting-state fNIRS and a graph-theoretical approach to systematically address TRT reliability as it applies to various features of human brain networks, including functional connectivity, global network metrics and regional nodal centrality metrics. Eighteen subjects participated in two resting-state fNIRS scan sessions held ∼20 min apart. Functional brain networks were constructed for each subject by computing temporal correlations on three types of hemoglobin concentration information (HbO, HbR, and HbT). This was followed by a graph-theoretical analysis, and then an intraclass correlation coefficient (ICC) was further applied to quantify the TRT reliability of each network metric. We observed that a large proportion of resting-state functional connections (∼90%) exhibited good reliability (0.6< ICC <0.74). For global and nodal measures, reliability was generally threshold-sensitive and varied among both network metrics and hemoglobin concentration signals. Specifically, the majority of global metrics exhibited fair to excellent reliability, with notably higher ICC values for the clustering coefficient (HbO: 0.76; HbR: 0.78; HbT: 0.53) and global efficiency (HbO: 0.76; HbR: 0.70; HbT: 0.78). Similarly, both nodal degree and efficiency measures also showed fair to excellent reliability across nodes (degree: 0.52∼0.84; efficiency: 0.50∼0.84); reliability was concordant across HbO, HbR and HbT and was significantly higher than that of nodal betweenness (0.28∼0.68). Together, our results suggest that most graph-theoretical network metrics derived from fNIRS are TRT reliable and can be used effectively for brain network research. This study also provides important guidance on the choice of network metrics of interest for future applied research in developmental and clinical neuroscience. PMID:24039763
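    A minimal sketch of the intraclass correlation coefficient used above to quantify test-retest reliability, written here as the two-way random, single-measure form ICC(2,1); the record does not state which ICC variant was used, so this particular form and the toy data are assumptions.

      import numpy as np

      def icc_2_1(data):
          """ICC(2,1) for a subjects x sessions matrix of a network metric."""
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
          ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between sessions
          resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
          ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

      # Toy example: a clustering-coefficient-like metric for 18 subjects over 2 sessions.
      rng = np.random.default_rng(0)
      subject_effect = rng.normal(0.5, 0.05, size=(18, 1))
      sessions = subject_effect + rng.normal(0, 0.02, size=(18, 2))
      print(round(icc_2_1(sessions), 2))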

  8. Updating stand-level forest inventories using airborne laser scanning and Landsat time series data

    NASA Astrophysics Data System (ADS)

    Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping

    2018-04-01

    Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas containing spatially-limited ALS data acquisitions, imputation models, and Landsat time series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
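    A minimal sketch of per-pixel time-series metrics of the kind described above (long-term mean, variance, and linear slope of an annual vegetation index), computed from a years x rows x cols stack; the stack contents and names are illustrative assumptions, not the study's actual predictor set.

      import numpy as np

      def time_series_metrics(index_stack, years):
          """Per-pixel mean, variance, and linear slope of an annual spectral index stack."""
          t = np.asarray(years, dtype=float)
          mean = index_stack.mean(axis=0)
          var = index_stack.var(axis=0)
          t_c = t - t.mean()
          # Least-squares slope per pixel: sum((t - t_mean) * y) / sum((t - t_mean)^2)
          slope = np.tensordot(t_c, index_stack, axes=(0, 0)) / (t_c ** 2).sum()
          return mean, var, slope

      years = np.arange(1984, 2015)
      stack = np.random.rand(years.size, 100, 100)        # toy annual index composites (31 x 100 x 100)
      mean, var, slope = time_series_metrics(stack, years)
      print(mean.shape, var.shape, slope.shape)           # (100, 100) each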

  9. Measures for Electronic Resources (E-Metrics). Complete Set.

    ERIC Educational Resources Information Center

    Association of Research Libraries, Washington, DC.

    The Association of Research Libraries (ARL) E-Metrics study was designed as an 18-month project in three phases: an inventory of what libraries were already doing about data collection for electronic resources and an identification of any libraries that could provide best practice; identifying and testing data elements that could be collected and…

  10. Automated Essay Scoring versus Human Scoring: A Comparative Study

    ERIC Educational Resources Information Center

    Wang, Jinhao; Brown, Michelle Stallone

    2007-01-01

    The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM] and human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…

  11. On the Use of Software Metrics as a Predictor of Software Security Problems

    DTIC Science & Technology

    2013-01-01

    models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the...vulnerabilities reported by testing and those found in the field. Summary of Most Important Results We evaluated our model on three commercial telecommunications

  12. DEVELOPMENT OF A BIRD INTEGRITY INDEX: MEASURING AVIAN RESPONSE TO DISTURBANCE IN THE BLUE MOUNTAINS OF OREGON, USA

    EPA Science Inventory

    The Bird Integrity Index (BII) presented here uses bird assemblage information to assess human impacts to 28 stream reaches in the Blue Mountains of eastern Oregon. Eighty-one candidate metrics were extracted from bird survey data for testing. The metrics represented aspects of ...

  13. Metrics, The Measure of Your Future: Materials Evaluation Forms.

    ERIC Educational Resources Information Center

    Troy, Joan B.

    Three evaluation forms are contained in this publication by the Winston-Salem/Forsyth Metric Education Project to be used in conjunction with their materials. They are: (1) Field-Test Materials Evaluation Form; (2) Student Materials Evaluation Form; and (3) Composite Materials Evaluation Form. The questions in these forms are phrased so they can…

  14. New Objective Refraction Metric Based on Sphere Fitting to the Wavefront

    PubMed Central

    Martínez-Finkelshtein, Andreí

    2017-01-01

    Purpose: To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. Methods: A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, topt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. Results: For pupil radii r0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D), however, for r0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, and the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size. PMID:29104804

  15. New Objective Refraction Metric Based on Sphere Fitting to the Wavefront.

    PubMed

    Jaskulski, Mateusz; Martínez-Finkelshtein, Andreí; López-Gil, Norberto

    2017-01-01

    To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. A sphere is fitted to the ocular wavefront at the center and at a variable distance, t . The optimal fitting distance, t opt , is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r 0 , and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. For pupil radii r 0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D), however, for r 0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, and the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size.
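    For context, a minimal sketch of the traditional Zernike-based spherical equivalent that metrics like the MTR are compared against: the minimum-RMS formula that uses only the defocus coefficient. This is the conventional baseline, not the MTR proposed in the paper, whose sphere-fitting formula is not given in this record.

      import math

      def se_from_defocus(c20_microns, pupil_radius_mm):
          """Traditional spherical equivalent (diopters) from the Zernike defocus coefficient.

          Minimum-RMS formula: M = -4 * sqrt(3) * c_2^0 / r^2, with c_2^0 in microns and r in mm.
          """
          return -4.0 * math.sqrt(3) * c20_microns / pupil_radius_mm ** 2

      print(round(se_from_defocus(c20_microns=1.0, pupil_radius_mm=2.0), 2))   # -> -1.73 D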

  16. Estimating regional wheat yield from the shape of decreasing curves of green area index temporal profiles retrieved from MODIS data

    NASA Astrophysics Data System (ADS)

    Kouadio, Louis; Duveiller, Grégory; Djaby, Bakary; El Jarroudi, Moussa; Defourny, Pierre; Tychon, Bernard

    2012-08-01

    Earth observation data, owing to their synoptic, timely and repetitive coverage, have been recognized as a valuable tool for crop monitoring at different levels. At the field level, the close correlation between green leaf area (GLA) during maturation and grain yield in wheat revealed that the onset and rate of senescence appeared to be important factors for determining wheat grain yield. Our study sought to explore a simple approach for wheat yield forecasting at the regional level, based on metrics derived from the senescence phase of the green area index (GAI) retrieved from remote sensing data. This study took advantage of recent methodological improvements in which imagery with high revisit frequency but coarse spatial resolution can be exploited to derive crop-specific GAI time series by selecting pixels whose ground-projected instantaneous field of view is dominated by the target crop: winter wheat. A logistic function was used to characterize the GAI senescence phase and derive the metrics of this phase. Four regression-based models involving these metrics (i.e., the maximum GAI value, the senescence rate and the thermal time taken to reach 50% of the green surface in the senescent phase) were related to official wheat yield data. The performances of such models at this regional scale showed that final yield could be estimated with an RMSE of 0.57 ton ha-1, corresponding to a relative RMSE of about 7%. Such an approach may be considered a first yield estimate that could be performed in order to provide better integrated yield assessments in operational systems.
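
    As a rough illustration of the senescence-metric idea described above, the sketch below fits a descending logistic curve to a synthetic GAI-versus-thermal-time series and reads off the three metrics named in the abstract (maximum GAI, senescence rate, thermal time to 50% green surface). The parameterization and the synthetic data are assumptions for illustration, not the authors' exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gai_senescence(tt, gai_max, rate, tt50):
        """Descending logistic: green area index as a function of thermal time (tt).
        gai_max - GAI at the onset of senescence
        rate    - senescence rate (steepness of the decline)
        tt50    - thermal time at which 50% of the green surface remains
        """
        return gai_max / (1.0 + np.exp(rate * (tt - tt50)))

    # Synthetic GAI observations over the senescence phase (illustrative only)
    tt_obs = np.linspace(0, 600, 15)   # thermal time, degree-days
    gai_obs = gai_senescence(tt_obs, 4.5, 0.02, 300) \
              + np.random.default_rng(0).normal(0, 0.1, tt_obs.size)

    params, _ = curve_fit(gai_senescence, tt_obs, gai_obs, p0=[4.0, 0.01, 250])
    gai_max, rate, tt50 = params
    print(f"GAI_max={gai_max:.2f}, senescence rate={rate:.3f}, TT50={tt50:.0f} dd")
    ```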

  17. Dose-shaping using targeted sparse optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayre, George A.; Ruan, Dan

    2013-07-15

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^sparse improves the tradeoff between planning goals by 'sacrificing' voxels that have already been violated to improve PTV coverage, PTV homogeneity, and/or OAR-sparing. In doing so, overall plan quality is increased since these large violations only arise if a net reduction in E_tot^sparse occurs as a result. For example, large violations to dose prescription in the PTV in E_tot^sparse-optimized plans will naturally localize to voxels in and around PTV-OAR overlaps where OAR-sparing may be increased without compromising target coverage. The authors compared the results of our method and the corresponding clinical plans using analyses of DVH plots, dose maps, and two quantitative metrics that quantify PTV homogeneity and overdose. These metrics do not penalize underdose since E_tot^sparse-optimized plans were planned such that their target coverage was similar to or better than that of the clinical plans. Finally, plan deliverability was assessed with the 2D modulation index. Results: The proposed method was implemented using IBM's CPLEX optimization package (ILOG CPLEX, Sunnyvale, CA) and required 1-4 min to solve with a 12-core Intel i7 processor. In the testing procedure, the authors optimized for several points on the Pareto surface of four 7-field 6 MV prostate cases that were optimized for different levels of PTV homogeneity and OAR-sparing. The generated results were compared against each other and the clinical plan by analyzing their DVH plots and dose maps. 
After developing intuition by planning the four prostate cases, which had relatively few tradeoffs, the authors applied our method to a 7-field 6 MV pancreas case and a 9-field 6 MV head-and-neck case to test the potential impact of our method on more challenging cases. The authors found that our formulation: (1) provided excellent flexibility for balancing OAR-sparing with PTV homogeneity; and (2) permitted the dose planner more control over the evolution of the PTV's spatial dose distribution than conventional objective functions. In particular, E_tot^sparse-optimized plans for the pancreas case and head-and-neck case exhibited substantially improved sparing of the spinal cord and parotid glands, respectively, while maintaining or improving sparing for other OARs and markedly improving PTV homogeneity. Plan deliverability for E_tot^sparse-optimized plans was shown to be better than their associated clinical plans, according to the two-dimensional modulation index. Conclusions: These results suggest that our formulation may be used to improve dose-shaping and OAR-sparing for complicated disease sites, such as the pancreas or head and neck. Furthermore, our objective function and constraints are linear and constitute a linear program, which converges to the global minimum quickly, and can be easily implemented in treatment planning software. Thus, the authors expect fast translation of our method to the clinic where it may have a positive impact on plan quality for challenging disease sites.
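
    The abstract names three robust cost ingredients (an asymmetric linear well for the PTV, a two-piece linear OAR penalty, and an L1 total-variation term on the dose gradient). The sketch below shows one plausible way such penalties could be written for a toy 1D dose profile; the weights, thresholds, and dose values are illustrative assumptions, not the authors' clinical settings or their CPLEX formulation.

    ```python
    import numpy as np

    def ptv_linear_well(dose, d_lo, d_hi, w_under=10.0, w_over=1.0):
        """Asymmetric linear well: heavy penalty for underdose below d_lo,
        milder penalty for overdose above d_hi, zero inside the well."""
        return w_under * np.maximum(d_lo - dose, 0) + w_over * np.maximum(dose - d_hi, 0)

    def oar_two_piece(dose, d_knee, w_low=0.1, w_high=5.0):
        """Two-piece linear OAR cost: mild slope below the knee dose, steep above it."""
        return w_low * np.minimum(dose, d_knee) + w_high * np.maximum(dose - d_knee, 0)

    def total_variation_1d(dose):
        """L1 norm of the first-order dose differences (a 1D stand-in for the
        dose-gradient total-variation term)."""
        return np.sum(np.abs(np.diff(dose)))

    # Illustrative 1D dose profile crossing a PTV prescribed to 50 Gy
    dose = np.array([18.0, 30.0, 49.5, 50.5, 52.0, 49.0, 28.0, 15.0])
    cost = (ptv_linear_well(dose[2:6], 49.0, 51.0).sum()
            + oar_two_piece(np.r_[dose[:2], dose[6:]], 20.0).sum()
            + 0.5 * total_variation_1d(dose))
    print(round(cost, 2))
    ```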

  18. Adapted diffusion processes for effective forging dies

    NASA Astrophysics Data System (ADS)

    Paschke, H.; Nienhaus, A.; Brunotte, K.; Petersen, T.; Siegmund, M.; Lippold, L.; Weber, M.; Mejauschek, M.; Landgraf, P.; Braeuer, G.; Behrens, B.-A.; Lampke, T.

    2018-05-01

    Hot forging is an effective production method for producing safety-relevant parts with excellent mechanical properties. Its economic efficiency directly depends on the wear of the tools, which limits service lifetime. Several approaches by the present research group aim at minimizing the wear caused by interacting mechanical and thermal loads by using enhanced nitriding technology. By modifying the surface zone layer, it is possible to create resistance against the thermal softening that provokes plastic deformation and pronounced abrasive wear. As a disadvantage, intensely nitrided surfaces may carry an increased risk of crack sensitivity and therefore of chipping of material at the treated surface. Recent projects (evaluated in several industrial applications) show the high technological potential of adapted treatments: a first approach evaluated localized treatments in which areas were shielded from nitrogen diffusion with applied pastes or other coverings. A further idea is to use this principle to structure the surface with differently designed patterns, generating smaller ductile zones beneath nitrided ones. The selection of suitable designs is, however, subject to certain geometrical requirements. The intention of this approach is to prevent the formation and propagation of cracks under thermal shock conditions. Analytical characterization methods for the crack sensitivity of surface zone layers and an accurate system of testing rigs for thermal shock conditions verified the treatment concepts. Additionally, serial forging tests using adapted testing geometries and, finally, tests in the industrial production field were performed. Besides stabilizing service lifetime and reducing specific thermally induced wear mechanisms, the treatments also had a positive influence on crack behavior. This leads to higher efficiency of the industrial production process and enables higher output in the forging campaigns of industrial partners.

  19. Use of performance metrics for the measurement of universal coverage for maternal care in Mexico.

    PubMed

    Serván-Mori, Edson; Contreras-Loya, David; Gomez-Dantés, Octavio; Nigenda, Gustavo; Sosa-Rubí, Sandra G; Lozano, Rafael

    2017-06-01

    This study provides evidence for those working in the maternal health metrics and health system performance fields, as well as those interested in achieving universal and effective health care coverage. Based on the perspective of continuity of health care and applying quasi-experimental methods to analyse the cross-sectional 2009 National Demographic Dynamics Survey (n = 14 414 women), we estimated the middle-term effects of Mexico's new public health insurance scheme, Seguro Popular de Salud (SPS) (vs women without health insurance), on seven indicators related to maternal health care (according to official guidelines): (a) access to skilled antenatal care (ANC); (b) timely ANC; (c) frequent ANC; (d) adequate content of ANC; (e) institutional delivery; (f) postnatal consultation and (g) access to standardized comprehensive antenatal and postnatal care (or the intersection of the seven process indicators). Our results show that 94% of all pregnancies were attended by trained health personnel. However, comprehensive access to ANC declines steeply in both groups as we move along the maternal healthcare continuum. The percentage of institutional deliveries providing timely, frequent and adequate content of ANC reached 70% among SPS women (vs 64.7% in the uninsured), and only 57.4% of SPS-affiliated women received standardized comprehensive care (vs 53.7% in the uninsured group). In Mexico, access to comprehensive antenatal and postnatal care as defined by Mexican guidelines (in accordance with WHO recommendations) is far from optimal. Even though a positive influence of SPS on maternal care was documented, important challenges still remain. Our results identified key bottlenecks of the maternal healthcare continuum that should be addressed by policy makers through a combination of supply-side interventions and interventions directed to social determinants of access to health care.

  20. New two-metric theory of gravity with prior geometry

    NASA Technical Reports Server (NTRS)

    Lightman, A. P.; Lee, D. L.

    1973-01-01

    A Lagrangian-based metric theory of gravity is developed with three adjustable constants and two tensor fields, one of which is a nondynamic 'flat space metric' eta. With a suitable cosmological model and a particular choice of the constants, the 'post-Newtonian limit' of the theory agrees, in the current epoch, with that of general relativity theory (GRT); consequently, the theory is consistent with current gravitation experiments. Because of the role of eta, the gravitational 'constant' G is time-dependent, and gravitational waves travel along null geodesics of eta rather than of the physical metric g. Gravitational waves possess six degrees of freedom. The general exact static spherically-symmetric solution is a four-parameter family. Future experimental tests of the theory are discussed.

  1. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.

  2. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    PubMed

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

    Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

  3. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    PubMed Central

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  4. Hazardous to Your Health: Magazine Coverage of the Saccharin Debate.

    ERIC Educational Resources Information Center

    Haugh, Rita E.

    After the Food and Drug Administration announced the results of testing of saccharin as a possible carcinogen and ruled that it should be banned, a public outcry brought about a delay in the ban. A study of magazine coverage of the reasons for the ban and information about the testing showed that in eleven mass circulation magazines, the reporting…

  5. A Randomized Comparative Study of Two Techniques to Optimize the Root Coverage Using a Porcine Collagen Matrix.

    PubMed

    Reino, Danilo Maeda; Maia, Luciana Prado; Fernandes, Patrícia Garani; Souza, Sergio Luis Scombatti de; Taba Junior, Mario; Palioto, Daniela Bazan; Grisi, Marcio Fermandes de Moraes; Novaes, Arthur Belém

    2015-10-01

    The aim of this randomized controlled clinical study was to compare the extended flap technique (EFT) with the coronally advanced flap technique (CAF) using a porcine collagen matrix (PCM) for root coverage. Twenty patients with two bilateral gingival recessions, Miller class I or II, on non-molar teeth were treated with CAF+PCM (control group) or EFT+PCM (test group). Clinical measurements of probing pocket depth (PPD), clinical attachment level (CAL), recession height (RH), keratinized tissue height (KTH), and keratinized mucosa thickness (KMT) were determined at baseline and at 3 and 6 months post-surgery. At 6 months, the mean root coverage for the test group was 81.89%, and for the control group it was 62.80% (p<0.01). The change in recession depth from baseline was statistically significant between test and control groups, with a mean of 2.21 mm gained at the control sites and 2.84 mm gained at the test sites (p=0.02). There were no statistically significant differences for KTH, PPD or CAL comparing the two therapies. The extended flap technique presented better root coverage than the coronally advanced flap technique when PCM was used.

  6. f(T) gravity and energy distribution in Landau-Lifshitz prescription

    NASA Astrophysics Data System (ADS)

    Ganiou, M. G.; Houndjo, M. J. S.; Tossa, J.

    We investigate in this paper the Landau-Lifshitz energy distribution in the framework of f(T) theory, viewed as a modified version of Teleparallel theory. Starting from some important Teleparallel results on the localization of energy, our investigation generalizes the Landau-Lifshitz prescription for the computation of the energy-momentum complex to the framework of f(T) gravity, as has been done in other modified versions of General Relativity. In a first step, we compute the energy density for three plane-symmetric metrics in vacuum. We find for the second metric that the energy density vanishes independently of the f(T) model. We also find that the Teleparallel Landau-Lifshitz energy-momentum complex formulations for these metrics differ from those obtained in General Relativity for the same metrics. In a second step, the calculations are performed for the cosmic string spacetime metric. It results that the energy distribution depends on the mass M and the radius r of the cosmic string and is strongly affected by the parameters of the quadratic and cubic f(T) models considered. Our investigation of this metric yields interesting results that could be tested against astrophysical hypotheses.

  7. Volumetric-modulated arc therapy for the treatment of a large planning target volume in thoracic esophageal cancer.

    PubMed

    Abbas, Ahmar S; Moseley, Douglas; Kassam, Zahra; Kim, Sun Mo; Cho, Charles

    2013-05-06

    Recently, volumetric-modulated arc therapy (VMAT) has demonstrated the ability to deliver radiation dose precisely and accurately with a shorter delivery time compared to conventional intensity-modulated fixed-field treatment (IMRT). We tested the hypothesis that the VMAT technique, applied to the treatment of thoracic esophageal carcinoma, could provide superior or equivalent conformal dose coverage for a large thoracic esophageal planning target volume (PTV), superior or equivalent sparing of organs-at-risk (OARs), and reduced delivery time and monitor units (MUs), in comparison with conventional fixed-field IMRT plans. We also analyzed and compared some other important metrics of treatment planning and treatment delivery for both IMRT and VMAT techniques. These metrics include: 1) the integral dose and the volume receiving intermediate dose levels between IMRT and VMATI plans; 2) the use of 4D CT to determine the internal motion margin; and 3) evaluating the dosimetry of every plan through patient-specific QA. These factors may impact the overall treatment plan quality and outcomes from the individual planning technique used. In this study, we also examined the significance of using two arcs vs. a single-arc VMAT technique for PTV coverage, OAR doses, monitor units and delivery time. Thirteen patients, stage T2-T3 N0-N1 (TNM AJCC 7th edn.), PTV volume median 395 cc (range 281-601 cc), median age 69 years (range 53 to 85), were treated from July 2010 to June 2011 with a four-field (n = 4) or five-field (n = 9) step-and-shoot IMRT technique using a 6 MV beam to a prescribed dose of 50 Gy in 20 to 25 fractions. These patients were retrospectively replanned using a single arc (VMATI, 91 control points) and two arcs (VMATII, 182 control points). All treatment plans of the 13 study cases were evaluated using various dose-volume metrics. These included PTV D99, PTV D95, PTV V95% (47.5 Gy), PTV mean dose, Dmax, PTV dose conformity (Van't Riet conformation number (CN)), mean lung dose, lung V20 and V5, liver V30, and Dmax to the spinal canal PRV (3 mm). Also examined were the total plan monitor units (MUs) and the beam delivery time. Equivalent target coverage was observed with both VMAT single- and two-arc plans. The comparison of VMATI with fixed-field IMRT demonstrated equivalent target coverage; no statistically significant differences were found in PTV D99 (p = 0.47), PTV mean (p = 0.12), PTV D95, and PTV V95% (47.5 Gy) (p = 0.38). However, Dmax in VMATI plans was significantly lower compared to IMRT (p = 0.02). The Van't Riet dose conformation number (CN) was also statistically in favor of VMATI plans (p = 0.04). VMATI achieved lower lung V20 (p = 0.05), whereas lung V5 (p = 0.35) and mean lung dose (p = 0.62) were not significantly different. The other OARs, including spinal canal, liver, heart, and kidneys, showed no statistically significant differences between the two techniques. Treatment delivery time for VMATI plans was reduced by up to 55% (p = 5.8E-10) and MUs reduced by up to 16% (p = 0.001). Integral dose was not statistically different between the two planning techniques (p = 0.99). There were no statistically significant differences found in the dose distribution of the two VMAT techniques (VMATI vs. VMATII). Dose statistics for both VMAT techniques were: PTV D99 (p = 0.76), PTV D95 (p = 0.95), mean PTV dose (p = 0.78), conformation number (CN) (p = 0.26), and MUs (p = 0.1). However, the treatment delivery time for VMATII increased significantly by two-fold (p = 3.0E-11) compared to VMATI. 
VMAT-based treatment planning is safe and deliverable for patients with thoracic esophageal cancer, achieving similar planning goals when compared to standard IMRT. The key benefit of VMATI was the reduction in treatment delivery time and MUs, and the improvement in dose conformality. In our study, we found no significant advantage of VMATII over single-arc VMATI for PTV coverage or OAR doses. However, we observed a significant increase in delivery time for VMATII compared to VMATI.
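
    For readers unfamiliar with the dose-volume notation used above, the sketch below shows how metrics such as D99, D95, and V20 can be computed from an array of voxel doses; the toy dose distributions are assumptions for illustration only and have no connection to the study's plans.

    ```python
    import numpy as np

    def dose_at_volume(dose_voxels, volume_pct):
        """D_x: the minimum dose received by the hottest x% of the structure's voxels."""
        return float(np.percentile(dose_voxels, 100.0 - volume_pct))

    def volume_at_dose(dose_voxels, dose_gy):
        """V_xGy: the percentage of the structure's volume receiving at least x Gy."""
        return 100.0 * float(np.mean(dose_voxels >= dose_gy))

    rng = np.random.default_rng(1)
    ptv_dose = rng.normal(50.0, 1.0, 5000)    # toy PTV voxel doses (Gy)
    lung_dose = rng.gamma(2.0, 4.0, 20000)    # toy lung voxel doses (Gy)

    print(f"PTV D99 = {dose_at_volume(ptv_dose, 99):.1f} Gy")
    print(f"PTV D95 = {dose_at_volume(ptv_dose, 95):.1f} Gy")
    print(f"Lung V20 = {volume_at_dose(lung_dose, 20.0):.1f} %")
    ```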

  8. Trends of improved water and sanitation coverage around the globe between 1990 and 2010: inequality among countries and performance of official development assistance

    PubMed Central

    Cha, Seungman; Mankadi, Paul Mansiangi; Elhag, Mousab Siddig; Lee, Yongjoo; Jin, Yan

    2017-01-01

    Background: As the Millennium Development Goals ended, and were replaced by the Sustainable Development Goals, efforts have been made to evaluate the achievements and performance of official development assistance (ODA) in the health sector. In this study, we explore trends in the expansion of water and sanitation coverage in developing countries and the performance of ODA. Design: We explored inequality across developing countries by income level, and investigated how ODA for water and sanitation was committed by country, region, and income level. Changes in inequality were tested via slope changes by investigating the interaction of year and income level with a likelihood ratio test. A random effects model was applied according to the results of the Hausman test. Results: The slope of the linear trend between economic level and sanitation coverage has declined over time. However, a random effects model suggested that the change in slope across years was not significant (e.g. for the slope change between 2000 and 2010: likelihood ratio χ2 = 2.49, probability > χ2 = 0.1146). A similar pro-rich pattern across developing countries and a non-significant change in the slope associated with different economic levels were demonstrated for water coverage. Our analysis shows that the inequality of water and sanitation coverage among countries across the world has not been addressed effectively during the past decade. Our findings demonstrate that the countries with the least coverage persistently received far less ODA per capita than did countries with much more extensive water and sanitation coverage, suggesting that ODA for water and sanitation is poorly targeted. Conclusion: The most deprived countries should receive more attention for water and sanitation improvements from the world health community. A strong political commitment to ODA targeting the countries with the least coverage is needed at the global level. PMID:28604256

  9. Estimation of the cost-effectiveness of HIV prevention portfolios for people who inject drugs in the United States: A model-based analysis

    PubMed Central

    Bernard, Cora L.; Owens, Douglas K.; Goldhaber-Fiebert, Jeremy D.; Brandeau, Margaret L.

    2017-01-01

    Background The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV—a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). Methods and findings We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US$7,000, US$76,000) per QALY gained, 50% Test & Treat coverage could avert up to 6,700 (95% CI: 1,200, 16,000) infections and cost US$27,000 (95% CI: US$15,000, US$48,000) per QALY gained, and 50% PrEP coverage could avert up to 37,000 (22,000, 58,000) infections and cost US$300,000 (95% CI: US$162,000, US$667,000) per QALY gained. When coverage expansions are allowed to include combined investment with other programs and are compared to the next best intervention, the model projects that scaling OAT coverage up to 50%, then scaling NSP coverage to 50%, then scaling Test & Treat coverage to 50% can be cost-effective, with each coverage expansion having the potential to cost less than US$50,000 per QALY gained relative to the next best portfolio. In probabilistic sensitivity analyses, 59% of portfolios prioritized the addition of OAT and 41% prioritized the addition of NSPs, while PrEP was not likely to be a priority nor a cost-effective addition. Our findings are intended to be illustrative, as data on achievable coverage are limited and, in practice, the expansion scenarios considered may exceed feasible levels. We assumed independence of interventions and constant returns to scale. Extensive sensitivity analyses allowed us to assess parameter sensitivity, but the use of a dynamic compartmental model limited the exploration of structural sensitivities. 
Conclusions We estimate that OAT, NSPs, and Test & Treat, implemented singly or in combination, have the potential to effectively and cost-effectively prevent HIV in US PWID. PrEP is not likely to be cost-effective in this population, based on the scenarios we evaluated. While local budgets or policy may constrain feasible coverage levels for the various interventions, our findings suggest that investments in combined prevention programs can substantially reduce HIV transmission and improve health outcomes among PWID. PMID:28542184

  10. Estimation of the cost-effectiveness of HIV prevention portfolios for people who inject drugs in the United States: A model-based analysis.

    PubMed

    Bernard, Cora L; Owens, Douglas K; Goldhaber-Fiebert, Jeremy D; Brandeau, Margaret L

    2017-05-01

    The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV, a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US$7,000, US$76,000) per QALY gained, 50% Test & Treat coverage could avert up to 6,700 (95% CI: 1,200, 16,000) infections and cost US$27,000 (95% CI: US$15,000, US$48,000) per QALY gained, and 50% PrEP coverage could avert up to 37,000 (22,000, 58,000) infections and cost US$300,000 (95% CI: US$162,000, US$667,000) per QALY gained. When coverage expansions are allowed to include combined investment with other programs and are compared to the next best intervention, the model projects that scaling OAT coverage up to 50%, then scaling NSP coverage to 50%, then scaling Test & Treat coverage to 50% can be cost-effective, with each coverage expansion having the potential to cost less than US$50,000 per QALY gained relative to the next best portfolio. In probabilistic sensitivity analyses, 59% of portfolios prioritized the addition of OAT and 41% prioritized the addition of NSPs, while PrEP was not likely to be a priority nor a cost-effective addition. Our findings are intended to be illustrative, as data on achievable coverage are limited and, in practice, the expansion scenarios considered may exceed feasible levels. We assumed independence of interventions and constant returns to scale. Extensive sensitivity analyses allowed us to assess parameter sensitivity, but the use of a dynamic compartmental model limited the exploration of structural sensitivities. We estimate that OAT, NSPs, and Test & Treat, implemented singly or in combination, have the potential to effectively and cost-effectively prevent HIV in US PWID. 
PrEP is not likely to be cost-effective in this population, based on the scenarios we evaluated. While local budgets or policy may constrain feasible coverage levels for the various interventions, our findings suggest that investments in combined prevention programs can substantially reduce HIV transmission and improve health outcomes among PWID.
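
    The portfolio comparisons above rest on incremental cost-effectiveness ratios. The sketch below shows the basic ICER calculation against a comparator; the cost and QALY figures are made-up placeholders, not outputs of the authors' compartmental model.

    ```python
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY of a
        strategy relative to the next-best comparator."""
        delta_cost = cost_new - cost_ref
        delta_qaly = qaly_new - qaly_ref
        if delta_qaly <= 0:
            raise ValueError("Strategy is dominated or equivalent (no QALY gain).")
        return delta_cost / delta_qaly

    # Illustrative portfolios (discounted cost in US$, discounted QALYs); numbers are arbitrary.
    status_quo = (9.5e8, 118_000.0)
    oat_expansion = (1.1e9, 124_000.0)
    print(f"ICER = ${icer(oat_expansion[0], oat_expansion[1], *status_quo):,.0f} per QALY gained")
    ```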

  11. Making the Case for Objective Performance Metrics in Newborn Screening by Tandem Mass Spectrometry

    ERIC Educational Resources Information Center

    Rinaldo, Piero; Zafari, Saba; Tortorelli, Silvia; Matern, Dietrich

    2006-01-01

    The expansion of newborn screening programs to include multiplex testing by tandem mass spectrometry requires understanding and close monitoring of performance metrics. This is not done consistently because of lack of defined targets, and interlaboratory comparison is almost nonexistent. Between July 2004 and April 2006 (N = 176,185 cases), the…

  12. Formant Centralization Ratio: A Proposal for a New Acoustic Measure of Dysarthric Speech

    ERIC Educational Resources Information Center

    Sapir, Shimon; Ramig, Lorraine O.; Spielman, Jennifer L.; Fox, Cynthia

    2010-01-01

    Purpose: The vowel space area (VSA) has been used as an acoustic metric of dysarthric speech, but with varying degrees of success. In this study, the authors aimed to test an alternative metric to the VSA--the "formant centralization ratio" (FCR), which is hypothesized to more effectively differentiate dysarthric from healthy speech and register…

  13. Comparison of Collection Methods for Fecal Samples in Microbiome Studies

    PubMed Central

    Vogtmann, Emily; Chen, Jun; Amir, Amnon; Shi, Jianxin; Abnet, Christian C.; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi

    2017-01-01

    Prospective cohort studies are needed to assess the relationship between the fecal microbiome and human health and disease. To evaluate fecal collection methods, we determined technical reproducibility, stability at ambient temperature, and accuracy of 5 fecal collection methods (no additive, 95% ethanol, RNAlater Stabilization Solution, fecal occult blood test cards, and fecal immunochemical test tubes). Fifty-two healthy volunteers provided fecal samples at the Mayo Clinic in Rochester, Minnesota, in 2014. One set from each sample collection method was frozen immediately, and a second set was incubated at room temperature for 96 hours and then frozen. Intraclass correlation coefficients (ICCs) were calculated for the relative abundance of 3 phyla, 2 alpha diversity metrics, and 4 beta diversity metrics. Technical reproducibility was high, with ICCs for duplicate fecal samples between 0.64 and 1.00. Stability for most methods was generally high, although the ICCs were below 0.60 for 95% ethanol in metrics that were more sensitive to relative abundance. When compared with fecal samples that were frozen immediately, the ICCs were below 0.60 for the metrics that were sensitive to relative abundance; however, the remaining 2 alpha diversity and 3 beta diversity metrics were all relatively accurate, with ICCs above 0.60. In conclusion, all fecal sample collection methods appear relatively reproducible, stable, and accurate. Future studies could use these collection methods for microbiome analyses. PMID:27986704
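
    The reproducibility, stability, and accuracy results above are reported as intraclass correlation coefficients. The abstract does not state which ICC variant was used; the sketch below computes a common one-way random-effects ICC(1,1) for duplicate measurements, with toy numbers, purely as an illustration of the metric.

    ```python
    import numpy as np

    def icc_oneway(measurements):
        """One-way random-effects ICC(1,1) for an (n_subjects x k_replicates) array."""
        m = np.asarray(measurements, dtype=float)
        n, k = m.shape
        grand_mean = m.mean()
        subject_means = m.mean(axis=1)
        ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
        ms_within = np.sum((m - subject_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Toy example: relative abundance of one phylum in duplicate collections from 5 volunteers
    duplicates = [[0.42, 0.44], [0.30, 0.29], [0.55, 0.52], [0.61, 0.63], [0.38, 0.40]]
    print(round(icc_oneway(duplicates), 2))
    ```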

  14. Palatini formulation of f(R, T) gravity theory, and its cosmological implications

    NASA Astrophysics Data System (ADS)

    Wu, Jimin; Li, Guangjie; Harko, Tiberiu; Liang, Shi-Dong

    2018-05-01

    We consider the Palatini formulation of f(R, T) gravity theory, in which a non-minimal coupling between the Ricci scalar and the trace of the energy-momentum tensor is introduced, by considering the metric and the affine connection as independent field variables. The field equations and the equations of motion for massive test particles are derived, and we show that the independent connection can be expressed as the Levi-Civita connection of an auxiliary, energy-momentum trace dependent metric, related to the physical metric by a conformal transformation. Similar to the metric case, the field equations impose the non-conservation of the energy-momentum tensor. We obtain the explicit form of the equations of motion for massive test particles in the case of a perfect fluid, and the expression of the extra force, which is identical to the one obtained in the metric case. The thermodynamic interpretation of the theory is also briefly discussed. We investigate in detail the cosmological implications of the theory, and we obtain the generalized Friedmann equations of f(R, T) gravity in the Palatini formulation. Cosmological models with Lagrangians of the type f = R - α²/R + g(T) and f = R + α²R² + g(T) are investigated. These models lead to evolution equations whose solutions describe accelerating Universes at late times.

  15. Eyetracking Metrics in Young Onset Alzheimer’s Disease: A Window into Cognitive Visual Functions

    PubMed Central

    Pavisic, Ivanna M.; Firth, Nicholas C.; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J.; Yong, Keir X. X.; Slattery, Catherine F.; Paterson, Ross W.; Foulkes, Alexander J. M.; Macpherson, Kirsty; Carton, Amelia M.; Alexander, Daniel C.; Shawe-Taylor, John; Fox, Nick C.; Schott, Jonathan M.; Crutch, Sebastian J.; Primativo, Silvia

    2017-01-01

    Young onset Alzheimer’s disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored by looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the raw smooth pursuit eyetracking data discriminates patients from controls with approximately 95% accuracy in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials. PMID:28824534

  16. Eyetracking Metrics in Young Onset Alzheimer's Disease: A Window into Cognitive Visual Functions.

    PubMed

    Pavisic, Ivanna M; Firth, Nicholas C; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J; Yong, Keir X X; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Macpherson, Kirsty; Carton, Amelia M; Alexander, Daniel C; Shawe-Taylor, John; Fox, Nick C; Schott, Jonathan M; Crutch, Sebastian J; Primativo, Silvia

    2017-01-01

    Young onset Alzheimer's disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinico-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored by looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the raw smooth pursuit eyetracking data discriminates patients from controls with approximately 95% accuracy in cross-validation tests. Results suggest that eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials.
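
    As an illustration of cross-validated patient-versus-control classification of the kind reported above (the authors' classifier operated on raw smooth-pursuit data, which is not shown here), the sketch below cross-validates a simple logistic-regression pipeline on synthetic summary eyetracking features; the feature names, group means, and classifier choice are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Illustrative summary features (e.g., saccade latency, pursuit gain, fixation instability)
    X_controls = rng.normal([200.0, 0.95, 0.10], [15.0, 0.05, 0.02], size=(21, 3))
    X_patients = rng.normal([260.0, 0.75, 0.18], [25.0, 0.10, 0.04], size=(36, 3))
    X = np.vstack([X_controls, X_patients])
    y = np.array([0] * 21 + [1] * 36)   # 0 = control, 1 = YOAD

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```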

  17. Semantic Pattern Analysis for Verbal Fluency Based Assessment of Neurological Disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Ainsworth, Keela C; Brown, Tyler C

    In this paper, we present preliminary results of semantic pattern analysis of verbal fluency tests used for assessing cognitive psychological and neuropsychological disorders. We posit that recent advances in semantic reasoning and artificial intelligence can be combined to create a standardized computer-aided diagnosis tool to automatically evaluate and interpret verbal fluency tests. Towards that goal, we derive novel semantic similarity (phonetic, phonemic and conceptual) metrics and present the predictive capability of these metrics on a de-identified dataset of participants with and without neurological disorders.
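
    The abstract names phonetic, phonemic, and conceptual similarity metrics without giving their definitions. As a generic stand-in only, the sketch below scores consecutive fluency-test responses with a normalized Levenshtein (edit-distance) similarity; it is not the authors' metric.

    ```python
    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        """Normalized similarity in [0, 1]: 1 means identical strings."""
        if not a and not b:
            return 1.0
        return 1.0 - levenshtein(a, b) / max(len(a), len(b))

    # Consecutive responses in a letter-fluency test ("words starting with C")
    responses = ["cat", "cap", "carpet", "dog"]
    print([round(similarity(x, y), 2) for x, y in zip(responses, responses[1:])])
    # [0.67, 0.5, 0.0]
    ```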

  18. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  19. Validation of the 5th and 95th Percentile Hybrid III Anthropomorphic Test Device Finite Element Model

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Somers, J. T.; Baldwin, M. A.; Wells, J. A.; Newby, N.; Currie, N. J.

    2014-01-01

    NASA spacecraft design requirements for occupant protection are a combination of the Brinkley criteria and injury metrics extracted from anthropomorphic test devices (ATD's). For the ATD injury metrics, the requirements specify the use of the 5th percentile female Hybrid III and the 95th percentile male Hybrid III. Furthermore, each of these ATD's is required to be fitted with an articulating pelvis and a straight spine. The articulating pelvis is necessary for the ATD to fit into spacecraft seats, while the straight spine is required as injury metrics for vertical accelerations are better defined for this configuration. The requirements require that physical testing be performed with both ATD's to demonstrate compliance. Before compliance testing can be conducted, extensive modeling and simulation are required to determine appropriate test conditions, simulate conditions not feasible for testing, and assess design features to better ensure compliance testing is successful. While finite element (FE) models are currently available for many of the physical ATD's, currently there are no complete models for either the 5th percentile female or the 95th percentile male Hybrid III with a straight spine and articulating pelvis. The purpose of this work is to assess the accuracy of the existing Livermore Software Technology Corporation's FE models of the 5th and 95th percentile ATD's. To perform this assessment, a series of tests will be performed at Wright Patterson Air Force Research Lab using their horizontal impact accelerator sled test facility. The ATD's will be placed in the Orion seat with a modified-advanced-crew-escape-system (MACES) pressure suit and helmet, and driven with loadings similar to what is expected for the actual Orion vehicle during landing, launch abort, and chute deployment. Test data will be compared to analytical predictions and modelling uncertainty factors will be determined for each injury metric. Additionally, the test data will be used to further improve the FE model, particularly in the areas of the ATD neck components, harness, and suit and helmet effects.

  20. One network metric datastore to track them all: the OSG network metric service

    NASA Astrophysics Data System (ADS)

    Quick, Robert; Babik, Marian; Fajardo, Edgar M.; Gross, Kyle; Hayashi, Soichi; Krenz, Marina; Lee, Thomas; McKee, Shawn; Pipes, Christopher; Teige, Scott

    2017-10-01

    The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012, OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing, and providing network metrics to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. In September of 2015, this service was deployed into the OSG production environment. We will report on the creation, implementation, testing, and deployment of the OSG Networking Service. Starting from organizing the deployment of perfSONAR toolkits within OSG and its partners, to the challenges of orchestrating regular testing between sites, to reliably gathering the resulting network metrics and making them available for users, virtual organizations, and higher level services, all aspects of implementation will be reviewed. In particular, several higher-level services were developed to bring the OSG network service to its full potential. These include a web-based mesh configuration system, which allows central scheduling and management of all the network tests performed by the instances; a set of probes to continually gather metrics from the remote instances and publish it to different sources; a central network datastore (esmond), which provides interfaces to access the network monitoring information in close to real time and historically (up to a year) giving the state of the tests; and a perfSONAR infrastructure monitor system, ensuring the current perfSONAR instances are correctly configured and operating as intended. We will also describe the challenges we encountered in ongoing operations of the network service and how we have evolved our procedures to address those challenges. Finally we will describe our plans for future extensions and improvements to the service.

  1. saltPAD: A New Analytical Tool for Monitoring Salt Iodization in Low Resource Settings

    PubMed Central

    Myers, Nicholas M.; Strydom, Emmerentia Elza; Sweet, James; Sweet, Christopher; Spohrer, Rebecca; Dhansay, Muhammad Ali; Lieberman, Marya

    2016-01-01

    We created a paper test card that measures a common iodizing agent, iodate, in salt. To test the analytical metrics, usability, and robustness of the paper test card when it is used in low resource settings, the South African Medical Research Council and GroundWork performed independent validation studies of the device. The accuracy and precision metrics from both studies were comparable. In the SAMRC study, more than 90% of the test results (n=1704) were correctly classified as corresponding to adequately or inadequately iodized salt. The cards are suitable for market and household surveys to determine whether salt is adequately iodized. Further development of the cards will improve their utility for monitoring salt iodization during production. PMID:29942380

  2. Information filtering on coupled social networks.

    PubMed

    Nie, Da-Cheng; Zhang, Zi-Ke; Zhou, Jun-Lin; Fu, Yan; Zhang, Kui

    2014-01-01

    In this paper, based on the coupled social networks (CSN), we propose a hybrid algorithm to nonlinearly integrate both social and behavior information of online users. The filtering algorithm, based on the coupled social networks, considers the effects of both social similarity and personalized preference. Experimental results based on two real datasets, Epinions and Friendfeed, show that the hybrid pattern can not only provide more accurate recommendations, but also enlarge the recommendation coverage while adopting a global metric. Further empirical analyses demonstrate that the mutual reinforcement and rich-club phenomenon can also be found in coupled social networks, where the identical individuals occupy the core position of the online system. This work may shed some light on the in-depth understanding of the structure and function of coupled social networks.

  3. An index of biological integrity (IBI) for Pacific Northwest rivers

    USGS Publications Warehouse

    Mebane, C.A.; Maret, T.R.; Hughes, R.M.

    2003-01-01

    The index of biotic integrity (IBI) is a commonly used measure of relative aquatic ecosystem condition; however, its application to coldwater rivers over large geographic areas has been limited. A seven-step process was used to construct and test an IBI applicable to fish assemblages in coldwater rivers throughout the U.S. portion of the Pacific Northwest. First, fish data from the region were compiled from previous studies and candidate metrics were selected. Second, reference conditions were estimated from historical reports and minimally disturbed reference sites in the region. Third, data from the upper Snake River basin were used to test metrics and develop the initial index. Fourth, candidate metrics were evaluated for their redundancy, variability, precision, and ability to reflect a wide range of conditions while distinguishing reference sites from disturbed sites. Fifth, the selected metrics were standardized by being scored continuously from 0 to 1 and then weighted as necessary to produce an IBI ranging from 0 to 100. The resulting index included 10 metrics: number of native coldwater species, number of age-classes of sculpins Cottus spp., percentage of sensitive native individuals, percentage of coldwater individuals, percentage of tolerant individuals, number of alien species, percentage of common carp Cyprinus carpio individuals, number of selected salmonid age-classes, catch per unit effort of coldwater individuals, and percentage of individuals with selected anomalies. Sixth, the IBI responses were tested with additional data sets from throughout the Pacific Northwest. Last, scores from two minimally disturbed reference rivers were evaluated for longitudinal gradients along the river continuum. The IBI responded to environmental disturbances and was spatially and temporally stable at over 150 sites in the Pacific Northwest. The results support its use across a large geographic area to describe the relative biological condition of coolwater and coldwater rivers with low species richness.
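
    The construction described above (metrics scored continuously from 0 to 1, optionally weighted, and combined into a 0-100 index) can be sketched as follows; the metric names echo the abstract, but the scoring breakpoints, weights, and site values are illustrative assumptions.

    ```python
    import numpy as np

    def score_metric(value, worst, best):
        """Continuous 0-1 score: 0 at the 'worst' expectation, 1 at the reference
        ('best') expectation, linear in between; works for increasing or decreasing metrics."""
        return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

    def multimetric_index(values, expectations, weights=None):
        """Weighted mean of the 0-1 metric scores, rescaled to a 0-100 index."""
        scores = [score_metric(v, *expectations[name]) for name, v in values.items()]
        w = np.ones(len(scores)) if weights is None else np.asarray(weights, float)
        return 100.0 * float(np.average(scores, weights=w))

    # Illustrative metrics for one site (breakpoints are made up, not the published ones)
    expectations = {
        "native_coldwater_species": (0, 6),     # (worst, best)
        "pct_sensitive_individuals": (0, 80),
        "pct_tolerant_individuals": (60, 0),    # decreasing metric: fewer is better
        "alien_species": (5, 0),
    }
    site = {
        "native_coldwater_species": 4,
        "pct_sensitive_individuals": 35,
        "pct_tolerant_individuals": 20,
        "alien_species": 1,
    }
    print(round(multimetric_index(site, expectations), 1))
    ```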

  4. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
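
    As a rough sketch of the core idea (fit a multiclass LDA transform on labeled spectra, then measure spectral dissimilarity in the transformed space), the snippet below uses scikit-learn's LDA. It omits the graph-based segmentation stage and assumes generic array shapes; it is not the authors' implementation.

```python
# Hedged sketch: learn a linear transform with multiclass LDA from labeled
# training spectra, then compute pixel-to-pixel dissimilarity in the learned
# discriminative subspace. The segmentation step itself is not shown.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def learn_spectral_metric(train_spectra, train_labels):
    """Fit LDA on labeled spectra; return a projection function."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(train_spectra, train_labels)        # (n_samples, n_bands), (n_samples,)
    return lda.transform                        # projects into the LDA subspace

def pairwise_distance(transform, spectrum_a, spectrum_b):
    """Euclidean distance between two spectra in the learned subspace."""
    a = transform(spectrum_a.reshape(1, -1))
    b = transform(spectrum_b.reshape(1, -1))
    return float(np.linalg.norm(a - b))
```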

  5. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.

  6. Exact Harmonic Metric for a Uniformly Moving Schwarzschild Black Hole

    NASA Astrophysics Data System (ADS)

    He, Guan-Sheng; Lin, Wen-Bin

    2014-02-01

    The harmonic metric for a Schwarzschild black hole moving with uniform velocity is presented. In the limit of weak field and low velocity, this metric reduces to the post-Newtonian approximation for a single moving point mass. As an application, we derive the dynamics of particles and photons in the weak-field limit for a moving Schwarzschild black hole with arbitrary velocity. It is found that the relativistic motion of the gravitational source can induce an additional centripetal force on the test particle, which may be comparable to or even larger than the conventional Newtonian gravitational force.

  7. Development of invertebrate community indexes of stream quality for the islands of Maui and Oahu, Hawaii

    USGS Publications Warehouse

    Wolff, Reuben H.

    2012-01-01

    In 2009-10 the U.S. Geological Survey (USGS) collected physical habitat information and benthic macroinvertebrates at 40 wadeable sites on 25 perennial streams on the Island of Maui, Hawaiʻi, to evaluate the relations between the macroinvertebrate assemblages and environmental characteristics and to develop a multimetric invertebrate community index (ICI) that could be used as an indicator of stream quality. The macroinvertebrate community data were used to identify metrics that could best differentiate among sites according to disturbance gradients such as embeddedness, percent fines (silt and sand areal coverage), or percent agricultural land in the contributing basin area. Environmental assessments were conducted using land-use/land-cover data and reach-level physical habitat data. The Maui data were first evaluated using the previously developed Preliminary-Hawaiian Benthic Index of Biotic Integrity (P-HBIBI) to determine if existing metrics would successfully differentiate stream quality among the sites. Secondly, a number of candidate invertebrate metrics were screened and tested and the individual metrics that proved the best at discerning among the sites along one or more disturbance gradients were combined into a multimetric invertebrate community index (ICI) of stream quality. These metrics were: total invertebrate abundance, Class Insecta relative abundance, the ratio of Trichoptera abundance to nonnative Diptera abundance, native snail (hihiwai) presence or absence, native mountain shrimp (ʻōpae) presence or absence, native torrent midge (Telmatogeton spp.) presence or absence, and native Megalagrion damselfly presence or absence. The Maui ICI classified 15 of the 40 sites (37.5 percent) as having "good" quality communities, 17 of the sites (42.5 percent) as having "fair" quality communities, and 8 sites (20 percent) as having "poor" quality communities, a classification that may be used to initiate further investigation into the causes of the poor rating. Additionally, quantitative macroinvertebrate samples collected from 31 randomly selected sites on Oʻahu in 2006-07 as part of the U.S. Environmental Protection Agency's Wadeable Stream Assessment (WSA) were used to refine and develop an ICI of stream quality for Oʻahu. The set of metrics that were included in the revised index were: total invertebrate abundance, Class Insecta relative abundance, the ratio of Trichoptera abundance to nonnative Diptera abundance, turbellarian relative abundance, amphipod relative abundance, nonnative mollusk abundance, and nonnative crayfish (Procambarus clarkii) and/or red cherry shrimp (Neocaridina denticulata sinensis) presence or absence. The Oʻahu ICI classified 10 of the 31 sites (32.3 percent) as "good" quality communities, 16 of the sites (51.6 percent) as "fair" quality communities, and 5 of the sites (16.1 percent) as "poor" quality communities. A reanalysis of 18 of the Oʻahu macroinvertebrate sites used to develop the P-HBIBI resulted in the reclassification of 3 samples. The beginning of a statewide ICI was developed on the basis of a combination of metrics from the Maui and Oʻahu ICIs. This combined ICI is intended to help identify broad problem areas so that the Hawaii State Department of Health (HIDOH) can prioritize their efforts on a statewide scale. Once these problem areas are identified, the island-wide ICIs can be used to more accurately assess the quality of individual stream reaches so that the HIDOH can prioritize their efforts on the most impaired streams.
By using the combined ICI, 70 percent of the Maui sites and 10 percent of the Oʻahu WSA sites were designated as "good" quality sites; 25 percent of the Maui sites and 45 percent of the Oʻahu WSA sites were designated as "fair" quality sites; and 5 percent of the Maui sites and 45 percent of the Oʻahu WSA sites were designated as "poor" quality sites.

  8. Dynamic allocation of attention to metrical and grouping accents in rhythmic sequences.

    PubMed

    Kung, Shu-Jen; Tzeng, Ovid J L; Hung, Daisy L; Wu, Denise H

    2011-04-01

    Most people find it easy to perform rhythmic movements in synchrony with music, which reflects their ability to perceive the temporal periodicity and to allocate attention in time accordingly. Musicians and non-musicians were tested in a click localization paradigm in order to investigate how grouping and metrical accents in metrical rhythms influence attention allocation, and to reveal the effect of musical expertise on such processing. We performed two experiments in which the participants were required to listen to isochronous metrical rhythms containing superimposed clicks and then to localize the click on graphical and ruler-like representations with and without grouping structure information, respectively. Both experiments revealed metrical and grouping influences on click localization. Musical expertise improved the precision of click localization, especially when the click coincided with a metrically strong beat. Critically, although all participants located the click accurately at the beginning of an intensity group, only musicians located it precisely when it coincided with a strong beat at the end of the group. Removal of the visual cue of grouping structures enhanced these effects in musicians and reduced them in non-musicians. These results indicate that musical expertise not only enhances attention to metrical accents but also heightens sensitivity to perceptual grouping.

  9. Newton gauge cosmological perturbations for static spherically symmetric modifications of the de Sitter metric

    NASA Astrophysics Data System (ADS)

    Santa Vélez, Camilo; Enea Romano, Antonio

    2018-05-01

    Static coordinates can be convenient for solving the vacuum Einstein equations in the presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable to describe an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild de Sitter (SDS) metric and then derive general expressions for the Bardeen potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen potentials we then obtain a gauge-invariant definition of the turnaround radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen potentials are then derived for new SSS metrics involving logarithmic, power-law and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics which give flat rotation curves, computing the radial energy density profile in comoving coordinates in the presence of a cosmological constant.

  10. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.

  11. Identifying and mitigating batch effects in whole genome sequencing data.

    PubMed

    Tom, Jennifer A; Reeder, Jens; Forrest, William F; Graham, Robert R; Hunkapiller, Julie; Behrens, Timothy W; Bhangale, Tushar R

    2017-07-24

    Large sample sets of whole genome sequencing with deep coverage are being generated; however, assembling datasets from different sources inevitably introduces batch effects. These batch effects are not well understood and can be due to changes in the sequencing protocol or bioinformatics tools used to process the data. No systematic algorithms or heuristics exist to detect and filter batch effects or remove associations impacted by batch effects in whole genome sequencing data. We describe key quality metrics, provide a freely available software package to compute them, and demonstrate that identification of batch effects is aided by principal components analysis of these metrics. To mitigate batch effects, we developed new site-specific filters that identified and removed variants that falsely associated with the phenotype due to batch effects. These include filtering based on a haplotype-based genotype correction, a differential genotype-quality test, and removal of sites with a missing-genotype rate greater than 30% after setting genotypes with quality scores less than 20 to missing. This method removed 96.1% of unconfirmed genome-wide significant SNP associations and 97.6% of unconfirmed genome-wide significant indel associations. We performed analyses to demonstrate that: 1) these filters impacted variants known to be disease associated, as 2 out of 16 confirmed associations in an AMD candidate SNP analysis were filtered, representing a reduction in power of 12.5%; 2) in the absence of batch effects, these filters removed only a small proportion of variants across the genome (type I error rate of 3%); and 3) in an independent dataset, the method removed 90.2% of unconfirmed genome-wide SNP associations and 89.8% of unconfirmed genome-wide indel associations. Researchers currently do not have effective tools to identify and mitigate batch effects in whole genome sequencing data. We developed and validated methods and filters to address this deficiency.
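
    The two numeric filters quoted above (set genotypes with quality below 20 to missing, then drop sites whose missing-genotype rate exceeds 30%) are straightforward to express on a genotype matrix. The sketch below assumes a simple array layout with missing genotypes coded as -1; the haplotype-based correction and the differential genotype-quality test are not reproduced.

```python
# Sketch of the two site-level numeric filters described above, under an
# assumed (n_sites, n_samples) array layout. Not the authors' software.
import numpy as np

def filter_sites(genotypes, gq, gq_min=20, max_missing_rate=0.30):
    """genotypes, gq : (n_sites, n_samples) arrays; missing genotype = -1."""
    gt = genotypes.astype(float)
    gt[gq < gq_min] = np.nan                    # low-quality calls -> missing
    missing_rate = np.mean(np.isnan(gt) | (gt < 0), axis=1)
    keep = missing_rate <= max_missing_rate     # drop overly missing sites
    return gt[keep], keep
```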

  12. The Neandertal vertebral column 1: the cervical spine.

    PubMed

    Gómez-Olivencia, Asier; Been, Ella; Arsuaga, Juan Luis; Stock, Jay T

    2013-06-01

    This paper provides a metric analysis of the Neandertal cervical spine in relation to modern human variation. All seven cervical vertebrae have been analysed. Metric data from eight Neandertal individuals are compared with a large sample of modern humans. The significance of morphometric differences is tested using both z-scores and two-tailed Wilcoxon signed rank tests. The results identify significant metric and morphological differences between Neandertals and modern humans in all seven cervical vertebrae. Neandertal vertebrae are mediolaterally wider and dorsoventrally longer than modern humans, due in part to longer and more horizontally oriented spinous processes. This suggests that Neandertal cervical morphology was more stable in both mid-sagittal and coronal planes. It is hypothesized that the differences in cranial size and shape in the Neandertal and modern human lineages from their Middle Pleistocene ancestors could account for some of the differences in the neck anatomy between these species. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Usability: Human Research Program - Space Human Factors and Habitability

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Holden, Kritina L.

    2009-01-01

    The Usability project addresses the need for research in the area of metrics and methodologies used in hardware and software usability testing in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation where a participant works through a realistic set of representative tasks using the hardware/software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain in accordance with FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost and time efficient, and be able to be applied without extensive specialized training.

  14. A study for testing the Kerr metric with AGN iron line eclipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cárdenas-Avendaño, Alejandro; Jiang, Jiachen; Bambi, Cosimo, E-mail: alejandro.cardenasa@konradlorenz.edu.co, E-mail: jcjiang12@fudan.edu.cn, E-mail: bambi@fudan.edu.cn

    2016-04-01

    Recently, two of us have studied iron line reverberation mapping to test black hole candidates, showing that the time information in reverberation mapping can better constrain the Kerr metric than the time-integrated approach. Motivated by this finding, here we explore the constraining power of another time-dependent measurement: an AGN iron line eclipse. An obscuring cloud passes between the AGN and the distant observer, covering different parts of the accretion disk at different times. Similar to the reverberation measurement, an eclipse might help to better identify the relativistic effects affecting the X-ray photons. However, this is not what we find. In our study, we employ the Johannsen-Psaltis parametrisation, but we argue that our conclusions hold in a large class of non-Kerr metrics. We explain our results by pointing out an important difference between reverberation and eclipse measurements.

  15. Comparison of the Performance of Noise Metrics as Predictions of the Annoyance of Stage 2 and Stage 3 Aircraft Overflights

    NASA Technical Reports Server (NTRS)

    Pearsons, Karl S.; Howe, Richard R.; Sneddon, Matthew D.; Fidell, Sanford

    1996-01-01

    Thirty audiometrically screened test participants judged the relative annoyance of two comparison (variable level) and thirty-four standard (fixed level) signals in an adaptive paired comparison psychoacoustic study. The signal ensemble included both FAR Part 36 Stage 2 and 3 aircraft overflights, as well as synthesized aircraft noise signatures and other non-aircraft signals. All test signals were presented for judgment as heard indoors, in the presence of continuous background noise, under free-field listening conditions in an anechoic chamber. Analyses of the performance of 30 noise metrics as predictors of these annoyance judgments confirmed that the more complex metrics were generally more accurate and precise predictors than the simpler methods. EPNL was somewhat less accurate and precise as a predictor of the annoyance judgments than a duration-adjusted variant of Zwicker's Loudness Level.

  16. Grid Frequency Extreme Event Analysis and Modeling: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Clark, Kara; Gevorgian, Vahan

    Sudden losses of generation or load can lead to instantaneous changes in electric grid frequency and voltage. Extreme frequency events pose a major threat to grid stability. As renewable energy sources supply power to grids in increasing proportions, it becomes increasingly important to examine when and why extreme events occur to prevent destabilization of the grid. To better understand frequency events, including extrema, historic data were analyzed to fit probability distribution functions to various frequency metrics. Results showed that a standard Cauchy distribution fit the difference between the frequency nadir and prefault frequency (f_(C-A)) metric well, a standard Cauchy distribution fit the settling frequency (f_B) metric well, and a standard normal distribution fit the difference between the settling frequency and frequency nadir (f_(B-C)) metric very well. Results were inconclusive for the frequency nadir (f_C) metric, meaning it likely has a more complex distribution than those tested. This probabilistic modeling should facilitate more realistic modeling of grid faults.
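
    Fitting the distributions named above to per-event metrics is a short exercise with SciPy. The snippet below uses synthetic stand-in arrays for the frequency-event metrics, since the historical data are not part of this record.

```python
# Illustrative fit of Cauchy and normal distributions to frequency-event
# metrics, assuming `f_c_minus_a` and `f_b_minus_c` are 1-D arrays of metric
# values extracted from historical event records (synthetic data here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f_c_minus_a = stats.cauchy.rvs(size=500, random_state=rng)   # stand-in data
f_b_minus_c = stats.norm.rvs(size=500, random_state=rng)     # stand-in data

loc_c, scale_c = stats.cauchy.fit(f_c_minus_a)   # nadir minus prefault frequency
mu, sigma = stats.norm.fit(f_b_minus_c)          # settling frequency minus nadir

print(f"Cauchy fit: loc={loc_c:.3f}, scale={scale_c:.3f}")
print(f"Normal fit: mu={mu:.3f}, sigma={sigma:.3f}")
```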

  17. On Rosen's theory of gravity and cosmology

    NASA Technical Reports Server (NTRS)

    Barnes, R. C.

    1980-01-01

    Formal similarities between general relativity and Rosen's bimetric theory of gravity were used to analyze various bimetric cosmologies. The following results were found: (1) Physically plausible model universes which have a flat static background metric, have a Robertson-Walker fundamental metric, and which allow co-moving coordinates do not exist in bimetric cosmology. (2) It is difficult to use the Robertson-Walker metric for both the background metric (gamma mu nu) and the fundamental metric tensor of Riemannian geometry (g mu nu) and require that g mu nu and gamma mu nu have different time dependences. (3) A consistency relation for using co-moving coordinates in bimetric cosmology was derived. (4) Certain spatially flat bimetric cosmologies of Babala were tested for the presence of particle horizons. (5) An analytic solution for Rosen's k = +1 model was found. (6) Rosen's singularity-free k = +1 model arises from what appears to be an arbitrary choice for the time-dependent part of gamma mu nu.

  18. Further Development of the Assessment of Military Multitasking Performance: Iterative Reliability Testing

    PubMed Central

    McCulloch, Karen L.; Radomski, Mary V.; Finkelstein, Marsha; Cecchini, Amy S.; Davidson, Leslie F.; Heaton, Kristin J.; Smith, Laurel B.; Scherer, Matthew R.

    2017-01-01

    The Assessment of Military Multitasking Performance (AMMP) is a battery of functional dual-tasks and multitasks based on military activities that target known sensorimotor, cognitive, and exertional vulnerabilities after concussion/mild traumatic brain injury (mTBI). The AMMP was developed to help address known limitations in post-concussive return-to-duty assessment and decision making. Once validated, the AMMP is intended for use in combination with other metrics to inform duty-readiness decisions in Active Duty Service Members following concussion. This study used an iterative process of repeated interrater reliability testing and feasibility feedback to drive modifications to the 9 tasks of the original AMMP, which resulted in a final version of 6 tasks with metrics that demonstrated clinically acceptable ICCs of > 0.92 (range of 0.92–1.0) for the 3 dual tasks and > 0.87 (range 0.87–1.0) for the metrics of the 3 multitasks. Three metrics involved in recording subject errors across 2 tasks did not achieve the thresholds set a priori (ICCs above 0.85 for multitasks and above 0.90 for dual tasks), yielding values of 0.64 for the multitask metric and 0.77 and 0.86 for the dual-task metrics, and were not used for further analysis. This iterative process involved 3 phases of testing with between 13 and 26 subjects, ages 18–42 years, tested in each phase from a combined cohort of healthy controls and Service Members with mTBI. Study findings support continued validation of this assessment tool to provide rehabilitation clinicians further return-to-duty assessment methods robust to ceiling effects with strong face validity to injured Warriors and their leaders. PMID:28056045

  19. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
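
    As a generic illustration of the permute-refit-compare logic behind a permutation test of the mediated effect ab, the sketch below estimates a and b by least squares and permutes X to build a null distribution. This is a simplified scheme for illustration only; it does not reproduce the four specific variants (or the confidence-interval constructions) evaluated in the study.

```python
# Minimal sketch of a permutation test of ab in a single-mediator model
# (X -> M -> Y). Permuting X is a simplified null; the study's specific
# test variants and CI methods are not reproduced here.
import numpy as np

def ab_estimate(x, m, y):
    """a from regressing M on X; b from regressing Y on M and X (OLS)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def permutation_test_ab(x, m, y, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = ab_estimate(x, m, y)
    null = np.array([ab_estimate(rng.permutation(x), m, y) for _ in range(n_perm)])
    # Two-sided p-value: share of permuted |ab| at least as extreme as observed
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p
```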

  20. Automatic extraction of protein point mutations using a graph bigram association.

    PubMed

    Lee, Lawrence C; Horn, Florence; Cohen, Fred E

    2007-02-02

    Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method is different from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.

  1. Numerical model validation using experimental data: Application of the area metric on a Francis runner

    NASA Astrophysics Data System (ADS)

    Chatenet, Q.; Tahan, A.; Gagnon, M.; Chamberland-Lauzon, J.

    2016-11-01

    Nowadays, engineers are able to solve complex equations thanks to the increase in computing capacity. Thus, finite element software is widely used, especially in the field of mechanics, to predict part behavior such as strain, stress and natural frequency. However, it can be difficult to determine how a model might be right or wrong, or whether a model is better than another one. Nevertheless, during the design phase, it is very important to estimate how the hydroelectric turbine blades will behave under the stress to which they are subjected. Indeed, the static and dynamic stress levels will influence the blade's fatigue resistance and thus its lifetime, which is a significant feature. In industry, engineers generally use either graphical representation, hypothesis tests such as the Student test, or linear regressions to compare experimental data to estimates from the numerical model. Due to the variability in personal interpretation (reproducibility), graphical validation is not considered objective. For an objective assessment, it is essential to use a robust validation metric to measure the conformity of predictions against data. We propose to use the area metric, which meets the key points of the ASME Standards and produces a quantitative measure of agreement between simulations and empirical data, in the case of a turbine blade. This validation metric excludes any subjective belief or acceptance criterion for a model, which increases robustness. The present work is aimed at applying a validation method according to ASME V&V 10 recommendations. Firstly, the area metric is applied to the case of a real Francis runner whose geometry and boundary conditions are complex. Secondly, the area metric will be compared to classical regression methods to evaluate the performance of the method. Finally, we will discuss the use of the area metric as a tool to correct simulations.
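
    The area metric mentioned above is commonly computed as the area between the empirical CDF of the measurements and the CDF of the model predictions. The sketch below implements that general idea with placeholder data; it is not tied to the Francis runner data or to the ASME V&V 10 bookkeeping of the paper.

```python
# Sketch of an area validation metric: the area between two empirical CDFs
# (model predictions vs. measurements), in the same units as the data.
# Inputs are illustrative placeholders.
import numpy as np

def ecdf(samples, grid):
    """Empirical CDF of `samples` evaluated at the points in `grid`."""
    s = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(s, grid, side="right") / s.size

def area_metric(predictions, measurements):
    grid = np.union1d(predictions, measurements)
    d = np.abs(ecdf(predictions, grid) - ecdf(measurements, grid))
    # Both ECDFs are piecewise constant between grid points, so the integral
    # is an exact sum of rectangle areas over the data range.
    return float(np.sum(d[:-1] * np.diff(grid)))

print(area_metric([98, 102, 105, 110], [100, 103, 104, 108]))  # e.g. strains (MPa)
```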

  2. Health Technology Assessment for Molecular Diagnostics: Practices, Challenges, and Recommendations from the Medical Devices and Diagnostics Special Interest Group.

    PubMed

    Garfield, Susan; Polisena, Julie; S Spinner, Daryl; Postulka, Anne; Y Lu, Christine; Tiwana, Simrandeep K; Faulkner, Eric; Poulios, Nick; Zah, Vladimir; Longacre, Michael

    2016-01-01

    Health technology assessments (HTAs) are increasingly used to inform coverage, access, and utilization of medical technologies including molecular diagnostics (MDx). Although MDx are used to screen patients and inform disease management and treatment decisions, there is no uniform approach to their evaluation by HTA organizations. The International Society for Pharmacoeconomics and Outcomes Research Devices and Diagnostics Special Interest Group reviewed diagnostic-specific HTA programs and identified elements representing common and best practices. MDx-specific HTA programs in Europe, Australia, and North America were characterized by methodology, evaluation framework, and impact. Published MDx HTAs were reviewed, and five representative case studies of test evaluations were developed: United Kingdom (National Institute for Health and Care Excellence's Diagnostics Assessment Programme, epidermal growth factor receptor tyrosine kinase mutation), United States (Palmetto's Molecular Diagnostic Services Program, OncotypeDx prostate cancer test), Germany (Institute for Quality and Efficiency in Healthcare, human papillomavirus testing), Australia (Medical Services Advisory Committee, anaplastic lymphoma kinase testing for non-small cell lung cancer), and Canada (Canadian Agency for Drugs and Technologies in Health, Rapid Response: Non-invasive Prenatal Testing). Overall, the few HTA programs that have MDx-specific methods do not provide clear parameters of acceptability related to clinical and analytic performance, clinical utility, and economic impact. The case studies highlight similarities and differences in evaluation approaches across HTAs in the performance metrics used (analytic and clinical validity, clinical utility), evidence requirements, and how value is measured. Not all HTAs are directly linked to reimbursement outcomes. To improve MDx HTAs, organizations should provide greater transparency, better communication and collaboration between industry and HTA stakeholders, clearer links between HTA and funding decisions, explicit recognition of and rationale for differential approaches to laboratory-developed versus regulatory-approved test, and clear evidence requirements. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Scoring Situational Judgment Tests Using Profile Similarity Metrics

    DTIC Science & Technology

    2010-07-01

    dependability, openness and agreeableness (cf. Yukl, 2002; Bartone, Snook, & Tremble, 2002; Bartone, Eid, Johnsen, Laberg , & Snook, 2009). This reasoning led...provides a continuous scale, and allows the respondent to register subtle differences in their understandings ( Stevens , 1975). Figure 3 portrays...2002; Bartone, Snook, & Tremble, 2002; Bartone, Eid, Johnsen, Laberg , & Snook, 2009), we expected that LKT metrics would also correlate with

  4. 40 CFR 60.85 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the following equation: E = (CQsd) / (PK) where: E = emission rate of acid mist or SO2 kg/metric ton... = volumetric flow rate of the effluent gas, dscm/hr (dscf/hr). P = production rate of 100 percent H2SO4, metric ton/hr (ton/hr). K = conversion factor, 1000 g/kg (1.0 lb/lb). (2) Method 8 shall be used to determine...
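
    As a worked example of the emission-rate relation quoted above, E = (C Qsd) / (P K), the snippet below plugs in illustrative numbers. Because the excerpt truncates the definition of C, its unit (g/dscm) is an assumption here, as are all the example values; they are not regulatory figures.

```python
# Worked example of E = (C * Qsd) / (P * K) with illustrative, non-regulatory
# numbers. The unit of C (g/dscm) is assumed; the CFR excerpt truncates it.
def emission_rate(c, q_sd, p, k=1000.0):
    """E in kg per metric ton of 100% H2SO4.

    c    : pollutant concentration, assumed g/dscm
    q_sd : volumetric flow rate of effluent gas, dscm/hr
    p    : production rate of 100% H2SO4, metric ton/hr
    k    : conversion factor, 1000 g/kg
    """
    return (c * q_sd) / (p * k)

print(emission_rate(c=0.05, q_sd=40000.0, p=20.0))  # -> 0.1 kg/metric ton
```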

  5. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... production rate, metric ton/hr (ton/hr) or 100 percent nitric acid. K=conversion factor, 1000 g/kg (1.0 lb/lb...=emission rate of NOX as NO2, kg/metric ton (lb/ton) of 100 percent nitric acid. Cs = concentration of NOX... over the production system shall be used to confirm the production rate. (c) The owner or operator may...

  6. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rate, metric ton/hr (ton/hr) or 100 percent nitric acid. K=conversion factor, 1000 g/kg (1.0 lb/lb). (2...=emission rate of NOX as NO2, kg/metric ton (lb/ton) of 100 percent nitric acid. Cs=concentration of NOX as... system shall be used to confirm the production rate. (c) The owner or operator may use the following as...

  7. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rate, metric ton/hr (ton/hr) or 100 percent nitric acid. K=conversion factor, 1000 g/kg (1.0 lb/lb). (2...=emission rate of NOX as NO2, kg/metric ton (lb/ton) of 100 percent nitric acid. Cs=concentration of NOX as... system shall be used to confirm the production rate. (c) The owner or operator may use the following as...

  8. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rate, metric ton/hr (ton/hr) or 100 percent nitric acid. K=conversion factor, 1000 g/kg (1.0 lb/lb). (2...=emission rate of NOX as NO2, kg/metric ton (lb/ton) of 100 percent nitric acid. Cs=concentration of NOX as... system shall be used to confirm the production rate. (c) The owner or operator may use the following as...

  9. A Metric to Evaluate Mobile Satellite Systems

    NASA Technical Reports Server (NTRS)

    Young, Elizabeth L.

    1997-01-01

    The concept of a "cost per billable minute" methodology to analyze mobile satellite systems is reviewed. Certain assumptions, notably those about the marketplace and regulatory policies, may need to be revisited. Fading and power control assumptions need to be tested. Overall, the metric would seem to have value in the design phase of a system and for comparisons between and among alternative systems.

  10. Time-Indexed Effect Size Metric for K-12 Reading and Math Education Evaluation

    ERIC Educational Resources Information Center

    Lee, Jaekyung; Finn, Jeremy; Liu, Xiaoyan

    2011-01-01

    Through a synthesis of test publisher norms and national longitudinal datasets, this study provides new national norms of academic growth in K-12 reading and math that can be used to reinterpret conventional effect sizes in time units. We propose d', a time-indexed effect size metric, to estimate how long it would take for an "untreated"…

  11. Sensitivity, Specificity, and Predictive Values: Foundations, Pliabilities, and Pitfalls in Research and Practice

    PubMed Central

    Trevethan, Robert

    2017-01-01

    Within the context of screening tests, it is important to avoid misconceptions about sensitivity, specificity, and predictive values. In this article, therefore, foundations are first established concerning these metrics along with the first of several aspects of pliability that should be recognized in relation to those metrics. Clarification is then provided about the definitions of sensitivity, specificity, and predictive values and why researchers and clinicians can misunderstand and misrepresent them. Arguments are made that sensitivity and specificity should usually be applied only in the context of describing a screening test’s attributes relative to a reference standard; that predictive values are more appropriate and informative in actual screening contexts, but that sensitivity and specificity can be used for screening decisions about individual people if they are extremely high; that predictive values need not always be high and might be used to advantage by adjusting the sensitivity and specificity of screening tests; that, in screening contexts, researchers should provide information about all four metrics and how they were derived; and that, where necessary, consumers of health research should have the skills to interpret those metrics effectively for maximum benefit to clients and the healthcare system. PMID:29209603
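
    The four quantities discussed above follow directly from the counts of a screening test against a reference standard. The short reference implementation below uses hypothetical counts to show the low-prevalence effect on predictive values that the article cautions about.

```python
# Reference calculation of sensitivity, specificity, PPV, and NPV from raw
# screening counts. The counts in the example are hypothetical.
def screening_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # proportion of condition-positive people detected
    specificity = tn / (tn + fp)   # proportion of condition-negative people cleared
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Low-prevalence example: sensitivity 0.90 and specificity 0.95,
# yet PPV is only about 0.15 because true positives are rare.
print(screening_metrics(tp=90, fn=10, fp=495, tn=9405))
```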

  12. Detailed Vibration Analysis of Pinion Gear with Time-Frequency Methods

    NASA Technical Reports Server (NTRS)

    Mosher, Marianne; Pryor, Anna H.; Lewicki, David G.

    2003-01-01

    In this paper, the authors show a detailed analysis of the vibration signal from the destructive testing of a spiral bevel gear and pinion pair containing seeded faults. The vibration signal is analyzed in the time domain, frequency domain and with four time-frequency transforms: the Short Time Frequency Transform (STFT), the Wigner-Ville Distribution with the Choi-Williams kernel (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels and damage conditions, are analyzed using these methods. A new metric for automatic anomaly detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the time-frequency transforms, as well as time and frequency representations, on this data set. Analysis with the CWT detects changes in the signal at low torque levels not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic anomaly detection and to develop fault detection methods for the metric.

  13. Linkage of the Third National Health and Nutrition Examination Survey to air quality data.

    PubMed

    Kravets, Nataliya; Parker, Jennifer D

    2008-11-01

    This report describes the linked data file obtained as a result of combining air pollution data and National Health and Nutrition Examination Survey (NHANES) III data. Average annual air pollution exposures to particulate matter consisting of particles smaller than 10 micrometers in diameter (PM10), sulfur dioxide (SO2), nitrogen dioxide (NO2), and carbon monoxide (CO) were created for NHANES III examined persons by averaging values from monitors within a 5-, 10-, 15-, and 20-mile radius from the block-group centroid of their residence and in the county of their residence. Percentage records geocoded to block-group level, percentage records linked to air pollution, and distributions of exposure values were estimated for the total sample and various demographic groups. The percentages of respondents who were assigned countywide air pollution values ranges from a low of 43 percent in the case of NO2 data to a high of 68 percent in the case of PM10 data. Among the pollutants considered, PM10 data provides the best coverage. Of all the metrics created, the highest coverage is achieved by averaging readings of monitors located within a 20-mile distance from the centroid of respondents' block groups. Among the demographic variables analyzed, differences in air pollution coverage and exposure levels occur most often among groups defined by race and Hispanic origin, region, and county level of urbanization. However, differences among groups depend on the pollutant and geographic linkage method. The linked dataset provides researchers with opportunities to investigate the relationship between air pollution and various health outcomes.

  14. Robustness Metrics: How Are They Calculated, When Should They Be Used and Why Do They Give Different Results?

    NASA Astrophysics Data System (ADS)

    McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.

    2018-02-01

    Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.

  15. Applying the disability-adjusted life year to track health impact of social franchise programs in low- and middle-income countries

    PubMed Central

    2013-01-01

    Background: Developing effective methods for measuring the health impact of social franchising programs is vital for demonstrating the value of this innovative service delivery model, particularly given its rapid expansion worldwide. Currently, these programs define success through patient volume and number of outlets, widely acknowledged as poor reflections of true program impact. An existing metric, the disability-adjusted life years averted (DALYs averted), offers promise as a measure of projected impact. Country-specific and service-specific, DALYs averted enables impact comparisons between programs operating in different contexts. This study explores the use of DALYs averted as a social franchise performance metric. Methods: Using data collected by the Social Franchising Compendia in 2010 and 2011, we compared franchise performance, analyzing by region and program area. Coefficients produced by Population Services International converted each franchise's service delivery data into DALYs averted. For the 32 networks with two years of data corresponding to these metrics, a paired t-test compared all metrics. Finally, to test data reporting quality, we compared services provided to patient volume. Results: Social franchising programs grew considerably from 2010 to 2011, measured by services provided (215%), patient volume (31%), and impact (couple-years of protection (CYPs): 86% and DALYs averted: 519%), but not by the total number of outlets. Non-family planning services increased by 857%, with diversification centered in Asia and Africa. However, paired t-test comparisons showed no significant increase within the networks, whether categorized as family planning or non-family planning. The ratio of services provided to patient visits yielded considerable range, with one network reporting a ratio of 16,000:1. Conclusion: In theory, the DALYs averted metric is a more robust and comprehensive metric for social franchising than current program measures. As social franchising spreads beyond family planning, having a metric that captures the impact of a range of diverse services and allows comparisons will be increasingly important. However, standardizing reporting will be essential to make such comparisons useful. While not widespread, errors in self-reported data appear to have included social marketing distribution data in social franchising reporting, requiring clearer data collection and reporting guidelines. Differences noted above must be interpreted cautiously as a result. PMID:23902679

  16. Applying the disability-adjusted life year to track health impact of social franchise programs in low- and middle-income countries.

    PubMed

    Montagu, Dominic; Ngamkitpaiboon, Lek; Duvall, Susan; Ratcliffe, Amy

    2013-01-01

    Developing effective methods for measuring the health impact of social franchising programs is vital for demonstrating the value of this innovative service delivery model, particularly given its rapid expansion worldwide. Currently, these programs define success through patient volume and number of outlets, widely acknowledged as poor reflections of true program impact. An existing metric, the disability-adjusted life years averted (DALYs averted), offers promise as a measure of projected impact. Country-specific and service-specific, DALYs averted enables impact comparisons between programs operating in different contexts. This study explores the use of DALYs averted as a social franchise performance metric. Using data collected by the Social Franchising Compendia in 2010 and 2011, we compared franchise performance, analyzing by region and program area. Coefficients produced by Population Services International converted each franchise's service delivery data into DALYs averted. For the 32 networks with two years of data corresponding to these metrics, a paired t-test compared all metrics. Finally, to test data reporting quality, we compared services provided to patient volume. Social franchising programs grew considerably from 2010 to 2011, measured by services provided (215%), patient volume (31%), and impact (couple-years of protection (CYPs): 86% and DALYs averted: 519%), but not by the total number of outlets. Non-family planning services increased by 857%, with diversification centered in Asia and Africa. However, paired t-test comparisons showed no significant increase within the networks, whether categorized as family planning or non-family planning. The ratio of services provided to patient visits yielded considerable range, with one network reporting a ratio of 16,000:1. In theory, the DALYs averted metric is a more robust and comprehensive metric for social franchising than current program measures. As social franchising spreads beyond family planning, having a metric that captures the impact of a range of diverse services and allows comparisons will be increasingly important. However, standardizing reporting will be essential to make such comparisons useful. While not widespread, errors in self-reported data appear to have included social marketing distribution data in social franchising reporting, requiring clearer data collection and reporting guidelines. Differences noted above must be interpreted cautiously as a result.

  17. Magnification effect of Kerr metric by configurations of collisionless particles in non-isotropic kinetic equilibria

    NASA Astrophysics Data System (ADS)

    Cremaschini, Claudio; Stuchlík, Zdeněk

    2018-05-01

    A test fluid composed of relativistic collisionless neutral particles in the background of Kerr metric is expected to generate non-isotropic equilibrium configurations in which the corresponding stress-energy tensor exhibits pressure and temperature anisotropies. This arises as a consequence of the constraints placed on single-particle dynamics by Killing tensor symmetries, leading to a peculiar non-Maxwellian functional form of the kinetic distribution function describing the continuum system. Based on this outcome, in this paper the generation of Kerr-like metric by collisionless N -body systems of neutral matter orbiting in the field of a rotating black hole is reported. The result is obtained in the framework of covariant kinetic theory by solving the Einstein equations in terms of an analytical perturbative treatment whereby the gravitational field is decomposed as a prescribed background metric tensor described by the Kerr solution plus a self-field correction. The latter one is generated by the uncharged fluid at equilibrium and satisfies the linearized Einstein equations having the non-isotropic stress-energy tensor as source term. It is shown that the resulting self-metric is again of Kerr type, providing a mechanism of magnification of the background metric tensor and its qualitative features.

  18. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

    Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and with a visually and PSNR-optimal JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.

  19. Technical Interchange Meeting Guidelines Breakout

    NASA Technical Reports Server (NTRS)

    Fong, Rob

    2002-01-01

    Along with concept developers, the Systems Evaluation and Assessment (SEA) sub-element of VAMS will develop the scenarios and metrics required for testing the new concepts that reside within the System-Level Integrated Concepts (SLIC) sub-element of the VAMS project. These concepts will come from the NRA process, Space Act agreements, a university group, and other NASA researchers. The emphasis of these concepts is to increase capacity while at least maintaining the current safety level. The concept providers will initially develop their own scenarios and metrics for self-evaluation. In about a year, the SEA sub-element will become responsible for conducting initial evaluations of the concepts using a common scenario and metric set. This set may derive many components from the scenarios and metrics used by the concept providers. Ultimately, the common scenario/metric set will be used to help determine the most feasible and beneficial concepts. A set of 15 questions and issues, discussed below, pertaining to the scenario and metric set and its use for assessing concepts, was submitted by the SEA sub-element for consideration during the breakout session. The questions were divided among the three breakout groups. Each breakout group deliberated on its set of questions and provided a report on its discussion.

  20. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms localize the camera by mapping its environment as a point cloud built from visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, such as a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, such as the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, such as traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It is shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
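
    The abstract describes fusing several individual scale estimates but not the fusion rule itself, so the sketch below uses inverse-variance weighting as one plausible choice. The cue names, numbers, and weighting scheme are assumptions for illustration, not the paper's method.

```python
# Hypothetical fusion of individual metric-scale estimates (e.g., from lane
# width, room height, traffic-sign size) via inverse-variance weighting.
# The weighting scheme is an illustrative assumption.
import numpy as np

def fuse_scales(scale_values, scale_sigmas):
    """Combine individual scale estimates into one fused value and uncertainty."""
    scales = np.asarray(scale_values, dtype=float)
    w = 1.0 / np.asarray(scale_sigmas, dtype=float) ** 2   # inverse-variance weights
    fused = np.sum(w * scales) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# Example with three hypothetical cues and their assumed uncertainties
print(fuse_scales([0.052, 0.048, 0.050], [0.004, 0.006, 0.002]))
```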

  1. Auralization of NASA N+2 Aircraft Concepts from System Noise Predictions

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Burley, Casey L.; Thomas, Russel H.

    2016-01-01

    Auralization of aircraft flyover noise provides an auditory experience that complements integrated metrics obtained from system noise predictions. Recent efforts have focused on auralization methods development, specifically the process by which source noise information obtained from semi-empirical models, computational aeroacoustic analyses, and wind tunnel and flight test data, are used for simulated flyover noise at a receiver on the ground. The primary focus of this work, however, is to develop full vehicle auralizations in order to explore the distinguishing features of NASA's N+2 aircraft vis-à-vis current fleet reference vehicles for single-aisle and large twin-aisle classes. Some features can be seen in metric time histories associated with aircraft noise certification, e.g., tone-corrected perceived noise level used in the calculation of effective perceived noise level. Other features can be observed in sound quality metrics, e.g., loudness, sharpness, roughness, fluctuation strength and tone-to-noise ratio. A psychoacoustic annoyance model is employed to establish the relationship between sound quality metrics and noise certification metrics. Finally, the auralizations will serve as the basis for a separate psychoacoustic study aimed at assessing how well aircraft noise certification metrics predict human annoyance for these advanced vehicle concepts.

  2. Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.

    PubMed

    Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K

    2018-06-15

    The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
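
    The statistical comparison described above (per-metric Mann-Whitney U tests between expert and novice groups, with a Bonferroni-corrected alpha) can be reproduced in a few lines. The data arrays below are placeholders, not the study's measurements.

```python
# Sketch of the expert-vs-novice comparison: one Mann-Whitney U test per
# automated metric, with a Bonferroni-corrected significance threshold.
# Example values are placeholders.
from scipy.stats import mannwhitneyu

metrics = {
    "time_to_completion_s": ([41, 38, 45, 52], [88, 95, 76, 102]),   # experts, novices
    "surgical_errors":      ([0, 1, 0, 1],     [3, 2, 4, 2]),
}

alpha = 0.05 / len(metrics)   # Bonferroni-corrected threshold
for name, (experts, novices) in metrics.items():
    stat, p = mannwhitneyu(experts, novices, alternative="two-sided")
    print(f"{name}: U={stat:.1f}, p={p:.4f}, significant={p < alpha}")
```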

  3. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these observations, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
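
    As a rough illustration of the kind of metric described above, the sketch below computes a UCIQE-style score as a weighted sum of chroma spread, luminance contrast, and mean saturation in CIELab. The weights and the saturation proxy are placeholders (the abstract does not give the published coefficients), so this is a minimal sketch rather than the authors' implementation.

    ```python
    import numpy as np
    from skimage import color

    def uciqe_like(rgb, w=(1.0, 1.0, 1.0)):
        """UCIQE-style score: a weighted sum of chroma variation, luminance
        contrast, and mean saturation in CIELab. Weights are placeholders,
        not the published coefficients."""
        lab = color.rgb2lab(rgb)                      # expects RGB floats in [0, 1]
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        chroma = np.sqrt(a ** 2 + b ** 2)
        sigma_c = chroma.std()                              # colour-cast / chroma spread term
        con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast term
        mu_s = (chroma / (L + 1e-6)).mean()                 # mean saturation proxy
        return w[0] * sigma_c + w[1] * con_l + w[2] * mu_s

    img = np.random.default_rng(0).random((64, 64, 3))      # placeholder RGB image
    print(f"UCIQE-like score: {uciqe_like(img):.2f}")
    ```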

  4. Influenza Vaccination Coverage Rate according to the Pulmonary Function of Korean Adults Aged 40 Years and Over: Analysis of the Fifth Korean National Health and Nutrition Examination Survey

    PubMed Central

    2016-01-01

    Influenza vaccination is an effective strategy to reduce morbidity and mortality, particularly for those with decreased lung function. This study aimed to identify the factors that affect vaccination coverage according to the results of pulmonary function tests, by age group. In this cross-sectional study, data were obtained from 3,224 adults over the age of 40 who participated in the fifth National Health and Nutrition Examination Survey and underwent pulmonary function testing in 2012. To identify the factors that affect vaccination rate, logistic regression analysis was conducted after dividing the subjects into two groups at age 65. Influenza vaccination coverage among all subjects was 45.2%, and 76.8% for those aged 65 and over. The group with abnormal pulmonary function had a higher vaccination rate than the normal group, but any pulmonary dysfunction or history of COPD did not affect the vaccination coverage in the multivariate analysis. Subjects aged 40-64 years had higher vaccination coverage when they were less educated, had restricted activity levels, received health screenings, or had chronic diseases. Those aged 65 and over had significantly higher vaccination coverage only when they received regular health screenings. Any pulmonary dysfunction or having COPD showed no significant correlation with the vaccination coverage in the Korean adult population. PMID:27134491

  5. Reconstructing the metric of the local Universe from number counts observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallejo, Sergio Andres; Romano, Antonio Enea, E-mail: antonio.enea.romano@cern.ch

    Number counts observations available with new surveys such as the Euclid mission will be an important source of information about the metric of the Universe. We compute the low red-shift expansion for the energy density and the density contrast using an exact spherically symmetric solution in the presence of a cosmological constant. At low red-shift the expansion is more precise than the linear perturbation theory prediction. We then use the local expansion to reconstruct the metric from the monopole of the density contrast. We test the inversion method using numerical calculations and find a good agreement within the regime of validity of the red-shift expansion. The method could be applied to observational data to reconstruct the metric of the local Universe with a level of precision higher than the one achievable using perturbation theory.

  6. Photogrammetry using Apollo 16 orbital photography, part B

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.

    1972-01-01

    Discussion is made of the Apollo 15 and 16 metric and panoramic cameras which provided photographs for accurate topographic portrayal of the lunar surface using photogrammetric methods. Nine stereoscopic models of Apollo 16 metric photographs and three models of panoramic photographs were evaluated photogrammetrically in support of the Apollo 16 geologic investigations. Four of the models were used to collect profile data for crater morphology studies; three models were used to collect evaluation data for the frequency distributions of lunar slopes; one model was used to prepare a map of the Apollo 16 traverse area; and one model was used to determine elevations of the Cayley Formation. The remaining three models were used to test photogrammetric techniques using oblique metric and panoramic camera photographs. Two preliminary contour maps were compiled and a high-oblique metric photograph was rectified.

  7. Is there a clinical benefit with a smooth compensator design compared with a plunged compensator design for passive scattered protons?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabibian, Art A., E-mail: art.tabibian@gmail.com; Powers, Adam; Dolormente, Keith

    In proton therapy, passive scattered proton plans use compensators to conform the dose to the distal surface of the planning volume. These devices are custom made from acrylic or wax for each treatment field using either a plunge-drilled or smooth-milled compensator design. The purpose of this study was to investigate whether there is a clinical benefit to generating passive scattered proton radiation treatment plans with the smooth compensator design. We generated 4 plans with different techniques using the smooth compensators. We chose 5 sites and 5 patients per site to provide an adequate sample across the range of dosimetric effects. The plans were compared and evaluated using multicriteria (MCA) plan quality metrics with the Quality Reports [EMR] technology by Canis Lupus LLC. The average absolute difference for dosimetric metrics from the plunged-depth plan ranged from −4.7 to +3.0 and the average absolute performance results ranged from −6.6% to +3%. The manually edited smooth compensator plan yielded the best dosimetric metric, +3.0, and performance, +3.0%, compared to the plunged-depth plan. It was also superior to the other smooth compensator plans. Our results indicate that there are multiple approaches to achieve plans with smooth compensators similar to the plunged-depth plans. The smooth compensators with manual compensator edits yielded equal or better target coverage and normal tissue (NT) doses compared with the other smooth compensator techniques. Further studies are under way to evaluate the robustness of the smooth compensator design.

  8. Spatial partitioning of environmental correlates of avian biodiversity in the conterminous United States

    USGS Publications Warehouse

    O'Connor, R.J.; Jones, M.T.; White, D.; Hunsaker, C.; Loveland, Tom; Jones, Bruce; Preston, E.

    1996-01-01

    Classification and regression tree (CART) analysis was used to create hierarchically organized models of the distribution of bird species richness across the conterminous United States. Species richness data were taken from the Breeding Bird Survey and were related to climatic and land use data. We used a systematic spatial grid of approximately 12,500 hexagons, each approximately 640 square kilometres in area. Within each hexagon land use was characterized by the Loveland et al. land cover classification based on Advanced Very High Resolution Radiometer (AVHRR) data from NOAA polar orbiting meteorological satellites. These data were aggregated to yield fourteen land classes equivalent to an Anderson level II coverage; urban areas were added from the Digital Chart of the World. Each hexagon was characterized by climate data and landscape pattern metrics calculated from the land cover. A CART model then related the variation in species richness across the 1162 hexagons for which bird species richness data were available to the independent variables, yielding an R2-type goodness of fit metric of 47.5% deviance explained. The resulting model recognized eleven groups of hexagons, with species richness within each group determined by unique sequences of hierarchically constrained independent variables. Within the hierarchy, climate data accounted for more variability in the bird data, followed by land cover proportion, and then pattern metrics. The model was then used to predict species richness in all 12,500 hexagons of the conterminous United States yielding a map of the distribution of these eleven classes of bird species richness as determined by the environmental correlates. The potential for using this technique to interface biogeographic theory with the hierarchy theory of ecology is discussed. ?? 1996 Blackwell Science Ltd.
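
    A minimal sketch of the CART-style modelling described above, using scikit-learn on synthetic hexagon-level data; the predictor names, sample values, and tree settings (eleven terminal groups) are illustrative assumptions, not the study's actual variables.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Synthetic hexagon data: climate, land-cover, and landscape-pattern predictors.
    rng = np.random.default_rng(0)
    n_hex = 1162
    X = np.column_stack([
        rng.normal(12, 6, n_hex),      # mean annual temperature (illustrative)
        rng.normal(800, 300, n_hex),   # annual precipitation (illustrative)
        rng.uniform(0, 1, n_hex),      # proportion of forest land cover
        rng.uniform(0, 1, n_hex),      # landscape pattern metric, e.g., edge density
    ])
    richness = 40 + 1.5 * X[:, 0] + 0.01 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 5, n_hex)

    # CART regression constrained to eleven terminal groups, echoing the study design.
    tree = DecisionTreeRegressor(max_leaf_nodes=11, min_samples_leaf=30).fit(X, richness)
    r2 = tree.score(X, richness)       # R^2-type goodness-of-fit measure
    print(f"terminal groups: {tree.get_n_leaves()}, deviance explained: {r2:.2f}")
    ```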

  9. A neural networks-based hybrid routing protocol for wireless mesh networks.

    PubMed

    Kojić, Nenad; Reljin, Irini; Reljin, Branimir

    2012-01-01

    The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding and creating the new routing metric. We suggest the use of artificial logic-i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance.
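
    The multicriteria routing metric is described only at a high level, so the sketch below shows one plausible form: a weighted combination of normalized delay and blocking probability used to rank candidate routes. The weights, normalization scale, and Route fields are illustrative assumptions, not the paper's Hopfield-network formulation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Route:
        path: list          # sequence of node ids
        delay_ms: float     # end-to-end delay estimate
        p_block: float      # blocking / retransmission probability estimate

    def route_cost(r: Route, w_delay: float = 0.5, w_block: float = 0.5,
                   delay_scale_ms: float = 100.0) -> float:
        """Multicriteria cost: weighted sum of normalized delay and blocking
        probability (weights and scale are illustrative)."""
        return w_delay * (r.delay_ms / delay_scale_ms) + w_block * r.p_block

    candidates = [Route([1, 4, 7], 35.0, 0.08), Route([1, 2, 5, 7], 50.0, 0.02)]
    best = min(candidates, key=route_cost)   # pick the lowest-cost route
    print(best.path)
    ```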

  10. A Neural Networks-Based Hybrid Routing Protocol for Wireless Mesh Networks

    PubMed Central

    Kojić, Nenad; Reljin, Irini; Reljin, Branimir

    2012-01-01

    The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding and creating the new routing metric. We suggest the use of artificial logic—i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance. PMID:22969360

  11. Possible causes of data model discrepancy in the temperature history of the last Millennium.

    PubMed

    Neukom, Raphael; Schurer, Andrew P; Steiger, Nathan J; Hegerl, Gabriele C

    2018-05-15

    Model simulations and proxy-based reconstructions are the main tools for quantifying pre-instrumental climate variations. For some metrics such as Northern Hemisphere mean temperatures, there is remarkable agreement between models and reconstructions. For other diagnostics, such as the regional response to volcanic eruptions, or hemispheric temperature differences, substantial disagreements between data and models have been reported. Here, we assess the potential sources of these discrepancies by comparing 1000-year hemispheric temperature reconstructions based on real-world paleoclimate proxies with climate-model-based pseudoproxies. These pseudoproxy experiments (PPE) indicate that noise inherent in proxy records and the unequal spatial distribution of proxy data are the key factors in explaining the data-model differences. For example, lower inter-hemispheric correlations in reconstructions can be fully accounted for by these factors in the PPE. Noise and data sampling also partly explain the reduced amplitude of the response to external forcing in reconstructions compared to models. For other metrics, such as inter-hemispheric differences, some, although reduced, discrepancy remains. Our results suggest that improving proxy data quality and spatial coverage is the key factor to increase the quality of future climate reconstructions, while the total number of proxy records and reconstruction methodology play a smaller role.

  12. Visibility of medical informatics regarding bibliometric indices and databases

    PubMed Central

    2011-01-01

    Background The quantitative study of publication output (bibliometrics) deeply influences how scientific work is perceived (bibliometric visibility). Recently, new bibliometric indices and databases have been established, which may change the visibility of disciplines, institutions and individuals. This study examines the effects of the new indices on the visibility of Medical Informatics. Methods By objective criteria, three sets of journals are chosen, two representing Medical Informatics and a third addressing Internal Medicine as a benchmark. The availability of index data (index coverage) and the aggregate scores of these corpora are compared for journal-related (Journal impact factor, Eigenfactor metrics, SCImago journal rank) and author-related indices (Hirsch-index, Egghe's g-index). Correlation analysis compares the dependence of author-related indices. Results The bibliometric visibility depended on the research focus and the citation database: Scopus covers more journals relevant for Medical Informatics than ISI/Thomson Reuters. Journals focused on Medical Informatics' methodology were negatively affected by the Eigenfactor metrics, while the visibility profited from an interdisciplinary research focus. The correlation between Hirsch-indices computed on citation databases and the Internet was strong. Conclusions The visibility of smaller technology-oriented disciplines like Medical Informatics is changed by the new bibliometric indices and databases, possibly leading to suitably adjusted publication strategies. Freely accessible author-related indices enable an easy and adequate individual assessment. PMID:21496230
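
    The author-related indices named above are well defined and easy to compute from a citation list; the sketch below implements the commonly cited definitions of the Hirsch h-index and Egghe's g-index (independent of the study's data).

    ```python
    def h_index(citations):
        """Largest h such that at least h papers have >= h citations."""
        c = sorted(citations, reverse=True)
        return sum(1 for i, x in enumerate(c, start=1) if x >= i)

    def g_index(citations):
        """Largest g such that the g most-cited papers together have >= g^2 citations."""
        c = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, x in enumerate(c, start=1):
            total += x
            if total >= i * i:
                g = i
        return g

    cites = [10, 8, 5, 4, 3]
    print(h_index(cites), g_index(cites))   # 4 5
    ```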

  13. Ares I Static Tests Design

    NASA Technical Reports Server (NTRS)

    Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.

    2009-01-01

    Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.

  14. Metrics for the National SCADA Test Bed Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, Philip A.; Mortensen, J.; Dagle, Jeffery E.

    2008-12-05

    The U.S. Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) National SCADA Test Bed (NSTB) Program is providing valuable inputs into the electric industry by performing topical research and development (R&D) to secure next generation and legacy control systems. In addition, the program conducts vulnerability and risk analysis, develops tools, and performs industry liaison, outreach and awareness activities. These activities will enhance the secure and reliable delivery of energy for the United States. This report will describe metrics that could be utilized to provide feedback to help enhance the effectiveness of the NSTB Program.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broderick, Avery E.; Johannsen, Tim; Loeb, Abraham

    The advent of the Event Horizon Telescope (EHT), a millimeter-wave very long baseline interferometric array, has enabled spatially resolved studies of the subhorizon-scale structure for a handful of supermassive black holes. Among these, the supermassive black hole at the center of the Milky Way, Sagittarius A* (Sgr A*), presents the largest angular cross section. Thus far, these studies have focused on measurements of the black hole spin and the validation of low-luminosity accretion models. However, a critical input in the analysis of EHT data is the structure of the black hole spacetime, and thus these observations provide the novel opportunity to test the applicability of the Kerr metric to astrophysical black holes. Here we present the first simulated images of a radiatively inefficient accretion flow (RIAF) around Sgr A* employing a quasi-Kerr metric that contains an independent quadrupole moment in addition to the mass and spin that fully characterize a black hole in general relativity. We show that these images can be significantly different from the images of an RIAF around a Kerr black hole with the same spin and demonstrate the feasibility of testing the no-hair theorem by constraining the quadrupolar deviation from the Kerr metric with existing EHT data. Equally important, we find that the disk inclination and spin orientation angles are robust to the inclusion of additional parameters, providing confidence in previous estimations assuming the Kerr metric based on EHT observations. However, at present, the limits on potential modifications of the Kerr metric remain weak.

  16. Effect of respiratory and cardiac gating on the major diffusion-imaging metrics.

    PubMed

    Hamaguchi, Hiroyuki; Tha, Khin Khin; Sugimori, Hiroyuki; Nakanishi, Mitsuhiro; Nakagawa, Shin; Fujiwara, Taro; Yoshida, Hirokazu; Takamori, Sayaka; Shirato, Hiroki

    2016-08-01

    The effect of respiratory gating on the major diffusion-imaging metrics and that of cardiac gating on mean kurtosis (MK) are not known. To evaluate whether the major diffusion-imaging metrics of the brain (MK, fractional anisotropy (FA), and mean diffusivity (MD)) varied between gated and non-gated acquisitions, respiratory-gated, cardiac-gated, and non-gated diffusion-imaging of the brain were performed in 10 healthy volunteers. MK, FA, and MD maps were constructed for all acquisitions, and their histograms were computed. The normalized peak height and location of the histograms were compared among the acquisitions by use of Friedman and post hoc Wilcoxon tests. The effect of the repetition time (TR) on the diffusion-imaging metrics was also tested, and we corrected for its variation among acquisitions, if necessary. The results showed a shift in the peak location of the MK and MD histograms to the right with an increase in TR (p ≤ 0.01). The corrected peak location of the MK histograms, the normalized peak height of the FA histograms, and the normalized peak height and corrected peak location of the MD histograms varied significantly between the gated and non-gated acquisitions (p < 0.05). These results imply an influence of respiration and cardiac pulsation on the major diffusion-imaging metrics. The gating conditions must be kept identical if reproducible results are to be achieved. © The Author(s) 2016.
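
    The statistical comparison described above (a Friedman test followed by post hoc Wilcoxon tests across paired acquisitions) can be sketched as follows; the volunteer values are synthetic and the metric shown is arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic histogram-derived metric (e.g., MD peak location) for the same
    # 10 volunteers under three acquisition conditions (paired design).
    rng = np.random.default_rng(1)
    non_gated  = rng.normal(0.85, 0.03, 10)
    resp_gated = non_gated + rng.normal(0.01, 0.01, 10)
    card_gated = non_gated + rng.normal(0.02, 0.01, 10)

    stat, p = stats.friedmanchisquare(non_gated, resp_gated, card_gated)
    print(f"Friedman p = {p:.3f}")
    if p < 0.05:
        # Post hoc pairwise Wilcoxon signed-rank tests against the non-gated data
        for name, arr in [("resp", resp_gated), ("card", card_gated)]:
            w, pw = stats.wilcoxon(non_gated, arr)
            print(f"non-gated vs {name}-gated: p = {pw:.3f}")
    ```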

  17. An Evaluation of Output Signal to Noise Ratio as a Predictor of Cochlear Implant Speech Intelligibility.

    PubMed

    Watkins, Greg D; Swanson, Brett A; Suaning, Gregg J

    2018-02-22

    Cochlear implant (CI) sound processing strategies are usually evaluated in clinical studies involving experienced implant recipients. Metrics which estimate the capacity to perceive speech for a given set of audio and processing conditions provide an alternative means to assess the effectiveness of processing strategies. The aim of this research was to assess the ability of the output signal to noise ratio (OSNR) to accurately predict speech perception. It was hypothesized that compared with the other metrics evaluated in this study (1) OSNR would have equivalent or better accuracy and (2) OSNR would be the most accurate in the presence of variable levels of speech presentation. For the first time, the accuracy of OSNR as a metric which predicts speech intelligibility was compared, in a retrospective study, with that of the input signal to noise ratio (ISNR) and the short-term objective intelligibility (STOI) metric. Because STOI measured audio quality at the input to a CI sound processor, a vocoder was applied to the sound processor output and STOI was also calculated for the reconstructed audio signal (vocoder short-term objective intelligibility [VSTOI] metric). The figures of merit calculated for each metric were Pearson correlation of the metric and a psychometric function fitted to sentence scores at each predictor value (Pearson sigmoidal correlation [PSIG]), epsilon insensitive root mean square error (RMSE*) of the psychometric function and the sentence scores, and the statistical deviance of the fitted curve to the sentence scores (D). Sentence scores were taken from three existing data sets of Australian Sentence Tests in Noise results. The AuSTIN tests were conducted with experienced users of the Nucleus CI system. The score for each sentence was the proportion of morphemes the participant correctly repeated. In data set 1, all sentences were presented at 65 dB sound pressure level (SPL) in the presence of four-talker Babble noise. Each block of sentences used an adaptive procedure, with the speech presented at a fixed level and the ISNR varied. In data set 2, sentences were presented at 65 dB SPL in the presence of stationary speech weighted noise, street-side city noise, and cocktail party noise. An adaptive ISNR procedure was used. In data set 3, sentences were presented at levels ranging from 55 to 89 dB SPL with two automatic gain control configurations and two fixed ISNRs. For data set 1, the ISNR and OSNR were equally most accurate. STOI was significantly different for deviance (p = 0.045) and RMSE* (p < 0.001). VSTOI was significantly different for RMSE* (p < 0.001). For data set 2, ISNR and OSNR had an equivalent accuracy which was significantly better than that of STOI for PSIG (p = 0.029) and VSTOI for deviance (p = 0.001), RMSE*, and PSIG (both p < 0.001). For data set 3, OSNR was the most accurate metric and was significantly more accurate than VSTOI for deviance, RMSE*, and PSIG (all p < 0.001). ISNR and STOI were unable to predict the sentence scores for this data set. The study results supported the hypotheses. OSNR was found to have an accuracy equivalent to or better than ISNR, STOI, and VSTOI for tests conducted at a fixed presentation level and variable ISNR. OSNR was a more accurate metric than VSTOI for tests with fixed ISNRs and variable presentation levels. Overall, OSNR was the most accurate metric across the three data sets. 
OSNR holds promise as a prediction metric which could potentially improve the effectiveness of sound processor research and CI fitting.
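
    A minimal sketch of the kind of evaluation described above: fit a logistic psychometric function of sentence score against a candidate predictor (here labelled OSNR) and report a correlation and an RMSE-style error. The data are synthetic, and the plain RMSE below stands in for the study's epsilon-insensitive RMSE*.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import pearsonr

    def psychometric(x, x50, slope):
        """Logistic psychometric function: proportion correct vs. predictor value."""
        return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

    # Synthetic per-sentence predictor values and morpheme scores.
    rng = np.random.default_rng(2)
    osnr = rng.uniform(-5, 15, 200)
    score = np.clip(psychometric(osnr, 5.0, 0.6) + rng.normal(0, 0.1, 200), 0, 1)

    params, _ = curve_fit(psychometric, osnr, score, p0=[5.0, 0.5])
    pred = psychometric(osnr, *params)
    r, _ = pearsonr(pred, score)                      # sigmoidal-correlation analogue
    rmse = np.sqrt(np.mean((pred - score) ** 2))      # simplified stand-in for RMSE*
    print(f"x50 = {params[0]:.2f} dB, slope = {params[1]:.2f}, r = {r:.2f}, RMSE = {rmse:.3f}")
    ```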

  18. SU-E-T-450: How Important Is a Reproducible Breath Hold for DIBH Breast Radiotherapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Wentworth, S; Sintay, B

    Purpose: Deep inspiration breath hold (DIBH) for left-sided breast cancer has been shown to reduce heart dose. Surface imaging helps to ensure accurate breast positioning, but does not guarantee a reproducible breath hold (BH) at DIBH treatments. We examine the effects of variable BH positions for DIBH treatments. Methods: Twenty-five patients with free breathing (FB) and DIBH scans were reviewed. Four plans were created for each patient: 1) FB, 2) DIBH, 3) FB-DIBH – the DIBH plans were copied to the FB images and recalculated (image registration was based on breast tissue), and 4) P-DIBH – a partial BH with the heart shifted midway between the FB and DIBH positions. The FB-DIBH plans give “worst case” scenarios for surface imaging DIBH, where the breast is aligned by surface imaging but the patient is not holding their breath. Student's t-tests were used to compare dose metrics. Results: The DIBH plans gave lower heart dose and comparable breast coverage versus FB in all cases. The FB-DIBH plans showed no significant difference versus FB plans for breast coverage, mean heart dose, or maximum heart dose (p >= 0.10). The mean heart dose differed between FB-DIBH and FB by < 2 Gy for all cases; the maximum heart dose differed by < 2 Gy for 21 cases. The P-DIBH plans showed significantly lower mean heart dose than FB (p = 0.01). The mean heart doses for the P-DIBH plans were < FB for 22 cases and the maximum dose < FB for 18 cases. Conclusions: A DIBH plan delivered to a FB patient set-up with surface imaging will yield similar dosimetry to a plan created and delivered FB. A DIBH plan delivered with even a partial BH can give reduced heart dose compared to FB techniques when the breast tissue is well aligned.

  19. Performance comparison of two commercial human whole-exome capture systems on formalin-fixed paraffin-embedded lung adenocarcinoma samples.

    PubMed

    Bonfiglio, Silvia; Vanni, Irene; Rossella, Valeria; Truini, Anna; Lazarevic, Dejan; Dal Bello, Maria Giovanna; Alama, Angela; Mora, Marco; Rijavec, Erika; Genova, Carlo; Cittaro, Davide; Grossi, Francesco; Coco, Simona

    2016-08-30

    Next Generation Sequencing (NGS) has become a valuable tool for molecular landscape characterization of cancer genomes, leading to a better understanding of tumor onset and progression, and opening new avenues in translational oncology. Formalin-fixed paraffin-embedded (FFPE) tissue is the method of choice for storage of clinical samples; however, the low quality of FFPE genomic DNA (gDNA) can limit its use for downstream applications. To investigate FFPE specimen suitability for NGS analysis and to establish the performance of two solution-based exome capture technologies, we compared the whole-exome sequencing (WES) data of gDNA extracted from 5 fresh frozen (FF) and 5 matched FFPE lung adenocarcinoma tissues using: SeqCap EZ Human Exome v.3.0 (Roche NimbleGen) and SureSelect XT Human All Exon v.5 (Agilent Technologies). Sequencing metrics on Illumina HiSeq were optimal for both exome systems and comparable between FFPE and FF samples, with a slight increase of PCR duplicates in FFPE, mainly in Roche NimbleGen libraries. Comparison of single nucleotide variants (SNVs) between FFPE-FF pairs reached overlapping values >90 % in both systems. Both WES systems showed high concordance with target re-sequencing data by Ion PGM™ in 22 lung-cancer genes, regardless of the sample source. Exon coverage of 623 cancer-related genes revealed high coverage efficiency of both kits, supporting WES as a valid alternative to target re-sequencing. High-quality and reliable data can be successfully obtained from WES of FFPE samples starting from a relatively low amount of input gDNA, supporting the inclusion of NGS-based tests in the clinical context. In conclusion, our analysis suggests that the WES approach could be extended to a translational research context as well as to the clinic (e.g. to study rare malignancies), where the simultaneous analysis of the whole coding region of the genome may help in the detection of cancer-linked variants.

  20. A simple test for spacetime symmetry

    NASA Astrophysics Data System (ADS)

    Houri, Tsuyoshi; Yasui, Yukinori

    2015-03-01

    This paper presents a simple method for investigating spacetime symmetry for a given metric. The method makes use of the curvature conditions that are obtained from the Killing equations. We use the solutions of the curvature conditions to compute an upper bound on the number of Killing vector fields, as well as Killing-Yano (KY) tensors and closed conformal KY tensors. We also use them in the integration of the Killing equations. By means of the method, we thoroughly investigate KY symmetry of type D vacuum solutions such as the Kerr metric in four dimensions. The method is also applied to a large variety of physical metrics in four and five dimensions.
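
    For reference, the standard textbook forms behind the method described above are sketched below (not reproduced from the paper; signs and index placement depend on curvature conventions). The integrability condition derived from the Killing equation is what caps the number of independent Killing vector fields at n(n+1)/2 in n dimensions.

    ```latex
    % Killing equation, rank-2 Killing-Yano equation, and the curvature identity
    % implied by the Killing equation (common textbook conventions).
    \begin{align}
      \nabla_{(\mu}\xi_{\nu)} &= 0, \\
      \nabla_{(\mu} f_{\nu)\rho} &= 0, \qquad f_{\nu\rho} = -f_{\rho\nu}, \\
      \nabla_{\mu}\nabla_{\sigma}\xi^{\rho} &= R^{\rho}{}_{\sigma\mu\nu}\,\xi^{\nu}
      \quad \text{(up to sign and index conventions).}
    \end{align}
    % In n dimensions these conditions bound the number of independent
    % Killing vector fields by n(n+1)/2.
    ```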

  1. Structural phenotyping of stem cell-derived cardiomyocytes.

    PubMed

    Pasqualini, Francesco Silvio; Sheehy, Sean Paul; Agarwal, Ashutosh; Aratyn-Schaus, Yvonne; Parker, Kevin Kit

    2015-03-10

    Structural phenotyping based on classical image feature detection has been adopted to elucidate the molecular mechanisms behind genetically or pharmacologically induced changes in cell morphology. Here, we developed a set of 11 metrics to capture the increasing sarcomere organization that occurs intracellularly during striated muscle cell development. To test our metrics, we analyzed the localization of the contractile protein α-actinin in a variety of primary and stem-cell derived cardiomyocytes. Further, we combined these metrics with data mining algorithms to unbiasedly score the phenotypic maturity of human-induced pluripotent stem cell-derived cardiomyocytes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. A report on the gravitational redshift test for non-metric theories of gravitation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The frequencies of two atomic hydrogen masers and of three superconducting cavity stabilized oscillators (SCSOs) were compared as the ensemble of oscillators was moved in the Sun's gravitational field by the rotation and orbital motion of the Earth. Metric gravitation theories predict that the gravitational redshifts of the two types of oscillators are identical, and that there should be no relative frequency shift between the oscillators; nonmetric theories, in contrast, predict a frequency shift between masers and SCSOs that is proportional to the change in solar gravitational potential experienced by the oscillators. The results are consistent with metric theories of gravitation at a level of 2%.

  3. Assessment of the quality and content of website health information about herbal remedies for menopausal symptoms.

    PubMed

    Sowter, Julie; Astin, Felicity; Dye, Louise; Marshall, Paul; Knapp, Peter

    2016-06-01

    To assess the quality, readability and coverage of website information about herbal remedies for menopausal symptoms. A purposive sample of commercial and non-commercial websites was assessed for quality (DISCERN), readability (SMOG) and information coverage. Non-parametric and parametric tests were used to explain the variability of these factors across types of websites and to assess associations between website quality and information coverage. 39 sites were assessed. Median quality and information coverage scores were 44/80 and 11/30 respectively. The median readability score was 18.7, similar to UK broadsheets. Commercial websites scored significantly lower on quality (p=0.014), but there were no statistical differences for information coverage or readability. There was a significant positive correlation between information quality and coverage scores irrespective of website provider (r=0.69, p<0.001, n=39). Overall website quality and information coverage are poor and the required reading level high. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
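
    The SMOG readability score mentioned above has a widely cited closed form; the sketch below uses that common form (the study's exact implementation is not given in the abstract).

    ```python
    import math

    def smog_grade(n_polysyllables: int, n_sentences: int) -> float:
        """Commonly cited SMOG formula: reading grade level estimated from the
        count of words with three or more syllables in a sample of sentences."""
        return 1.0430 * math.sqrt(n_polysyllables * (30.0 / n_sentences)) + 3.1291

    # Example: 120 polysyllabic words across 40 sampled sentences
    print(f"SMOG grade ~ {smog_grade(120, 40):.1f}")
    ```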

  4. 40 CFR 60.85 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the following equation: E=(CQsd)/(PK) where: E=emission rate of acid mist or SO2 kg/metric ton (lb/ton... flow rate of the effluent gas, dscm/hr (dscf/hr). P=production rate of 100 percent H2SO4, metric ton/hr (ton/hr). K=conversion factor, 1000 g/kg (1.0 lb/lb). (2) Method 8 shall be used to determine the acid...
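
    Rendered more readably, the emission-rate equation in this record (repeated in the 2012 and 2013 CFR records below) is:

    ```latex
    % Emission-rate equation from 40 CFR 60.85, as given in the record above.
    \[
      E = \frac{C\,Q_{sd}}{P\,K}
    \]
    ```

    Here E is the emission rate of acid mist or SO2 (kg/metric ton), Q_sd the flow rate of the effluent gas (dscm/hr), P the production rate of 100 percent H2SO4 (metric ton/hr), and K the conversion factor (1000 g/kg); C is the concentration term whose definition is truncated in this record.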

  5. 40 CFR 60.85 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the following equation: E=(CQsd)/(PK) where: E=emission rate of acid mist or SO2 kg/metric ton (lb/ton... flow rate of the effluent gas, dscm/hr (dscf/hr). P=production rate of 100 percent H2SO4, metric ton/hr (ton/hr). K=conversion factor, 1000 g/kg (1.0 lb/lb). (2) Method 8 shall be used to determine the acid...

  6. 40 CFR 60.85 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the following equation: E=(CQsd)/(PK) where: E=emission rate of acid mist or SO2 kg/metric ton (lb/ton... flow rate of the effluent gas, dscm/hr (dscf/hr). P=production rate of 100 percent H2SO4, metric ton/hr (ton/hr). K=conversion factor, 1000 g/kg (1.0 lb/lb). (2) Method 8 shall be used to determine the acid...

  7. Clinical Validation of 4-Dimensional Computed Tomography Ventilation With Pulmonary Function Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brennan, Douglas; Schubert, Leah; Diot, Quentin

    Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented, it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change–based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean), ventilation V20 (volume of lung ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT ventilation data could predict normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT-ventilation provides a reliable assessment of lung function. Four-dimensional CT ventilation enables exciting opportunities to assess lung function and create functional avoidance radiation therapy plans. The present work provides supporting evidence for the integration of 4DCT-ventilation into clinical trials.
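
    A minimal sketch of the map-to-metric reduction described above, under one plausible reading of V20 (the fraction of lung voxels at or below 20% of the mean ventilation); the synthetic image and mask are placeholders. In the study these per-patient metrics were then correlated with FEV1 and FEV1/FVC.

    ```python
    import numpy as np

    def ventilation_metrics(vent_map, lung_mask):
        """Reduce a voxel-wise ventilation image to single obstruction metrics:
        coefficient of variation (SD/mean) and a V20-style fraction."""
        v = vent_map[lung_mask].astype(float)
        cov = v.std() / v.mean()                 # coefficient of variation
        v20 = np.mean(v / v.mean() <= 0.20)      # fraction of lung <= 20% of mean ventilation
        return cov, v20

    # Placeholder ventilation image and whole-volume mask.
    rng = np.random.default_rng(4)
    vent = rng.gamma(2.0, 0.5, size=(32, 32, 32))
    mask = np.ones_like(vent, dtype=bool)
    cov, v20 = ventilation_metrics(vent, mask)
    print(f"CoV = {cov:.2f}, V20 = {100 * v20:.1f}% of lung")
    ```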

  8. On testing VLSI chips for the big Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.

    1989-01-01

    A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design for the testability feature.

  9. Crew Exploration Vehicle (CEV) (Orion) Occupant Protection. [Appendices Part 2

    NASA Technical Reports Server (NTRS)

    Currie-Gregg, Nancy J.; Gernhardt, Michael L.; Lawrence, Charles; Somers, Jeffrey T.

    2016-01-01

    The purpose of this study was to determine the similarity between the response of the THUMS model and the Hybrid III Anthropometric Test Device (ATD) given existing Wright-Patterson (WP) sled tests. There were four tests selected for this comparison with frontal, spinal, rear, and lateral loading. The THUMS was placed in a sled configuration that replicated the WP configuration and the recorded seat acceleration for each test was applied to the model seat. Once the modeling simulations were complete, they were compared to the WP results using two methods. The first was a visual inspection of the sled test videos compared to the THUMS d3plot files. This comparison resulted in an assessment of the overall kinematics of the two results. The second was a comparison of the plotted data recorded for both tests. The metrics selected for comparison were seat acceleration, belt forces, head acceleration and chest acceleration. These metrics were recorded in all WP tests and were outputs of the THUMS model. Once the comparison of the THUMS to the WP tests was complete, the THUMS model output was also examined for possible injuries in these scenarios. These outputs included metrics for injury risk to the head, neck, thorax, lumbar spine and lower extremities. The metrics to evaluate head response were peak head acceleration, HIC15, and HIC36. For the neck, Nij was calculated. The thorax response was evaluated with peak chest acceleration, the Combined Thoracic Index (CTI), sternal deflection, chest deflection, and the chest acceleration 3 ms clip. The lumbar spine response was evaluated with lumbar spine force. Finally, the lower extremity response was evaluated by femur and tibia force. The results of the simulation comparisons indicate the THUMS model had a similar response to the Hybrid III dummy given the same input. The primary difference seen between the two was a more flexible response of the THUMS compared to the Hybrid III. This flexibility was most pronounced in the neck flexion, shoulder deflection and chest deflection. Due to the flexibility of the THUMS, the resulting head and chest accelerations tended to lag the Hybrid III acceleration trace and have a lower peak value. The results of the injury metric comparison identified possible injury trends between simulations. Risk of head injury was highest for the lateral simulations. The risk of chest injury was highest for the rear impact. However, neck injury risk was approximately the same for all simulations. The injury metric value for lumbar spine force was highest for the spinal impact. The leg forces were highest for the rear and lateral impacts. The results of this comparison indicate the THUMS model performs in a similar manner to the Hybrid III ATD. The differences in the responses of the model and the ATD are primarily due to the flexibility of the THUMS. This flexibility of the THUMS would be a more human-like response. Based on the similarity between the two models, the THUMS should be used in further testing to assess risk of injury to the occupant.
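
    Of the injury metrics listed above, the Head Injury Criterion has a standard closed form; the sketch below implements HIC15 by brute-force search over 15 ms windows on a synthetic resultant head-acceleration pulse (the pulse shape and sampling rate are illustrative, not test data).

    ```python
    import numpy as np

    def hic(accel_g, dt, max_window_s=0.015):
        """HIC15: max over windows (t2 - t1) <= 15 ms of
        (t2 - t1) * [ (1/(t2 - t1)) * integral(a dt) ]^2.5,
        with resultant head acceleration in g and time in seconds."""
        n = len(accel_g)
        cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral of a(t)
        max_w = int(round(max_window_s / dt))
        best = 0.0
        for i in range(n):
            for j in range(i + 1, min(i + max_w, n) + 1):
                t = (j - i) * dt
                avg_a = (cum[j] - cum[i]) / t
                if avg_a > 0.0:
                    best = max(best, t * avg_a ** 2.5)
        return best

    # Example: a synthetic half-sine head-acceleration pulse sampled at 10 kHz.
    dt = 1e-4
    t = np.arange(0, 0.03, dt)
    a = 60.0 * np.sin(np.pi * t / 0.03)     # peak 60 g over 30 ms
    print(f"HIC15 ~ {hic(a, dt):.0f}")
    ```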

  10. Fracture Resistance of Teeth Restored with Direct and Indirect Composite Restorations

    PubMed Central

    Torabzadeh, Hassan; Ghasemi, Amir; Dabestani, Atoosa; Razmavar, Sara

    2013-01-01

    Objective: Tooth fracture is a common dental problem. By extension of cavity dimensions, the remaining tooth structure weakens and occlusal forces may cause tooth fracture. The aim of this study was to evaluate and compare the fracture resistance of teeth restored with direct and indirect composite restorations. Materials and Methods: Sixty-five sound maxillary premolar teeth were chosen and randomly divided into five groups each comprising thirteen. Fifty-two teeth received mesio-occluso-distal (MOD) cavities with 4.5mm bucco-lingual width, 4mm pulpal depth and 3mm gingival depth and were divided into the following four groups. G-1: restored with direct composite (Z-250, 3M/ESPE) with cusp coverage, G-2: restored with direct composite (Z-250) without cusp coverage, G-3: restored with direct composite (Gradia, GC-international) with cusp coverage, G-4: restored with indirect composite (Gradia, GC-International) with cusp coverage. Intact teeth were used in G-5 as control. The teeth were subjected to a compressive axial loading using a 4 mm diameter rod in a universal testing machine with 1 mm/min speed. Data were analyzed using one-way ANOVA and Tukey tests. Results: The mean fracture strength recorded was: G-1: 1148.46N±262, G-2: 791.54N±235, G-3: 880.00N±123, G-4: 800.00N±187, G-5: 1051.54N±345. ANOVA revealed significant differences between groups (p<0.05). Tukey test showed significant difference between group 1 and the other groups. There was no significant difference among other groups. Conclusion: Direct composite (Z-250) with cusp coverage is a desirable treatment for weakened teeth. Treatment with Z-250 without cusp coverage, direct and indirect Gradia with cusp coverage restored the strength of the teeth to the level of intact teeth. PMID:24910649

  11. Implementation and Operational Research: Effectiveness and Patient Acceptability of a Sexually Transmitted Infection Self-Testing Program in an HIV Care Setting.

    PubMed

    Barbee, Lindley A; Tat, Susana; Dhanireddy, Shireesha; Marrazzo, Jeanne M

    2016-06-01

    Rates of screening for bacterial sexually transmitted infections (STI) among men who have sex with men in HIV care settings remain low despite high prevalence of these infections. STI self-testing may help increase screening rates in clinical settings. We implemented an STI self-testing program at a large, urban HIV care clinic and evaluated its effectiveness and acceptability. We compared measures obtained during the first year of the STI self-testing program (Intervention Year, April 1, 2013-March 31, 2014) to Baseline Year (January 1, 2012-December 31, 2012) to determine: (1) overall clinic change in STI testing coverage and diagnostic yield and; (2) program-specific outcomes including appropriate anatomic site screening and patient-reported acceptability. Overall, testing for gonorrhea and chlamydia increased significantly between Baseline and Intervention Year, and 50% more gonococcal and 47% more chlamydial infections were detected. Syphilis testing coverage remained unchanged. Nearly 95% of 350 men who participated in the STI self-testing program completed site-specific testing appropriately based on self-reported exposures, and 92% rated their self-testing experience as "good" or "very good." STI self-testing in HIV care settings significantly increases testing coverage and detection of gonorrhea and chlamydia, and the program is acceptable to patients. Additional interventions to increase syphilis screening rates are needed.

  12. Where girls are less likely to be fully vaccinated than boys: Evidence from a rural area in Bangladesh.

    PubMed

    Hanifi, Syed Manzoor Ahmed; Ravn, Henrik; Aaby, Peter; Bhuiya, Abbas

    2018-05-31

    Immunization is one of the most successful and effective health interventions to reduce vaccine-preventable diseases in children. Recently, Bangladesh has made huge progress in immunization coverage. In this study, we compared the recent immunization coverage between boys and girls in a rural area of Bangladesh. The study is based on data from the Chakaria Health and Demographic Surveillance System (HDSS) of icddr,b, which covers a population of 90,000 individuals living in 16,000 households in 49 villages. We calculated the coverage of fully immunized children (FIC) for 4584 children aged 12-23 months between January 9, 2012 and January 19, 2016. We analyzed immunization coverage using the crude FIC coverage ratio (FCR) and adjusted FCR (aFCR) from binary regression models. The dynamics of gender inequality were examined across sociodemographic and economic conditions. We report the adjusted female/male (F/M) FIC coverage ratios for various sociodemographic and economic categories. Among children who lived below the lower poverty line, the F/M aFCR was 0.89 (0.84-0.94) compared to 0.98 (0.95-1.00) for children from households above the lower poverty line (p = 0.003, test for interaction). For children of mothers with no high school education, the F/M aFCR was 0.94 (0.91-0.97), whereas it was 1.00 (0.96-1.04) for children of mothers who attended high school (p = 0.04, test for interaction). The F/M aFCR was 1.01 (0.96-1.06) for first born children but 0.95 (0.93-0.98) for second or higher birth order children (p = 0.04, test for interaction). Fewer girls than boys were completely vaccinated by their first birthday due to girls' lower coverage for measles vaccine. The tendency was most marked for children living below the poverty line, for children whose mothers did not attend high school, and for children of birth order two or higher. In the study setting and similar areas, sex differentials in coverage should be taken into account in ongoing immunization programmes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Dose-distance metric that predicts late rectal bleeding in patients receiving radical prostate external-beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Lee, Richard; Chan, Elisa K.; Kosztyla, Robert; Liu, Mitchell; Moiseenko, Vitali

    2012-12-01

    The relationship between rectal dose distribution and the incidence of late rectal complications following external-beam radiotherapy has been previously studied using dose-volume histograms or dose-surface histograms. However, they do not account for the spatial dose distribution. This study proposes a metric based on both surface dose and distance that can predict the incidence of rectal bleeding in prostate cancer patients treated with radical radiotherapy. One hundred and forty-four patients treated with radical radiotherapy for prostate cancer were prospectively followed to record the incidence of grade ≥2 rectal bleeding. Radiotherapy plans were used to evaluate a dose-distance metric that accounts for the dose and its spatial distribution on the rectal surface, characterized by a logistic weighting function with slope a and inflection point d0. This was compared to the effective dose obtained from dose-surface histograms, characterized by the parameter n which describes sensitivity to hot spots. The log-rank test was used to determine statistically significant (p < 0.05) cut-off values for the dose-distance metric and effective dose that predict for the occurrence of rectal bleeding. For the dose-distance metric, only d0 = 25 and 30 mm combined with a > 5 led to statistical significant cut-offs. For the effective dose metric, only values of n in the range 0.07-0.35 led to statistically significant cut-offs. The proposed dose-distance metric is a predictor of rectal bleeding in prostate cancer patients treated with radiotherapy. Both the dose-distance metric and the effective dose metric indicate that the incidence of grade ≥2 rectal bleeding is sensitive to localized damage to the rectal surface.
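
    The abstract specifies only the shape of the weighting (logistic in distance, with slope a and inflection point d0), so the sketch below shows one plausible construction: weight each rectal-surface dose sample by a logistic function of its distance from a reference point and form a weighted dose score. The functional form, weights, and sample values are assumptions, not the authors' definition.

    ```python
    import numpy as np

    def logistic_weight(d_mm, a, d0_mm):
        """Logistic weighting in distance, with slope a and inflection point d0
        (one plausible form; clipped to avoid overflow)."""
        return 1.0 / (1.0 + np.exp(np.clip(a * (d_mm - d0_mm), -60.0, 60.0)))

    def dose_distance_metric(doses_gy, distances_mm, a=5.0, d0_mm=25.0):
        """Weighted dose score over rectal-surface sample points: nearby points
        (d < d0) contribute fully, distant points are down-weighted."""
        w = logistic_weight(np.asarray(distances_mm, float), a, d0_mm)
        return np.sum(w * np.asarray(doses_gy, float)) / np.sum(w)

    d = np.array([5.0, 15.0, 30.0, 60.0])       # distances of surface points (mm)
    dose = np.array([70.0, 65.0, 40.0, 20.0])   # doses at those points (Gy)
    print(f"dose-distance score: {dose_distance_metric(dose, d):.1f} Gy")
    ```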

  14. Immunization Coverage and Medicaid Managed Care in New Mexico: A Multimethod Assessment

    PubMed Central

    Schillaci, Michael A.; Waitzkin, Howard; Carson, E. Ann; López, Cynthia M.; Boehm, Deborah A.; López, Leslie A.; Mahoney, Sheila F.

    2004-01-01

    BACKGROUND We wanted to examine the association between Medicaid managed care (MMC) and changing immunization coverage in New Mexico, a predominantly rural, poor, and multiethnic state. METHODS As part of a multimethod assessment of MMC, we studied trends in quantitative data from the National Immunization Survey (NIS) using temporal plots, Fisher’s exact test, and the Cochran-Armitage trend test. To help explain changes in immunization rates in relation to MMC, we analyzed qualitative data gathered through ethnographic observations at safety net institutions: income support (welfare) offices, community health centers, hospital emergency departments, private physicians’ offices, mental health institutions, managed care organizations, and agencies of state government. RESULTS Immunization coverage decreased significantly after implementation of MMC, from 80% in 1996 to 73% in 2001 for the 4:3:1 vaccination series (Fisher’s exact test, P = .031). New Mexico dropped in rank among states from 30th for this vaccination series in 1996 to 50th in 2001. A significant decreasing trend (Cochran-Armitage P = .025) in coverage occurred between 1996 and 2001. Findings from the ethnographic study revealed conditions that might have contributed to decreased immunization coverage: (1) reduced funding for immunizations at public health clinics, and difficulties in gaining access to MMC providers; (2) informal referrals from managed care organizations and contracting physicians to community health centers and state-run public health clinics; and (3) increased workloads and delays at community health centers, linked partly to these informal referrals for immunizations. CONCLUSIONS Medicaid reform in New Mexico did not improve immunization coverage, which declined significantly to among the lowest in the nation. Reduced funding for public health clinics and informal referrals may have contributed to this decline. These observations show how unanticipated and adverse consequences can result from policy interventions in complex insurance systems. PMID:15053278
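
    The two tests named above are straightforward to reproduce in outline; the sketch below runs Fisher's exact test on an illustrative 2x2 coverage table and a hand-rolled Cochran-Armitage trend test over hypothetical yearly counts (the NIS denominators are not given in the abstract).

    ```python
    import numpy as np
    from scipy.stats import fisher_exact, norm

    # Illustrative 2x2 table: [vaccinated, not vaccinated] for 1996 vs. 2001.
    # Sample sizes are assumptions, not the NIS denominators.
    table = np.array([[400, 100],    # 1996: 80% of 500 sampled children
                      [365, 135]])   # 2001: 73% of 500 sampled children
    odds_ratio, p = fisher_exact(table)
    print(f"Fisher's exact p = {p:.3f}")

    def cochran_armitage(successes, totals, scores):
        """Cochran-Armitage test for a linear trend in proportions across ordered groups."""
        successes, totals, scores = map(np.asarray, (successes, totals, scores))
        p_bar = successes.sum() / totals.sum()
        num = np.sum(successes * scores) - p_bar * np.sum(totals * scores)
        s_bar = np.sum(totals * scores) / totals.sum()
        var = p_bar * (1 - p_bar) * np.sum(totals * (scores - s_bar) ** 2)
        z = num / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))

    # Hypothetical yearly coverage counts, 1996-2001, scored by year index.
    succ = np.array([400, 395, 390, 380, 372, 365])
    tot = np.full(6, 500)
    z, p_trend = cochran_armitage(succ, tot, np.arange(6))
    print(f"Cochran-Armitage z = {z:.2f}, p = {p_trend:.3f}")
    ```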

  15. Evaluation of Standard Gear Metrics in Helicopter Flight Operation

    NASA Technical Reports Server (NTRS)

    Mosher, M.; Pryor, A. H.; Huff, E. M.

    2002-01-01

    Each false alarm made by a machine monitoring system carries a high price tag. The machine must be taken out of service, thoroughly inspected with possible disassembly, and then made ready for service. Loss of use of the machine and the efforts to inspect it are costly. In addition, if a monitoring system is prone to false alarms, the system will soon be turned off or ignored. For aircraft applications, one growing concern is that the dynamic flight environment differs from the laboratory environment where fault detection methods are developed and tested. Vibration measurements made in flight are less stationary than those made in a laboratory or test facility, and thus a given fault detection method may produce more false alarms in flight than might be anticipated. In 1977, Stewart introduced several metrics, including FM0 and FM4, for evaluating the health of a gear. These metrics are single-valued functions of the vibration signal that indicate whether the signal deviates from an ideal model of the signal. FM0 is a measure of the ratio of the peak-to-peak level to the harmonic energy in the signal. FM4 is the kurtosis of the signal with the gear mesh harmonics and first-order side bands removed. The underlying theory is that a vibration signal from a gear in good condition is expected to be dominated by a periodic signal at the gear mesh frequency. If one or a small number of gear teeth contain damage or faults, the signal will change, possibly showing increased amplitude, local phase changes or both near the damaged region of the gear. FM0 increases if a signal contains a local increase in amplitude. FM4 increases if a signal contains a local increase in amplitude or a local phase change in a periodic signal. Over the years, other single-value metrics were also introduced to detect the onset and growth of damage in gears. These various metrics have detected faults in several gear tests in experimental test rigs. Conditions in these tests have been steady state in the sense that the rpm, torque and forces on the gear have been held steady. For gears used in a dynamic environment such as that occurring in aircraft, the rpm, torque and forces on the gear are constantly changing. The authors have measured significant variation in rpm and torque in the transmissions of helicopters in controlled steady flight conditions flown by highly proficient test pilots. Statistical analyses of the data taken in flight show significant nonstationarity in the vibration measurements. These deviations from stationarity may increase false alarms in gear monitoring during aircraft flight. In the proposed paper, the authors will study vibration measurements made in flight on AH-1 Cobra and OH-58C Kiowa helicopters. The primary focus will be the development of a methodology to assess the impact of nonstationarity on false alarms. Issues to be addressed include how time synchronous averages are constructed from raw data as well as how lack of stationarity affects the behavior of single-value metrics. Emphasis will be placed on the occurrence of false alarms with the use of standard metrics. In order to maintain an acceptable level of false alarms in the flight environment, this study will also address the determination of appropriate threshold levels, which may need to be higher than for test rigs.
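
    FM0 and FM4 as described above have simple signal-processing definitions; the sketch below computes both from a time-synchronous average, using a common form of each definition (details vary between references) and a synthetic 32-tooth gear signal.

    ```python
    import numpy as np

    def fm0_fm4(tsa, teeth, n_harmonics=3):
        """Gear metrics from a time-synchronous average (TSA) spanning one shaft
        revolution. FM0: peak-to-peak amplitude over summed gear-mesh harmonic
        amplitudes. FM4: kurtosis of the residual after removing the mesh
        harmonics and their first-order sidebands."""
        n = len(tsa)
        spec = np.fft.rfft(tsa)
        mesh = [k * teeth for k in range(1, n_harmonics + 1) if k * teeth < len(spec)]

        harmonic_amp = sum(2.0 * np.abs(spec[b]) / n for b in mesh)
        fm0 = np.ptp(tsa) / harmonic_amp

        spec_res = spec.copy()
        for b in mesh:                         # zero mesh harmonic and +/- 1 sideband
            spec_res[b - 1:b + 2] = 0.0
        d = np.fft.irfft(spec_res, n)
        d = d - d.mean()
        fm4 = n * np.sum(d ** 4) / np.sum(d ** 2) ** 2   # normalized kurtosis
        return fm0, fm4

    rng = np.random.default_rng(5)
    rev = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
    healthy = np.sin(32 * rev) + 0.02 * rng.normal(size=rev.size)   # 32-tooth mesh tone
    faulty = healthy + 0.8 * np.exp(-((rev - 3.0) ** 2) / 0.01)     # local tooth defect
    for name, sig in (("healthy", healthy), ("faulty", faulty)):
        fm0, fm4 = fm0_fm4(sig, teeth=32)
        print(f"{name}: FM0 = {fm0:.2f}, FM4 = {fm4:.2f}")
    ```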

  16. [Impact of immunization measures by the Family Health Program on infant mortality from preventable diseases in Olinda, Pernambuco State, Brazil].

    PubMed

    Guimarães, Tânia Maria Rocha; Alves, João Guilherme Bezerra; Tavares, Márcia Maia Ferreira

    2009-04-01

    This article analyzes the impact of the Family Health Program (FHP) on infant health in Olinda, Pernambuco State, Brazil, evaluating immunization and infant mortality from vaccine-preventable diseases. A time-series study was conducted with data from the principal health information systems, analyzing indicators before and after implementation of the FHP in 1995. The independent variable was year of birth, related to the degree of population coverage by the FHP. Three periods were analyzed: 1990-1994 (prior), 1995-1996 (implementation phase: 0 to 30% coverage), and 1997-2002 (intervention: coverage of 38.6% to 54%). Trends in the indicators were analyzed by simple linear regression, testing significance with the t test. During the implementation period, there was an increase in all the vaccination coverage rates (176% BCG, 223% polio, 52% DPT, 61% measles) and a decrease in infant mortality from preventable diseases (12.7 deaths/year), even without a decrease in absolute poverty in the municipality or an increase in either coverage by the public health care system or the sewage system. Improvement in the indicators demonstrates the effectiveness of FHP actions in the municipality.

  17. Validity of smoke alarm self-report measures and reasons for over-reporting.

    PubMed

    Stepnitz, Rebecca; Shields, Wendy; McDonald, Eileen; Gielen, Andrea

    2012-10-01

    Many residential fire deaths occur in homes with no or non-functioning smoke alarms (SAs). Self-reported SA coverage is high, but studies have found varying validity for self-report measures. The authors aim to: (1) determine over-reporting of coverage, (2) describe socio-demographic correlates of over-reporting and (3) report reasons for over-reporting. The authors surveyed 603 households in a large, urban area about fire safety behaviours and then tested all SAs in the home. 23 participants who over-reported their SA coverage were telephoned and asked why they had misreported. Full coverage was reported in 70% of households but observed in only 41%, giving a low positive predictive value (54.2%) for the self-report measure. Most over-reporters assumed alarms were working because they were mounted, or did not think a working alarm in a basement or attic was needed to be fully protected. If alarms cannot be tested, researchers or those counselling residents on fire safety should carefully probe self-reported coverage. The findings support efforts to equip more homes with hard-wired or 10-year lithium battery alarms to reduce the need for user maintenance.
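
    The positive predictive value quoted above can be reconstructed with a back-of-envelope calculation; the cell counts below are derived from the percentages in the abstract and are approximate, not the paper's exact table.

    ```python
    def positive_predictive_value(true_pos: int, false_pos: int) -> float:
        """Share of households reporting full coverage that actually had it."""
        return true_pos / (true_pos + false_pos)

    reported_full = round(0.70 * 603)             # ~422 households self-reporting full coverage
    verified_full = round(0.542 * reported_full)  # ~229 confirmed when alarms were tested
    print(f"PPV ~ {positive_predictive_value(verified_full, reported_full - verified_full):.1%}")
    ```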

  18. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area is analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different parameters: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models and different numbers of neighbours. For result comparison, check points taken from the same drive test data are used. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
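
    As a sketch of the simpler of the two interpolation families compared above, the snippet below implements inverse distance weighting with a configurable power and neighbour count and shows how prediction error at check points might be computed; the variable names (train_xy, check_level, etc.) are placeholders, and kriging would normally be delegated to a geostatistics library.

    ```python
    import numpy as np

    def idw_predict(xy_known, values, xy_query, power=2.0, k=8):
        """Inverse Distance Weighting: estimate the signal level at each query
        point from its k nearest drive-test samples, weighted by 1/d**power."""
        xy_known, values, xy_query = map(np.asarray, (xy_known, values, xy_query))
        preds = []
        for q in xy_query:
            d = np.linalg.norm(xy_known - q, axis=1)
            nearest = np.argsort(d)[:k]
            w = 1.0 / np.maximum(d[nearest], 1e-9) ** power   # guard against exact hits
            preds.append(np.dot(w, values[nearest]) / w.sum())
        return np.array(preds)

    # Error at held-out check points from the same drive-test data, e.g.:
    # rmse = np.sqrt(np.mean((idw_predict(train_xy, train_level, check_xy) - check_level) ** 2))
    ```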

  19. Accelerated HIV testing for PMTCT in maternity and labour wards is vital to capture mothers at a critical point in the programme at district level in Malawi.

    PubMed

    Beltman, J J; Fitzgerald, M; Buhendwa, L; Moens, M; Massaquoi, M; Kazima, J; Alide, N; van Roosmalen, J

    2010-11-01

    Round-the-clock (24 hours × 7 days) HIV testing is vital to maintain high prevention of mother-to-child transmission (PMTCT) coverage for women delivering in district health facilities. PMTCT coverage increases when most pregnant women have their HIV status tested. Routine offering of HIV testing should therefore be integrated into, and seen as part of, comprehensive antenatal care. For women who miss antenatal care and deliver in a health facility without having had their HIV status tested, the labour and maternity wards can still serve as additional entry points.

  20. QUEST/Ada (Query Utility Environment for Software Testing of Ada): The development of a program analysis environment for Ada, task 1, phase 2

    NASA Technical Reports Server (NTRS)

    Brown, David B.

    1990-01-01

    The results of research and development efforts are described for Task 1, Phase 2 of a general project entitled The Development of a Program Analysis Environment for Ada. The scope of this task includes the design and development of a prototype system for testing Ada software modules at the unit level. The system is called Query Utility Environment for Software Testing of Ada (QUEST/Ada). The prototype for condition coverage provides a platform that implements expert system interaction with program testing. The expert system can modify data in the instrumented source code in order to achieve coverage goals. Given this initial prototype, it is possible to evaluate the rule base in order to develop improved rules for test case generation. The goals of Phase 2 are the following: (1) to continue to develop and improve the current user interface to support the other goals of this research effort (i.e., those related to improved testing efficiency and increased code reliability); (2) to develop and empirically evaluate a succession of alternative rule bases for the test case generator so that the expert system achieves coverage more efficiently; and (3) to extend the concepts of the current test environment to address the issues of Ada concurrency.
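
    A highly simplified sketch of the coverage-guided loop the abstract describes, assuming a harness that runs the instrumented unit and reports which conditions were exercised; the random perturbation stands in for the expert-system rule base, and all names here are hypothetical.

    ```python
    import random

    def coverage_guided_generation(run_instrumented, all_conditions, seed_inputs, budget=1000):
        """Run the instrumented unit, keep any test that exercises new conditions,
        and perturb the inputs until condition coverage stops improving or the
        iteration budget is exhausted."""
        covered, suite = set(), []
        inputs = list(seed_inputs)
        for _ in range(budget):
            hit = run_instrumented(inputs)     # set of condition outcomes exercised
            if hit - covered:                  # keep tests that add new coverage
                covered |= hit
                suite.append(list(inputs))
            if covered >= set(all_conditions):
                break
            # Placeholder "rule": nudge one input at random; QUEST/Ada instead
            # consults an expert-system rule base to choose the modification.
            i = random.randrange(len(inputs))
            inputs[i] += random.choice([-1, 1]) * random.randint(1, 10)
        return suite, covered
    ```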
