Sample records for performance metrics derived

  1. Best Practices Handbook: Traffic Engineering in Range Networks

    DTIC Science & Technology

    2016-03-01

    units of measurement. Measurement Methodology - A repeatable measurement technique used to derive one or more metrics of interest. Network...Performance measures - Metrics that provide quantitative or qualitative measures of the performance of systems or subsystems of interest. Performance Metric

  2. A general theory of multimetric indices and their properties

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William

    2012-01-01

    1. Stewardship of biological and ecological resources requires the ability to make integrative assessments of ecological integrity. One of the emerging methods for making such integrative assessments is multimetric indices (MMIs). These indices synthesize data, often from multiple levels of biological organization, with the goal of deriving a single index that reflects the overall effects of human disturbance. Despite the widespread use of MMIs, there is uncertainty about why this approach can be effective. An understanding of MMIs requires a quantitative theory that illustrates how the properties of candidate metrics relate to MMIs generated from those metrics. 2. We present the initial basis for such a theory by deriving the general mathematical characteristics of MMIs assembled from metrics. We then use the theory to derive quantitative answers to the following questions: Is there an optimal number of metrics to include in an index? How does covariance among metrics affect the performance of the index derived from those metrics? And what are the criteria to decide whether a given metric will improve the performance of an index? 3. We find that the optimal number of metrics to be included in an index depends on the theoretical distribution of signal of the disturbance gradient contained in each metric. For example, if the rank-ordered parameters of a metric-disturbance regression can be described by a monotonically decreasing function, then an optimum number of metrics exists and can often be derived analytically. We derive the conditions by which adding a given metric can be expected to improve an index. 4. We find that the criterion defining such conditions depends nonlinearly on the signal of the disturbance gradient, the noise (error) of the metric, and the correlation of the metric errors. Importantly, we find that correlation among metric errors increases the signal required for the metric to improve the index. 5. The theoretical framework presented in this study provides the basis for understanding the properties of MMIs. It can also be useful throughout the index construction process. Specifically, it can be used to aid understanding of the benefits and limitations of combining metrics into indices; it can inform selection/collection of candidate metrics; and it can be used directly as a decision aid in effective index construction.
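
    A minimal numerical sketch of the trade-off described above (entirely synthetic; the signal strengths, noise level, and equal-weight averaging are assumptions, not the authors' derivation): each metric carries the disturbance signal plus errors that may be correlated across metrics, and the R² between the averaged index and the gradient shows how error correlation changes whether a weak candidate metric helps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                    # sites along a hypothetical disturbance gradient
    d = rng.uniform(0, 1, n)    # the disturbance gradient itself

    def index_r2(signals, noise_sd, err_corr):
        """R^2 between the gradient and an MMI built by averaging standardized
        metrics m_i = signals[i]*d + noise, with equicorrelated metric errors."""
        k = len(signals)
        cov = noise_sd**2 * ((1 - err_corr) * np.eye(k) + err_corr * np.ones((k, k)))
        errs = rng.multivariate_normal(np.zeros(k), cov, size=n)
        metrics = np.outer(d, signals) + errs
        mmi = ((metrics - metrics.mean(0)) / metrics.std(0)).mean(axis=1)
        return np.corrcoef(mmi, d)[0, 1] ** 2

    strong = [1.0, 0.9, 0.8]          # three high-signal metrics
    with_weak = strong + [0.2]        # add one weak candidate metric
    for rho in (0.0, 0.6):            # independent vs. correlated metric errors
        print(rho, index_r2(strong, 0.5, rho), index_r2(with_weak, 0.5, rho))
    ```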

  3. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.
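
    For reference, the standard metric mentioned above, the circularity shape factor, is 4πA/P² for outline area A and perimeter P (1 for a circle, smaller otherwise). A small sketch computing it from polygon outline coordinates; the paper's geometric circularity metric itself is not reproduced here.

    ```python
    import numpy as np

    def circularity_shape_factor(x, y):
        """Circularity shape factor 4*pi*A / P**2 for a closed polygonal outline."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        # Shoelace formula for area, summed edge lengths for perimeter.
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        dx, dy = np.diff(np.append(x, x[0])), np.diff(np.append(y, y[0]))
        perimeter = np.hypot(dx, dy).sum()
        return 4 * np.pi * area / perimeter**2

    # A 200-vertex polygonal circle should score very close to 1.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    print(circularity_shape_factor(np.cos(t), np.sin(t)))
    ```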

  4. Uncooperative target-in-the-loop performance with backscattered speckle-field effects

    NASA Astrophysics Data System (ADS)

    Kansky, Jan E.; Murphy, Daniel V.

    2007-09-01

    Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range in which the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.

  5. Important LiDAR metrics for discriminating forest tree species in Central Europe

    NASA Astrophysics Data System (ADS)

    Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco

    2018-03-01

    Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We first assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated to each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5% as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.
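
    A schematic of the workflow above, on synthetic stand-in data (the threshold and hyperparameters are illustrative): drop one metric from each highly correlated pair (|r| > 0.7), then fit a Random Forest and read off feature importances.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n, n_metrics = 300, 10
    X = rng.normal(size=(n, n_metrics))   # stand-ins for leaf-on/leaf-off metrics
    y = rng.integers(0, 6, size=n)        # six species labels (random here)

    # Keep a metric only if it is not highly correlated with one already kept.
    r = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(n_metrics):
        if all(abs(r[j, k]) <= 0.7 for k in keep):
            keep.append(j)

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X[:, keep], y)
    print(rf.oob_score_, rf.feature_importances_)   # importance ranking per metric
    ```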

  6. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    PubMed

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  7. Geographic techniques and recent applications of remote sensing to landscape-water quality studies

    USGS Publications Warehouse

    Griffith, J.A.

    2002-01-01

    This article reviews recent advances in studies of landscape-water quality relationships using remote sensing techniques. With the increasing feasibility of using remotely sensed data, landscape-water quality studies can now be more easily performed on regional, multi-state scales. The traditional method of relating land use and land cover to water quality has been extended to include landscape pattern and other landscape information derived from satellite data. This article focuses on three items: 1) the increasing recognition of the importance of larger-scale studies of regional water quality that require a landscape perspective; 2) the increasing importance of remotely sensed data, such as the imagery-derived normalized difference vegetation index (NDVI) and vegetation phenological metrics derived from time-series NDVI data; and 3) landscape pattern. In some studies, using landscape pattern metrics explained some of the variation in water quality not explained by land use/cover. However, in some other studies, the NDVI metrics were even more highly correlated to certain water quality parameters than either landscape pattern metrics or land use/cover proportions. Although studies relating landscape pattern metrics to water quality have had mixed results, this recent body of work applying these landscape measures and satellite-derived metrics to water quality analysis has demonstrated their potential usefulness in monitoring watershed conditions across large regions.

  8. Multiple symbol partially coherent detection of MPSK

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1992-01-01

    It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
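
    A hedged Monte Carlo sketch of the decision rule described above: the metric is taken as a weighted sum of the coherent correlation Re{z} and the noncoherent magnitude |z|, with z = Σ_k r_k c_k*. The fixed weight gamma standing in for the loop-SNR-derived weighting is an assumption of this sketch; the paper derives the exact form.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M, N = 4, 4                    # QPSK, observation length in symbols
    const = np.exp(2j * np.pi * np.arange(M) / M)

    def detect(r, gamma):
        """Pick the sequence maximizing gamma*Re(z) + |z|, z = sum_k conj(c_k) r_k.
        Large gamma approaches coherent detection; gamma = 0 is the noncoherent metric."""
        best, best_seq = -np.inf, None
        for idx in np.ndindex(*(M,) * N):      # brute force over M**N sequences
            c = const[list(idx)]
            z = np.vdot(c, r)
            lam = gamma * z.real + abs(z)
            if lam > best:
                best, best_seq = lam, idx
        return best_seq

    # One noisy trial with a residual carrier phase from the tracking loop.
    tx = rng.integers(0, M, N)
    phase = rng.normal(0, 0.2)                 # partial coherence: phase jitter (rad)
    noise = 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    r = const[tx] * np.exp(1j * phase) + noise
    print(tx, detect(r, gamma=5.0))
    ```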

  9. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
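
    A worked sketch of the central claim, assuming the simple additive linear error model y = a + b·x + ε with ε ~ N(0, σ²): bias, mean square error, and the linear correlation coefficient all follow from the model parameters (a, b, σ) together with the reference statistics (μx, σx).

    ```python
    import numpy as np

    def metrics_from_error_model(a, b, sigma_eps, mu_x, sigma_x):
        """Bias, MSE, and correlation implied by y = a + b*x + eps."""
        bias = a + (b - 1) * mu_x
        mse = bias**2 + (b - 1) ** 2 * sigma_x**2 + sigma_eps**2
        corr = b * sigma_x / np.hypot(b * sigma_x, sigma_eps)
        return bias, mse, corr

    # Cross-check the closed forms against sample statistics from simulation.
    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 2.0, 100_000)
    y = 0.5 + 0.9 * x + rng.normal(0, 1.0, x.size)
    print(metrics_from_error_model(0.5, 0.9, 1.0, 10.0, 2.0))
    print(np.mean(y - x), np.mean((y - x) ** 2), np.corrcoef(x, y)[0, 1])
    ```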

  10. Vowel Acoustics in Dysarthria: Mapping to Perception

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The aim of the present report was to explore whether vowel metrics, demonstrated to distinguish dysarthric and healthy speech in a companion article (Lansford & Liss, 2014), are able to predict human perceptual performance. Method: Vowel metrics derived from vowels embedded in phrases produced by 45 speakers with dysarthria were…

  11. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

    Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely, coefficient of variation and amount of fade, are derived using the computed moments. These metrics quantify the possible variations in the channel gain and signal to noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound channel models for scattering density variations of the environment, experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels having distinct distribution in each hop are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the impact of the accuracy of the approaches. PMID:24701175
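
    A sketch of the moment computation described above: for independent Weibull hops, the Mellin-transform route reduces moments of the cascaded gain to products of per-hop moments, with E[Xⁿ] = λⁿΓ(1 + n/k) per hop. Treating the SNR as proportional to the squared gain (an assumption made here for illustration) gives the amount of fade.

    ```python
    from math import gamma, sqrt

    def cascaded_weibull_moment(n, scales, shapes):
        """E[Y^n] for Y = product of independent Weibull(scale, shape) gains."""
        m = 1.0
        for lam, k in zip(scales, shapes):
            m *= lam**n * gamma(1 + n / k)
        return m

    def cv_and_aof(scales, shapes):
        m1 = cascaded_weibull_moment(1, scales, shapes)
        m2 = cascaded_weibull_moment(2, scales, shapes)
        m4 = cascaded_weibull_moment(4, scales, shapes)
        cv = sqrt(m2 / m1**2 - 1)     # coefficient of variation of the gain
        aof = m4 / m2**2 - 1          # amount of fade of the squared gain (SNR)
        return cv, aof

    print(cv_and_aof([1.0, 1.0], [2.0, 2.0]))   # two-hop Rayleigh-like cascade
    ```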

  12. New Decentralized Algorithms for Spacecraft Formation Control Based on a Cyclic Approach

    DTIC Science & Technology

    2010-06-01

    space framework. As a metric of performance, a common quadratic norm that weights the performance error and the control effort is traded with the cost... R = D^T D, then the metric of interest is the square of the 2-norm from input w to output z. Given a system G with state-space description A... spaced logarithmic spiral formation. These results are derived for

  13. Shipboard Electrical System Modeling for Early-Stage Design Space Exploration

    DTIC Science & Technology

    2013-04-01

    method is demonstrated in several system studies. I. INTRODUCTION The integrated engineering plant (IEP) of an electric warship can be viewed as a...which it must operate [2], [4]. The desired IEP design should be dependable [5]. The operability metric has previously been defined as a measure of...the performance of an IEP during a specific scenario [2]. Dependability metrics have been derived from the operability metric as measures of the IEP

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for the specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics, and, based on such a model, provide guidance to atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space, by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate’s ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly-used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with the first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection. Analytical insights lead to valid guiding principles on fusion set size design.
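
    A Monte Carlo sketch of the surrogate model described above (a linear map with normally distributed perturbation, as in the abstract; all sizes and noise levels are illustrative), checking how much of the oracle's top-k atlas set a noisy surrogate recovers:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def selection_overlap(n_atlases=40, k=8, noise_sd=0.3, trials=2000):
        """Fraction of the oracle's top-k atlases that the surrogate's top-k
        recovers, when surrogate = monotone (linear) map of oracle + noise."""
        hits = 0
        for _ in range(trials):
            oracle = rng.uniform(0, 1, n_atlases)
            surrogate = 2 * oracle + 1 + rng.normal(0, noise_sd, n_atlases)
            top_o = set(np.argsort(oracle)[-k:])
            top_s = set(np.argsort(surrogate)[-k:])
            hits += len(top_o & top_s)
        return hits / (k * trials)

    for sd in (0.05, 0.3, 0.8):       # better surrogates -> higher overlap
        print(sd, round(selection_overlap(noise_sd=sd), 2))
    ```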

  15. Objective measurement of complex multimodal and multidimensional display formats: a common metric for predicting format effectiveness

    NASA Astrophysics Data System (ADS)

    Marshak, William P.; Darkow, David J.; Wesler, Mary M.; Fix, Edward L.

    2000-08-01

    Computer-based display designers have more sensory modes and more dimensions within sensory modality with which to encode information in a user interface than ever before. This elaboration of information presentation has made measuring display/format effectiveness and predicting display/format performance extremely difficult. A multivariate method has been devised which isolates critical information, physically measures its signal strength, and compares it with other elements of the display, which act like background noise. This Common Metric relates signal-to-noise ratios (SNRs) within each stimulus dimension, then combines SNRs among display modes, dimensions, and cognitive factors to predict display format effectiveness. Examples with their Common Metric assessment and validation in performance will be presented, along with the derivation of the metric. Implications of the Common Metric in display design and evaluation will be discussed.

  16. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
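
    A simplified sketch in the spirit of the paper's polynomial-time bounds (vertex labels only, ignoring edge edits, so this is not the full formulation): pad both vertex sets with "epsilon" slots for insertions/deletions and solve the resulting assignment problem.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assignment_edit_cost(labels_g, labels_h, c_sub=1.0, c_indel=1.0):
        """Vertex-assignment relaxation of graph edit distance over an
        (n+m) x (n+m) cost matrix with epsilon rows/columns."""
        n, m = len(labels_g), len(labels_h)
        BIG = 1e9                                    # forbid wrong epsilon pairings
        C = np.zeros((n + m, n + m))
        C[:n, :m] = [[0.0 if a == b else c_sub for b in labels_h] for a in labels_g]
        C[:n, m:] = BIG
        C[:n, m:][np.eye(n, dtype=bool)] = c_indel   # delete a g-vertex
        C[n:, :m] = BIG
        C[n:, :m][np.eye(m, dtype=bool)] = c_indel   # insert an h-vertex
        rows, cols = linear_sum_assignment(C)
        return C[rows, cols].sum()

    # Two toy molecules by atom labels: one substitution (O -> N) suffices.
    print(assignment_edit_cost(list("CCO"), list("CCN")))   # -> 1.0
    ```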

  17. Detecting understory plant invasion in urban forests using LiDAR

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Davis, Amy J.; Meentemeyer, Ross K.

    2015-06-01

    Light detection and ranging (LiDAR) data are increasingly used to measure structural characteristics of urban forests but are rarely used to detect the growing problem of exotic understory plant invaders. We explored the merits of using LiDAR-derived metrics alone and through integration with spectral data to detect the spatial distribution of the exotic understory plant Ligustrum sinense, a rapidly spreading invader in the urbanizing region of Charlotte, North Carolina, USA. We analyzed regional-scale L. sinense occurrence data collected over the course of three years with LiDAR-derived metrics of forest structure that were categorized into the following groups: overstory, understory, topography, and overall vegetation characteristics, and IKONOS spectral features - optical. Using random forest (RF) and logistic regression (LR) classifiers, we assessed the relative contributions of LiDAR and IKONOS derived variables to the detection of L. sinense. We compared the top performing models developed for a smaller, nested experimental extent using RF and LR classifiers, and used the best overall model to produce a predictive map of the spatial distribution of L. sinense across our county-wide study extent. RF classification of LiDAR-derived topography metrics produced the highest mapping accuracy estimates, outperforming IKONOS data by 17.5% and the integration of LiDAR and IKONOS data by 5.3%. The top performing model from the RF classifier produced the highest kappa of 64.8%, improving on the parsimonious LR model kappa by 31.1% with a moderate gain of 6.2% over the county extent model. Our results demonstrate the superiority of LiDAR-derived metrics over spectral data and fusion of LiDAR and spectral data for accurately mapping the spatial distribution of the forest understory invader L. sinense.

  18. Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living

    PubMed Central

    Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.

    2015-01-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612

  19. Assessing upper extremity motor function in practice of virtual activities of daily living.

    PubMed

    Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T

    2015-03-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.

  20. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  21. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
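
    A sketch of the kind of geometry and orientation distances the abstract refers to (the exact distance vector used in the paper may differ): eigenvalue mismatch for tensor shape, and the angle between principal eigenvectors for orientation.

    ```python
    import numpy as np

    def tensor_distances(D1, D2):
        """Geometry term (sorted-eigenvalue mismatch) and orientation term
        (angle between principal eigenvectors) for 3x3 diffusion tensors."""
        w1, v1 = np.linalg.eigh(D1)
        w2, v2 = np.linalg.eigh(D2)
        geometry = np.linalg.norm(np.sort(w1) - np.sort(w2))
        c = abs(v1[:, -1] @ v2[:, -1])       # |cos| of principal-direction angle
        orientation = np.arccos(np.clip(c, 0.0, 1.0))
        return geometry, orientation

    D_a = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # prolate tensor along x
    R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
    D_b = R @ D_a @ R.T                              # same shape, rotated 90 degrees
    print(tensor_distances(D_a, D_b))                # ~ (0, pi/2)
    ```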

  22. Parameter-space metric of semicoherent searches for continuous gravitational waves

    NASA Astrophysics Data System (ADS)

    Pletsch, Holger J.

    2010-08-01

    Continuous gravitational-wave (CW) signals such as emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for prior unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, understanding of the underlying parameter-space structure is requisite. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. From the search parameters (sky position, frequency, and frequency derivatives), solely the metric resolution in the frequency derivatives is found to significantly increase with the number of segments.
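
    In the usual notation (a sketch, not the paper's full coordinate construction): the mismatch of a template offset Δλ, with the semicoherent metric taken as the average of the N per-segment coherent metrics, reads

    ```latex
    % Fractional loss of the detection statistic for a template offset
    % \Delta\lambda, with \bar g the semicoherent (segment-averaged) metric:
    m(\lambda,\Delta\lambda) \;\approx\; \sum_{i,j} \bar g_{ij}(\lambda)\,
        \Delta\lambda^{i}\,\Delta\lambda^{j},
    \qquad
    \bar g_{ij}(\lambda) \;=\; \frac{1}{N}\sum_{k=1}^{N} g^{(k)}_{ij}(\lambda).
    ```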

  23. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
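
    A toy illustration of the proposed metric (the phase-error process here is a plain Gaussian draw, not a tracking-loop simulation): compute the standard deviation of the discriminator output and compare it against half of the arctangent pull-in region.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulated discriminator output (rad) over one evaluation window; in a
    # real receiver this comes from the tracking loop, not a Gaussian draw.
    phase_err = rng.normal(0.0, 0.18, 2000)
    disc = np.arctan2(np.sin(phase_err), np.cos(phase_err))

    sigma = disc.std()
    threshold = np.pi / 4      # half of the arctan pull-in region (+/- pi/2)
    print(f"tracking-error sigma = {sigma:.3f} rad, "
          f"{'within threshold' if sigma < threshold else 'loss-of-lock risk'}")
    ```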

  24. Metric learning for automatic sleep stage classification.

    PubMed

    Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung

    2013-01-01

    We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset with various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, the superior accuracy is achieved by performing classification on a low-dimensional feature space derived from time and frequency domains and without the need for artifact removal as a preprocessing step.
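
    A runnable stand-in for the approach described above, using scikit-learn's NeighborhoodComponentsAnalysis as a generic global-metric learner for k-NN on a toy dataset (the paper's learning objective and EEG features are different):

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # k-NN with the default Euclidean metric vs. k-NN in a learned global metric.
    plain = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    learned = make_pipeline(NeighborhoodComponentsAnalysis(random_state=0),
                            KNeighborsClassifier(n_neighbors=5)).fit(Xtr, ytr)
    print(plain.score(Xte, yte), learned.score(Xte, yte))
    ```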

  25. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
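
    A compact sketch of the structured-strategy comparison (a stand-in graph and standard NetworkX centralities; the paper also uses reciprocal and complement-derived closeness, and its performance metrics go beyond component size):

    ```python
    import networkx as nx

    G = nx.karate_club_graph()        # stand-in for the covert network

    def largest_component_after_removal(G, centrality, k=5):
        """One-shot strategy: drop the k highest-centrality nodes, then
        measure the largest surviving connected component."""
        scores = centrality(G)
        doomed = sorted(scores, key=scores.get, reverse=True)[:k]
        H = G.copy()
        H.remove_nodes_from(doomed)
        return max((len(c) for c in nx.connected_components(H)), default=0)

    for name, c in [("degree", nx.degree_centrality),
                    ("betweenness", nx.betweenness_centrality),
                    ("closeness", nx.closeness_centrality),
                    ("eigenvector", nx.eigenvector_centrality_numpy)]:
        print(name, largest_component_after_removal(G, c))
    ```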

  26. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
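
    In the linear-response notation the abstract refers to (a sketch of the standard formulation): the mean excess work of a finite-time protocol λ(t) is controlled by the friction tensor ζ(λ), which induces the thermodynamic length; optimal protocols run at constant "speed" along its geodesics.

    ```latex
    % Mean excess (dissipated) work in linear response, and the induced
    % thermodynamic length \mathcal{L}:
    \langle W_{\mathrm{ex}} \rangle \;\approx\; \int_{0}^{\tau}
        \dot{\lambda}^{\mathsf{T}}\, \zeta(\lambda)\, \dot{\lambda}\,\mathrm{d}t,
    \qquad
    \mathcal{L} \;=\; \int_{0}^{\tau}
        \sqrt{\dot{\lambda}^{\mathsf{T}}\, \zeta(\lambda)\, \dot{\lambda}}\;\mathrm{d}t.
    ```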

  27. Estimation of the fraction of absorbed photosynthetically active radiation (fPAR) in maize canopies using LiDAR data and hyperspectral imagery.

    PubMed

    Qin, Haiming; Wang, Cheng; Zhao, Kaiguang; Xi, Xiaohuan

    2018-01-01

    Accurate estimation of the fraction of absorbed photosynthetically active radiation (fPAR) for maize canopies is important for maize growth monitoring and yield estimation. The goal of this study is to explore the potential of using airborne LiDAR and hyperspectral data to better estimate maize fPAR. This study focuses on estimating maize fPAR from (1) height and coverage metrics derived from airborne LiDAR point cloud data; (2) vegetation indices derived from hyperspectral imagery; and (3) a combination of these metrics. Pearson correlation analyses were conducted to evaluate the relationships among LiDAR metrics, hyperspectral metrics, and field-measured fPAR values. Then, multiple linear regression (MLR) models were developed using these metrics. Results showed that (1) LiDAR height and coverage metrics provided good explanatory power (i.e., R2 = 0.81); (2) hyperspectral vegetation indices provided moderate interpretability (i.e., R2 = 0.50); and (3) the combination of LiDAR metrics and hyperspectral metrics improved the LiDAR model (i.e., R2 = 0.88). These results indicate that the LiDAR model offers a reliable method for estimating maize fPAR at a high spatial resolution and can be used for farmland management. Combining LiDAR and hyperspectral metrics led to better performance of maize fPAR estimation than LiDAR or hyperspectral metrics alone, which means that maize fPAR retrieval can benefit from the complementary nature of LiDAR-detected canopy structure characteristics and hyperspectral-captured vegetation spectral information.
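
    A schematic of the regression comparison above (purely synthetic data; coefficients and noise are invented for illustration): Pearson screening plus multiple linear regression on LiDAR, hyperspectral, and combined predictors.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    n = 120
    lidar = rng.normal(size=(n, 2))    # stand-ins: height percentile, canopy cover
    hyper = rng.normal(size=(n, 1))    # stand-in: a hyperspectral vegetation index
    fpar = (0.5 + 0.20 * lidar[:, 0] + 0.10 * lidar[:, 1]
            + 0.05 * hyper[:, 0] + rng.normal(0, 0.05, n))

    print(pearsonr(lidar[:, 0], fpar))             # screen metrics against fPAR
    for name, X in [("LiDAR", lidar), ("hyperspectral", hyper),
                    ("combined", np.hstack([lidar, hyper]))]:
        r2 = LinearRegression().fit(X, fpar).score(X, fpar)
        print(name, round(r2, 2))                  # cf. the paper's 0.81/0.50/0.88
    ```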

  28. A Simple, Powerful Method for Optimal Guidance of Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2005-01-01

    One of the most interesting and challenging aspects of formation guidance law design is the coupling of the orbit design and the science return. The analyst's role is more complicated than simply to design the formation geometry and evolution. He or she is also involved in designing a significant portion of the science instrument itself. The effectiveness of the formation as a science instrument is intimately coupled with the relative geometry and evolution of the collection of spacecraft. Therefore, the science return can be maximized by optimizing the orbit design according to a performance metric relevant to the science mission goals. In this work, we present a simple method for optimal formation guidance that is applicable to missions whose performance metric, requirements, and constraints can be cast as functions that are explicitly dependent upon the orbit states and spacecraft relative positions and velocities. We present a general form for the cost and constraint functions, and derive their semi-analytic gradients with respect to the formation initial conditions. The gradients are broken down into two types. The first type are gradients of the mission specific performance metric with respect to formation geometry. The second type are derivatives of the formation geometry with respect to the orbit initial conditions. The fact that these two types of derivatives appear separately allows us to derive and implement a general framework that requires minimal modification to be applied to different missions or mission phases. To illustrate the applicability of the approach, we conclude with applications to two missions: the Magnetospheric Multiscale mission (MMS), and the Laser Interferometer Space Antenna (LISA).

  29. A Simple, Powerful Method for Optimal Guidance of Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2006-01-01

    One of the most interesting and challenging aspects of formation guidance law design is the coupling of the orbit design and the science return. The analyst's role is more complicated than simply to design the formation geometry and evolution. He or she is also involved in designing a significant portion of the science instrument itself. The effectiveness of the formation as a science instrument is intimately coupled with the relative geometry and evolution of the collection of spacecraft. Therefore, the science return can be maximized by optimizing the orbit design according to a performance metric relevant to the science mission goals. In this work, we present a simple method for optimal formation guidance that is applicable to missions whose performance metric, requirements, and constraints can be cast as functions that are explicitly dependent upon the orbit states and spacecraft relative positions and velocities. We present a general form for the cost and constraint functions, and derive their semi-analytic gradients with respect to the formation initial conditions. The gradients are broken down into two types. The first type are gradients of the mission specific performance metric with respect to formation geometry. The second type are derivatives of the formation geometry with respect to the orbit initial conditions. The fact that these two types of derivatives appear separately allows us to derive and implement a general framework that requires minimal modification to be applied to different missions or mission phases. To illustrate the applicability of the approach, we conclude with applications to two missions: the Magnetospheric Multiscale mission (MMS) , and the Laser Interferometer Space Antenna (LISA).
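
    The separation of the two derivative types described above amounts to a chain-rule factorization, sketched here in generic notation:

    ```latex
    % J depends on the initial conditions x_0 only through the formation
    % geometry g(x_0), so the gradient splits into a mission-specific factor
    % and a reusable orbit-dynamics factor:
    \frac{\partial J}{\partial x_{0}}
      \;=\; \frac{\partial J}{\partial g}\,
            \frac{\partial g}{\partial x_{0}}.
    ```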

  30. Assessing technical performance in differential gene expression experiments with external spike-in RNA control ratio mixtures.

    PubMed

    Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc

    2014-09-25

    There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.

  31. On Information Metrics for Spatial Coding.

    PubMed

    Souza, Bryan C; Pavão, Rodrigo; Belchior, Hindiael; Tort, Adriano B L

    2018-04-01

    The hippocampal formation is involved in navigation, and its neuronal activity exhibits a variety of spatial correlates (e.g., place cells, grid cells). The quantification of the information encoded by spikes has been standard procedure to identify which cells have spatial correlates. For place cells, most of the established metrics derive from Shannon's mutual information (Shannon, 1948), and convey information rate in bits/s or bits/spike (Skaggs et al., 1993, 1996). Despite their widespread use, the performance of these metrics in relation to the original mutual information metric has never been investigated. In this work, using simulated and real data, we find that the current information metrics correlate less with the accuracy of spatial decoding than the original mutual information metric. We also find that the top informative cells may differ among metrics, and show a surrogate-based normalization that yields comparable spatial information estimates. Since different information metrics may identify different neuronal populations, we discuss current and alternative definitions of spatially informative cells, which affect the metric choice. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
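
    The Skaggs et al. (1993) information rate the abstract builds on can be written in a few lines; here is a sketch in bits/spike for a binned rate map (the bin definitions and occupancy weighting are the standard ones, which may differ in detail from any given study):

    ```python
    import numpy as np

    def skaggs_information(occupancy, rate_map):
        """Spatial information in bits/spike (Skaggs et al., 1993):
        I = sum_i p_i * (lambda_i / lambda) * log2(lambda_i / lambda)."""
        p = occupancy / occupancy.sum()
        lam_i = np.asarray(rate_map, float)
        lam = (p * lam_i).sum()                     # mean firing rate
        ratio = np.where(lam_i > 0, lam_i / lam, 1.0)   # 0*log(0) -> 0
        return float((p * ratio * np.log2(ratio)).sum())

    occ = np.ones(100)                         # uniform occupancy, 100 spatial bins
    rates = np.zeros(100); rates[40:50] = 8.0  # an idealized sharp place field
    print(skaggs_information(occ, rates))      # high bits/spike for a sharp field
    ```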

  32. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  33. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or if there was any systematic curve fitting performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used, the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
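
    The same systematic workflow translates directly out of Excel®. As a sketch (synthetic spectrum; the Gaussian-band-plus-linear-continuum model and all parameter values are invented for illustration), fitting a band and reporting the derived band-centre metric with its uncertainty and the initial guesses used:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss_with_continuum(x, depth, center, width, a, b):
        """Linear continuum minus a Gaussian absorption band."""
        return a + b * x - depth * np.exp(-0.5 * ((x - center) / width) ** 2)

    # Synthetic reflectance spectrum with an absorption band near 1000 nm.
    wl = np.linspace(800, 1200, 200)
    true = gauss_with_continuum(wl, 0.15, 1005.0, 40.0, 0.6, 1e-4)
    obs = true + np.random.default_rng(5).normal(0, 0.004, wl.size)

    p0 = [0.1, 1000.0, 30.0, 0.5, 0.0]   # record the initial guesses with the fit
    popt, pcov = curve_fit(gauss_with_continuum, wl, obs, p0=p0)
    print("band centre:", popt[1], "+/-", np.sqrt(pcov[1, 1]))
    ```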

  34. Your Brain on the Movies: A Computational Approach for Predicting Box-office Performance from Viewer’s Brain Responses to Movie Trailers

    PubMed Central

    Christoforou, Christoforos; Papadopoulos, Timothy C.; Constantinidou, Fofi; Theodorou, Maria

    2017-01-01

    The ability to anticipate the population-wide response of a target audience to a new movie or TV series, before its release, is critical to the film industry. Equally important is the ability to understand the underlying factors that drive or characterize viewer’s decision to watch a movie. Traditional approaches (which involve pilot test-screenings, questionnaires, and focus groups) have reached a plateau in their ability to predict the population-wide responses to new movies. In this study, we develop a novel computational approach for extracting neurophysiological electroencephalography (EEG) and eye-gaze based metrics to predict the population-wide behavior of movie goers. We further, explore the connection of the derived metrics to the underlying cognitive processes that might drive moviegoers’ decision to watch a movie. Towards that, we recorded neural activity—through the use of EEG—and eye-gaze activity from a group of naive individuals while watching movie trailers of pre-selected movies for which the population-wide preference is captured by the movie’s market performance (i.e., box-office ticket sales in the US). Our findings show that the neural based metrics, derived using the proposed methodology, carry predictive information about the broader audience decisions to watch a movie, above and beyond traditional methods. In particular, neural metrics are shown to predict up to 72% of the variance of the films’ performance at their premiere and up to 67% of the variance at following weekends; which corresponds to a 23-fold increase in prediction accuracy compared to current neurophysiological or traditional methods. We discuss our findings in the context of existing literature and hypothesize on the possible connection of the derived neurophysiological metrics to cognitive states of focused attention, the encoding of long-term memory, and the synchronization of different components of the brain’s rewards network. Beyond the practical implication in predicting and understanding the behavior of moviegoers, the proposed approach can facilitate the use of video stimuli in neuroscience research; such as the study of individual differences in attention-deficit disorders, and the study of desensitization to media violence. PMID:29311885

  35. Your Brain on the Movies: A Computational Approach for Predicting Box-office Performance from Viewer's Brain Responses to Movie Trailers.

    PubMed

    Christoforou, Christoforos; Papadopoulos, Timothy C; Constantinidou, Fofi; Theodorou, Maria

    2017-01-01

    The ability to anticipate the population-wide response of a target audience to a new movie or TV series, before its release, is critical to the film industry. Equally important is the ability to understand the underlying factors that drive or characterize viewer's decision to watch a movie. Traditional approaches (which involve pilot test-screenings, questionnaires, and focus groups) have reached a plateau in their ability to predict the population-wide responses to new movies. In this study, we develop a novel computational approach for extracting neurophysiological electroencephalography (EEG) and eye-gaze based metrics to predict the population-wide behavior of movie goers. We further, explore the connection of the derived metrics to the underlying cognitive processes that might drive moviegoers' decision to watch a movie. Towards that, we recorded neural activity-through the use of EEG-and eye-gaze activity from a group of naive individuals while watching movie trailers of pre-selected movies for which the population-wide preference is captured by the movie's market performance (i.e., box-office ticket sales in the US). Our findings show that the neural based metrics, derived using the proposed methodology, carry predictive information about the broader audience decisions to watch a movie, above and beyond traditional methods. In particular, neural metrics are shown to predict up to 72% of the variance of the films' performance at their premiere and up to 67% of the variance at following weekends; which corresponds to a 23-fold increase in prediction accuracy compared to current neurophysiological or traditional methods. We discuss our findings in the context of existing literature and hypothesize on the possible connection of the derived neurophysiological metrics to cognitive states of focused attention, the encoding of long-term memory, and the synchronization of different components of the brain's rewards network. Beyond the practical implication in predicting and understanding the behavior of moviegoers, the proposed approach can facilitate the use of video stimuli in neuroscience research; such as the study of individual differences in attention-deficit disorders, and the study of desensitization to media violence.

  36. Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.

    PubMed

    Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P

    2018-05-15

    The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantifications are still lacking, especially in heterogeneous ecosystems such as Savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. To account for these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions in the central Brazil region, which is a prominent example of a Savanna with significant carbon stocks that has been undergoing extensive land cover conversions. Here phenological metrics from the season 2014/2015 were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach made it possible to accurately approximate the original Landsat ETM+ and OLI EVI values and subsequently derive annual phenological metrics. Random Forest model performances varied between the three study areas with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote sensing based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous Savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.

  17. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovements, and number of submovements). Performance differed significantly between competent and noncompetent interventionalists on all three performance measures: motion metrics, error metrics, and the global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
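
    A sketch of one of the named motion metrics, spectral arc length, computed from a catheter-tip speed profile; the cut-off frequency, FFT padding, and normalization choices are assumptions for illustration rather than the authors' exact settings.

```python
# Spectral arc length: arc length of the normalized magnitude spectrum of
# a speed profile; more negative values indicate less smooth movement.
import numpy as np

def spectral_arc_length(speed, fs, f_cutoff=10.0, n_fft=4096):
    spectrum = np.abs(np.fft.rfft(speed, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    sel = freqs <= f_cutoff
    mag = spectrum[sel] / spectrum[0]      # normalize by the DC component
    f_norm = freqs[sel] / f_cutoff         # normalize the frequency axis
    # Sum of Euclidean segment lengths along the spectrum curve.
    return -np.sum(np.sqrt(np.diff(f_norm) ** 2 + np.diff(mag) ** 2))

fs = 100.0                                 # tracker sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
smooth = np.sin(np.pi * t / 2) ** 2        # single smooth submovement
jerky = smooth + 0.05 * np.sin(2 * np.pi * 8 * t)  # added tremor-like ripple
print(spectral_arc_length(smooth, fs), spectral_arc_length(jerky, fs))
```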

  18. Estimating Carbon Flux Phenology with Satellite-Derived Land Surface Phenology and Climate Drivers for Different Biomes: A Synthesis of AmeriFlux Observations

    PubMed Central

    Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; Liu, Jianhong; Mou, Minjie

    2013-01-01

    Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this study, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands, and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance of this methodology depended primarily on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. This methodology has potential for extrapolating CFP metrics over large areas for biomes with a distinct and detectable seasonal cycle, based on synoptic multi-temporal optical satellite data and climate data. PMID:24386441
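
    An illustrative sketch of the estimation idea: regressing a carbon flux phenology date (SCU) on a satellite LSP metric plus one climate driver. All data and coefficients are synthetic stand-ins.

```python
# Sketch: linear model of Start of Carbon Uptake (SCU, day of year) from
# an LSP start-of-season metric and a spring-temperature climate driver.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_site_years = 159
lsp_sos = rng.uniform(80, 140, n_site_years)    # LSP start of season (DOY)
spring_temp = rng.normal(8, 3, n_site_years)    # mean spring temperature (deg C)
scu = (0.8 * lsp_sos - 2.0 * spring_temp + 30
       + rng.normal(scale=6, size=n_site_years))

X = np.column_stack([lsp_sos, spring_temp])
pred = cross_val_predict(LinearRegression(), X, scu, cv=10)
rmse = np.sqrt(np.mean((pred - scu) ** 2))
print(f"SCU RMSE: {rmse:.1f} days")  # the study reports RMSE within 8.5 days
```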

  19. Estimating Carbon Flux Phenology with Satellite-Derived Land Surface Phenology and Climate Drivers for Different Biomes: A Synthesis of AmeriFlux Observations

    DOE PAGES

    Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; ...

    2013-12-27

    Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this paper, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. In conclusion, this methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data.

  20. Gap-metric-based robustness analysis of nonlinear systems with full and partial feedback linearisation

    NASA Astrophysics Data System (ADS)

    Al-Gburi, A.; Freeman, C. T.; French, M. C.

    2018-06-01

    This paper uses gap metric analysis to derive robustness and performance margins for feedback linearising controllers. Distinct from previous robustness analysis, it incorporates the case of output unstructured uncertainties, and is shown to yield general stability conditions which can be applied to both stable and unstable plants. It then expands on existing feedback linearising control schemes by introducing a more general robust feedback linearising control design which classifies the system nonlinearity into stable and unstable components and cancels only the unstable plant nonlinearities. This is done in order to preserve the stabilising action of the inherently stabilising nonlinearities. Robustness and performance margins are derived for this control scheme, and are expressed in terms of bounds on the plant nonlinearities and the accuracy of the cancellation of the unstable plant nonlinearity by the controller. Case studies then confirm reduced conservatism compared with standard methods.

  1. Storage Costs and Heuristics Interact to Produce Patterns of Aphasic Sentence Comprehension Performance

    PubMed Central

    Clark, David Glenn

    2012-01-01

    Background: Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. Method: A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. Results: All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent–Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. Conclusion: DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies. PMID:22590462
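
    A toy sketch of the DLT storage-cost idea described in the conclusion: at each word position, count syntactic dependencies that have been opened but not yet completed. The example sentence and dependency arcs are illustrative, not drawn from the study's stimuli.

```python
# Storage cost at word k = number of dependencies opened at or before k
# that complete only after k (a simplified reading of DLT storage cost).
def dlt_storage_costs(n_words, arcs):
    """arcs: (i, j) word-index pairs with i < j, a dependency completed at j."""
    costs = []
    for k in range(n_words):
        open_deps = sum(1 for i, j in arcs if i <= k < j)
        costs.append(open_deps)
    return costs

# "The reporter who the senator attacked admitted the error"
#  0    1       2   3   4       5        6        7   8
arcs = [(1, 6), (2, 5), (4, 5), (6, 8)]  # hypothetical dependency arcs
print(dlt_storage_costs(9, arcs))        # cost peaks inside the relative clause
```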

  2. Storage costs and heuristics interact to produce patterns of aphasic sentence comprehension performance.

    PubMed

    Clark, David Glenn

    2012-01-01

    Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent-Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies.

  3. Derivation of a Levelized Cost of Coating (LCOC) metric for evaluation of solar selective absorber materials

    DOE PAGES

    Ho, C. K.; Pacheco, J. E.

    2015-06-05

    A new metric, the Levelized Cost of Coating (LCOC), is derived in this paper to evaluate and compare alternative solar selective absorber coatings against a baseline coating (Pyromark 2500). In contrast to previous metrics that focused only on the optical performance of the coating, the LCOC includes costs, durability, and optical performance for more comprehensive comparisons among candidate materials. The LCOC is defined as the annualized marginal cost of the coating to produce a baseline annual thermal energy production. Costs include the cost of materials and labor for initial application and reapplication of the coating, as well as the cost of additional or fewer heliostats to yield the same annual thermal energy production as the baseline coating. Results show that important factors impacting the LCOC include the initial solar absorptance, thermal emittance, reapplication interval, degradation rate, reapplication cost, and downtime during reapplication. The LCOC can also be used to determine the optimal reapplication interval to minimize the levelized cost of energy production. As a result, similar methods can be applied more generally to determine the levelized cost of a component for other applications and systems.
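
    A heavily simplified sketch of how such a levelized-cost comparison might be computed, assuming the LCOC is the annualized coating cost (initial application, averaged reapplication, and the cost of extra heliostats needed to match the baseline's annual thermal output); the formula details and all numbers are assumptions, not the paper's derivation.

```python
# Simplified levelized-cost-of-coating sketch using a capital recovery
# factor; reapplication is approximated as an average annual expense.
def annualize(cost, rate, years):
    """Capital recovery factor times the up-front cost."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return cost * crf

def lcoc(apply_cost, reapply_cost, reapply_interval_yr,
         extra_heliostat_cost, discount_rate=0.07, life_yr=30):
    annual_coating = (annualize(apply_cost, discount_rate, life_yr)
                      + reapply_cost / reapply_interval_yr)
    annual_heliostats = annualize(extra_heliostat_cost, discount_rate, life_yr)
    return annual_coating + annual_heliostats  # $/yr to match baseline output

# Hypothetical candidate: cheaper to apply but degrades faster than baseline.
print(f"LCOC: ${lcoc(2.0e5, 1.5e5, 3, 4.0e5):,.0f}/yr")
```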

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivak, David; Crooks, Gavin

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  5. Evaluation of image quality metrics for the prediction of subjective best focus.

    PubMed

    Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S

    2010-03-01

    Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles-Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible wavelengths before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths, rather than the calculation method, may be the limiting factor in determining objective best focus from near-infrared WA measurements.

  6. Ability of LANDSAT-8 Oli Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as a carbon reservoir has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since measurement of aboveground carbon stocks in forests is a destructive, costly, and time-consuming activity, aerial and satellite remote sensing techniques have attracted much attention in this field. Although using aerial data to predict aboveground carbon stocks has proven highly accurate, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying aboveground carbon stocks of coppice oak forests in the Zagros Mountains, Iran. We used four different window sizes (3×3, 5×5, 7×7, and 9×9) and four different offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured, and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger window sizes for deriving texture metrics resulted in models with better fitting parameters. In addition, the correlation of the spectral bands used for deriving texture metrics in the regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Among the different metrics, mean and entropy entered most of the regression models. Overall, the models based on derived texture metrics were able to explain about half of the variation in aboveground carbon stocks. These results demonstrate that Landsat 8 derived texture metrics can be applied for mapping aboveground carbon stocks of coppice oak forests over large areas.
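
    A sketch of the texture-metric derivation for one window of a single band using scikit-image; the window size (9×9) and offset ([1,-1]) follow the abstract, while the image data and grey-level quantization are stand-ins.

```python
# Grey-level co-occurrence matrix (GLCM) texture metrics for one window.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
band = rng.integers(0, 64, size=(9, 9), dtype=np.uint8)  # one 9x9 window

# Offset [1,-1] corresponds to a diagonal neighbor; with symmetric=True,
# distance 1 at angle -45 degrees captures the same pairing.
glcm = graycomatrix(band, distances=[1], angles=[-np.pi / 4],
                    levels=64, symmetric=True, normed=True)

for prop in ("contrast", "correlation", "homogeneity", "ASM"):
    print(prop, float(graycoprops(glcm, prop)[0, 0]))

# Entropy (named in the abstract) is not a graycoprops property in older
# scikit-image releases, so compute it directly from the matrix.
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print("entropy", entropy)
```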

  7. Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report

    NASA Technical Reports Server (NTRS)

    Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar

    2013-01-01

    This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.

  8. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C.; Victor, B.

    We present the application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection (HIT-SI) experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven at a frequency of 14.5 kHz.
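
    A conceptual sketch of the BD-based comparison: the biorthogonal decomposition of a space-time probe matrix is an SVD, and one scalar agreement metric is the overlap between the dominant spatial modes of experiment and simulation. The 192-probe signals below are synthetic stand-ins, not HIT-SI data.

```python
# SVD-based mode comparison between two space-time data sets.
import numpy as np

rng = np.random.default_rng(3)
n_probes, n_times = 192, 400
t = np.linspace(0.0, 1.0, n_times)
spatial_mode = np.sin(2 * np.pi * np.arange(n_probes) / n_probes)

# Shared dominant structure plus independent noise in each data set.
exp_data = np.outer(spatial_mode, np.sin(2 * np.pi * 5 * t)) \
    + 0.1 * rng.normal(size=(n_probes, n_times))
sim_data = np.outer(spatial_mode, np.sin(2 * np.pi * 5 * t + 0.3)) \
    + 0.1 * rng.normal(size=(n_probes, n_times))

U_exp, _, _ = np.linalg.svd(exp_data, full_matrices=False)
U_sim, _, _ = np.linalg.svd(sim_data, full_matrices=False)

# |cosine| between the leading spatial modes (1 = perfect agreement).
overlap = abs(U_exp[:, 0] @ U_sim[:, 0])
print(f"leading-mode overlap: {overlap:.3f}")
```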

  9. Foresters' Metric Conversions program (version 1.0). [Computer program]

    Treesearch

    Jefferson A. Palmer

    1999-01-01

    The conversion of scientific measurements has become commonplace in the fields of engineering, research, and forestry. Foresters' Metric Conversions is a Windows-based computer program that quickly converts user-defined measurements from English to metric and from metric to English. Foresters' Metric Conversions was derived from the publication "Metric...

  10. Performance assessment of static lead-lag feedforward controllers for disturbance rejection in PID control loops.

    PubMed

    Yu, Zhenpeng; Wang, Jiandong

    2016-09-01

    This paper assesses the performance of feedforward controllers for disturbance rejection in univariate feedback plus feedforward control loops. The structures of the feedback and feedforward controllers are confined to proportional-integral-derivative and static lead-lag forms, respectively, and the effects of feedback controllers are not considered. The integral squared error (ISE) and total squared variation (TSV) are used as performance metrics. A performance index is formulated by comparing the current ISE and TSV metrics to their own lower bounds as performance benchmarks. A controller performance assessment (CPA) method is proposed to calculate the performance index from measurements. The proposed CPA method resolves two critical limitations of existing CPA methods, making it consistent with industrial scenarios. Numerical and experimental examples illustrate the effectiveness of the obtained results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
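
    A sketch of the two performance metrics named above, computed from a sampled disturbance-response error signal; the discrete TSV form used here (sum of squared successive differences) is an assumption for illustration.

```python
# ISE and TSV computed from a sampled error signal.
import numpy as np

def ise(error, dt):
    """Integral squared error, approximated by a Riemann sum."""
    return float(np.sum(error ** 2) * dt)

def tsv(signal):
    """Total squared variation: sum of squared successive changes."""
    return float(np.sum(np.diff(signal) ** 2))

dt = 0.01
t = np.arange(0, 10, dt)
# Decaying oscillatory error after a step disturbance (illustrative).
e = np.exp(-0.5 * t) * np.cos(2 * np.pi * 0.5 * t)

print(f"ISE = {ise(e, dt):.3f}, TSV = {tsv(e):.5f}")
# A performance index compares these values against their achievable
# lower bounds; an index near 1 indicates near-benchmark rejection.
```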

  11. Performance analysis of three-dimensional ridge acquisition from live finger and palm surface scans

    NASA Astrophysics Data System (ADS)

    Fatehpuria, Abhishika; Lau, Daniel L.; Yalla, Veeraganesh; Hassebrook, Laurence G.

    2007-04-01

    Fingerprints are one of the most commonly used and relied-upon biometric technologies. But often the captured fingerprint image is far from ideal due to imperfect acquisition techniques that can be slow and cumbersome to use without providing complete fingerprint information. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To overcome these difficulties we have been developing a noncontact scanning system for acquiring a 3-D scan of a finger with sufficiently high resolution, which is then converted into a 2-D rolled-equivalent image. In this paper, we describe quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2-D fingerprint images obtained by traditional means and the 2-D images obtained by unrolling the 3-D scans, and the quality of the acquired scans is quantified using the metrics.

  12. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult, and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included the number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.

  13. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics

    PubMed Central

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J.; Rubio, Roberto F.; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make “deadly force decisions” in challenging combat scenarios. The trained NN model was cross-validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629

  14. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70% and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
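
    An illustrative "off the shelf" pipeline in the spirit of the non-dictionary approach: n-gram features extracted from the report text itself, automated feature selection, and a standard classifier. The two toy reports are fabricated stand-ins for real pathology text.

```python
# Text-classification sketch: TF-IDF n-grams, chi-squared feature
# selection, and logistic regression, all standard scikit-learn parts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "infiltrating ductal carcinoma identified in the biopsy specimen",
    "benign fibrous tissue with no evidence of malignancy",
] * 50                      # toy corpus; real studies use thousands of reports
labels = [1, 0] * 50        # 1 = cancer case, 0 = negative

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # features from the text itself
    SelectKBest(chi2, k=20),               # automated feature selection
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, labels)
print(clf.predict(["biopsy shows carcinoma in situ"]))
```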

  15. Separating Movement and Gravity Components in an Acceleration Signal and Implications for the Assessment of Human Daily Physical Activity

    PubMed Central

    van Hees, Vincent T.; Gorzelniak, Lukas; Dean León, Emmanuel Carlos; Eder, Martin; Pias, Marcelo; Taherian, Salman; Ekelund, Ulf; Renström, Frida; Franks, Paul W.; Horsch, Alexander; Brage, Søren

    2013-01-01

    Introduction: Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. Methods: An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidean norm minus one [ENMO], Euclidean norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22–65 yr), and wrist in 63 women (20–35 yr) in whom daily activity-related energy expenditure (PAEE) was available. Results: In the robot experiment, HFEN+ had the lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency, while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). Conclusion: None of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, the choice of metric explains different degrees of variance in daily human physical activity. PMID:23626718
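
    A sketch of two of the evaluated metrics, assuming a raw tri-axial acceleration array in g units sampled at 100 Hz; the Butterworth order and 0.2 Hz cut-off are illustrative assumptions, not necessarily the study's settings.

```python
# ENMO and HFEN computed from a synthetic tri-axial acceleration signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: gravity on z plus a 2 Hz movement component and noise.
acc = np.stack([0.1 * np.sin(2 * np.pi * 2 * t),
                np.zeros_like(t),
                1.0 + 0.05 * rng.normal(size=t.size)], axis=1)

# ENMO: Euclidean norm minus one, negative values truncated to zero.
enmo = np.maximum(np.linalg.norm(acc, axis=1) - 1.0, 0.0)

# HFEN: Euclidean norm of the high-pass filtered per-axis signals.
b, a = butter(4, 0.2 / (fs / 2), btype="high")
hfen = np.linalg.norm(filtfilt(b, a, acc, axis=0), axis=1)

print(f"mean ENMO = {enmo.mean():.4f} g, mean HFEN = {hfen.mean():.4f} g")
```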

  16. Separating movement and gravity components in an acceleration signal and implications for the assessment of human daily physical activity.

    PubMed

    van Hees, Vincent T; Gorzelniak, Lukas; Dean León, Emmanuel Carlos; Eder, Martin; Pias, Marcelo; Taherian, Salman; Ekelund, Ulf; Renström, Frida; Franks, Paul W; Horsch, Alexander; Brage, Søren

    2013-01-01

    Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidean norm minus one [ENMO], Euclidean norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22-65 yr), and wrist in 63 women (20-35 yr) in whom daily activity-related energy expenditure (PAEE) was available. In the robot experiment, HFEN+ had the lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency, while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). In conclusion, none of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, the choice of metric explains different degrees of variance in daily human physical activity.

  17. On the relationship between tumour growth rate and survival in non-small cell lung cancer.

    PubMed

    Mistry, Hitesh B

    2017-01-01

    A recurrent question within oncology drug development is predicting phase III outcome for a new treatment using early clinical data. One approach to tackle this problem has been to derive metrics from mathematical models that describe tumour size dynamics, termed re-growth rate and time to tumour re-growth. These metrics have been shown to be strong predictors of overall survival in numerous studies, but there is debate about how they are derived and whether they are more predictive than empirical end-points. This work explores the issues raised in using model-derived metrics as predictors for survival analyses. Re-growth rate and time to tumour re-growth were calculated for three large clinical studies by forward and reverse alignment. The latter involves re-aligning patients to their time of progression. Hence, it accounts for the time taken to estimate re-growth rate and time to tumour re-growth, but also assesses whether these predictors correlate with survival from the time of progression. I found that neither re-growth rate nor time to tumour re-growth correlated with survival using reverse alignment. This suggests that the dynamics of tumours up until disease progression have no relationship to survival post progression. For prediction of a phase III trial, I found the metrics performed no better than empirical end-points. These results highlight that care must be taken when relating the dynamics of tumour imaging to survival, and that benchmarking new approaches against existing ones is essential.

  18. Evaluating CMIP5 Simulations of Historical Continental Climate with Koeppen Bioclimatic Metrics

    NASA Astrophysics Data System (ADS)

    Phillips, T. J.; Bonfils, C.

    2013-12-01

    The classic Koeppen bioclimatic classification scheme associates generic vegetation types (e.g., grassland, tundra, broadleaf or evergreen forests) with regional climate zones defined by their annual cycles of continental temperature (T) and precipitation (P), considered together. The locations or areas of Koeppen vegetation types derived from observational data thus can provide concise metrical standards for simultaneously evaluating climate simulations of T and P in naturally defined regions. The CMIP5 models' collective ability to correctly represent two variables that are critically important for living organisms at regional scales is therefore central to this evaluation. For this study, 14 Koeppen vegetation types are derived from annual-cycle climatologies of T and P in some 3 dozen CMIP5 simulations of the 1980-1999 period. Metrics for evaluating the ability of the CMIP5 models to simulate the correct locations and areas of each vegetation type, as well as measures of overall model performance, also are developed. It is found that the CMIP5 models are generally most deficient in simulating: 1) climates of drier Koeppen zones (e.g., desert, savanna, grassland, steppe vegetation types) located in the southwestern U.S. and Mexico, eastern Europe, southern Africa, and central Australia; 2) climates of regions such as central Asia and western South America where topography plays a key role. Details of regional T or P biases in selected simulations that exemplify general model performance problems also will be presented. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. [Figure: map of Koeppen vegetation types derived from observed T and P.]

  19. New metric for optimizing Continuous Loop Averaging Deconvolution (CLAD) sequences under the 1/f noise model

    PubMed Central

    Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei

    2017-01-01

    Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, and it requires an elaborate stimulus sequence design to attenuate the impact of noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but is less effective in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical EEG recordings, as well as on five real CLAD electroencephalogram (EEG) recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies, and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence and provide a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803

  20. A Sensor-Independent Gust Hazard Metric

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2001-01-01

    A procedure for calculating an intuitive hazard metric for gust effects on airplanes is described. The hazard metric is for use by pilots and is intended to replace subjective pilot reports (PIREPs) of the turbulence level. The hazard metric is composed of three numbers: the first describes the average airplane response to the turbulence, the second describes the positive peak airplane response to the gusts, and the third describes the negative peak airplane response to the gusts. The hazard metric is derived from any time history of vertical gust measurements and is thus independent of the sensor making the gust measurements. The metric is demonstrated for one simulated airplane encountering different types of gusts including those derived from flight data recorder measurements of actual accidents. The simulated airplane responses to the gusts compare favorably with the hazard metric.
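
    A minimal sketch of a three-number hazard metric of the kind described, derived from a vertical-gust time history; the first-order response filter below is a made-up stand-in for the simulated airplane model.

```python
# Three-number gust hazard metric: average response, positive peak,
# and negative peak, computed from a gust time history.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
fs = 20.0                                  # samples per second
gusts = rng.normal(scale=2.0, size=1200)   # vertical gust velocity, m/s

# Stand-in airplane response: first-order lag of the gust input.
alpha = 0.1
response = lfilter([alpha], [1, -(1 - alpha)], gusts)

hazard = (np.mean(np.abs(response)),   # average response
          np.max(response),            # positive peak
          np.min(response))            # negative peak
print("hazard metric (avg, +peak, -peak):", [f"{h:.2f}" for h in hazard])
```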

  1. Covariant Conformal Decomposition of Einstein Equations

    NASA Astrophysics Data System (ADS)

    Gourgoulhon, E.; Novak, J.

    It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature, and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulations of Einstein's equations are shown.
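
    In standard 3+1 notation (a sketch of the decomposition described, using assumed conventions rather than necessarily the authors' exact ones), the conformal 3-metric and the split of the extrinsic curvature read:

```latex
% gamma_{ij} is the physical 3-metric and gamma its determinant.
\[
  \tilde{\gamma}_{ij} = \gamma^{-1/3}\,\gamma_{ij},
  \qquad \det\!\big(\tilde{\gamma}_{ij}\big) = 1,
\]
% so \tilde{\gamma}_{ij} absorbs the determinant and transforms as a
% tensor density of weight -2/3. The extrinsic curvature K_{ij} splits
% into its trace K and a conformally rescaled trace-free part \tilde{A}_{ij}:
\[
  K_{ij} = \gamma^{1/3}\,\tilde{A}_{ij} + \tfrac{1}{3}\,\gamma_{ij}\,K,
  \qquad K = \gamma^{ij} K_{ij}.
\]
```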

  2. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Non-minimal derivative couplings of the composite metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia

    2015-11-01

    In the context of massive gravity, bi-gravity and multi-gravity non-minimal matter couplings via a specific composite effective metric were investigated recently. Even if these couplings generically reintroduce the Boulware-Deser ghost, this composite metric is unique in the sense that the ghost reemerges only beyond the decoupling limit and the matter quantum loop corrections do not detune the potential interactions. We consider non-minimal derivative couplings of the composite metric to matter fields for a specific subclass of Horndeski scalar-tensor interactions. We first explore these couplings in the mini-superspace and investigate in which scenario the ghost remains absent. We further study these non-minimal derivative couplings in the decoupling limit of the theory and show that the equation of motion for the helicity-0 mode remains second order in derivatives. Finally, we discuss preliminary implications for cosmology.

  4. Non-minimal derivative couplings of the composite metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia

    2015-11-04

    In the context of massive gravity, bi-gravity and multi-gravity non-minimal matter couplings via a specific composite effective metric were investigated recently. Even if these couplings generically reintroduce the Boulware-Deser ghost, this composite metric is unique in the sense that the ghost reemerges only beyond the decoupling limit and the matter quantum loop corrections do not detune the potential interactions. We consider non-minimal derivative couplings of the composite metric to matter fields for a specific subclass of Horndeski scalar-tensor interactions. We first explore these couplings in the mini-superspace and investigate in which scenario the ghost remains absent. We further study these non-minimal derivative couplings in the decoupling limit of the theory and show that the equation of motion for the helicity-0 mode remains second order in derivatives. Finally, we discuss preliminary implications for cosmology.

  5. Combining control input with flight path data to evaluate pilot performance in transport aircraft.

    PubMed

    Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney

    2008-11-01

    When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics which purely evaluate errors in flight path parameters. The adequacy of pilot performance is evaluated from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was subject to analysis using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.
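
    A sketch of the frequency-domain analysis described: estimating the power spectral density of a recorded control input with Welch's method and summarizing it with one possible scalar measure. The input signal and the 0.5 Hz split are illustrative assumptions.

```python
# Welch PSD of a synthetic control-column input mixing slow outer-loop
# corrections with rapid inner-loop activity.
import numpy as np
from scipy.signal import welch

fs = 10.0                                  # control-input sampling rate, Hz
rng = np.random.default_rng(0)
t = np.arange(0, 300, 1 / fs)
column_input = (np.sin(2 * np.pi * 0.05 * t)          # slow corrections
                + 0.3 * np.sin(2 * np.pi * 1.2 * t)   # rapid activity
                + 0.1 * rng.normal(size=t.size))

freqs, psd = welch(column_input, fs=fs, nperseg=512)
# Fraction of control power above 0.5 Hz, one possible workload proxy.
high = freqs > 0.5
print(f"high-frequency power fraction: {psd[high].sum() / psd.sum():.2f}")
```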

  6. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  7. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences.

    PubMed

    Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-09-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The clinical application of cWIA has, however, been limited by technical challenges, including a lack of standardization across different studies and the sensitivity of the derived indices to the processing parameters. Specifically, a critical step in WIA is noise removal for the evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter to reduce the high-frequency acquisition noise. The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivatives or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains few high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. The cWIA output, and consequently the derived clinical metrics, are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the robustness of cWIA by significantly reducing the outcome variability (by 60%).
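
    A sketch contrasting the two differentiation routes discussed: a Savitzky-Golay differentiator, whose output depends on window length and polynomial order, versus parameter-free central finite differences (np.gradient's second-order scheme stands in here for the higher-order scheme proposed). The test waveform is synthetic.

```python
# Compare Savitzky-Golay differentiation against central differences on
# a smooth ensemble-averaged waveform with small residual noise.
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                                # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
p = np.sin(2 * np.pi * 1.5 * t) \
    + 0.001 * np.random.default_rng(0).normal(size=t.size)

# Route 1: Savitzky-Golay as a differentiator (parameter-dependent).
dpdt_sg = savgol_filter(p, window_length=51, polyorder=3, deriv=1, delta=1 / fs)

# Route 2: parameter-free central differences (second-order accurate).
dpdt_cd = np.gradient(p, 1 / fs)

truth = 2 * np.pi * 1.5 * np.cos(2 * np.pi * 1.5 * t)
for name, d in (("savgol", dpdt_sg), ("central", dpdt_cd)):
    print(name, f"RMS error: {np.sqrt(np.mean((d - truth) ** 2)):.4f}")
```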

  8. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  9. Estimating regional wheat yield from the shape of decreasing curves of green area index temporal profiles retrieved from MODIS data

    NASA Astrophysics Data System (ADS)

    Kouadio, Louis; Duveiller, Grégory; Djaby, Bakary; El Jarroudi, Moussa; Defourny, Pierre; Tychon, Bernard

    2012-08-01

    Earth observation data, owing to their synoptic, timely and repetitive coverage, have been recognized as a valuable tool for crop monitoring at different levels. At the field level, the close correlation between green leaf area (GLA) during maturation and grain yield in wheat revealed that the onset and rate of senescence are important factors in determining wheat grain yield. Our study sought to explore a simple approach for wheat yield forecasting at the regional level, based on metrics derived from the senescence phase of the green area index (GAI) retrieved from remote sensing data. This study took advantage of recent methodological improvements in which imagery with high revisit frequency but coarse spatial resolution can be exploited to derive crop-specific GAI time series by selecting pixels whose ground-projected instantaneous field of view is dominated by the target crop: winter wheat. A logistic function was used to characterize the GAI senescence phase and derive the metrics of this phase. Four regression-based models involving these metrics (i.e., the maximum GAI value, the senescence rate and the thermal time taken to reach 50% of the green surface in the senescent phase) were related to official wheat yield data. The performance of such models at this regional scale showed that final yield could be estimated with an RMSE of 0.57 ton ha-1, or about 7% relative RMSE. Such an approach may serve as a first yield estimate to support better integrated yield assessments in operational systems.
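
    A sketch of the senescence-metric step: fitting a decreasing logistic to a GAI series after its seasonal peak and reading off the three metrics used in the regressions. Data and parameter values are synthetic stand-ins.

```python
# Fit a decreasing logistic to a post-peak GAI time series and extract
# the maximum GAI, senescence rate, and mid-senescence thermal time.
import numpy as np
from scipy.optimize import curve_fit

def logistic_decay(tt, gai_max, rate, t50):
    """Decreasing logistic: GAI falls from gai_max toward 0 around t50."""
    return gai_max / (1.0 + np.exp(rate * (tt - t50)))

tt = np.linspace(0, 600, 40)               # thermal time since peak (deg C d)
rng = np.random.default_rng(5)
gai = logistic_decay(tt, 4.2, 0.015, 320) + 0.1 * rng.normal(size=tt.size)

(gai_max, rate, t50), _ = curve_fit(logistic_decay, tt, gai, p0=[4, 0.01, 300])
print(f"max GAI = {gai_max:.2f}, senescence rate = {rate:.4f}, t50 = {t50:.0f}")
# These three metrics then enter a regression against official yield data.
```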

  10. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by comparing its quality scores with expert evaluations on test sites withheld from training. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
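
    A minimal sketch of the model structure described here (an ensemble of 30 bagged regression trees mapping 13 site variables to a quality score), assuming a recent scikit-learn (>= 1.2) and using synthetic stand-ins for the expert scores:

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.random((200, 13))    # 13 site variables (e.g., shrub cover)
        y = X @ rng.random(13) + rng.normal(0, 0.1, 200)  # stand-in scores

        # Ensemble of 30 bagged regression trees, mirroring the abstract;
        # the real model was trained on expert evaluations of sites.
        model = BaggingRegressor(estimator=DecisionTreeRegressor(),
                                 n_estimators=30, random_state=0).fit(X, y)
        print(model.predict(X[:3]))   # predicted quality for three sites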

  11. Comparing Amide-Forming Reactions Using Green Chemistry Metrics in an Undergraduate Organic Laboratory

    ERIC Educational Resources Information Center

    Fennie, Michael W.; Roth, Jessica M.

    2016-01-01

    In this laboratory experiment, upper-division undergraduate chemistry and biochemistry majors investigate amide-bond-forming reactions from a green chemistry perspective. Using hydrocinnamic acid and benzylamine as reactants, students perform three types of amide-forming reactions: an acid chloride derivative route; a coupling reagent promoted…

  12. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 human subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics, including: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by a model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
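
    The model comparison amounts to fitting regressions on different feature sets and comparing R² values. A toy version with synthetic data (all feature names and the generating model are invented for illustration):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        n = 140
        kinematic = rng.random((n, 4))      # e.g., speed, jerk, path metrics
        physiological = rng.random((n, 3))  # e.g., heart rate, EMG metrics
        movement_time = rng.random((n, 1))  # Fitts'-law style predictor
        difficulty = (kinematic.sum(1) + 0.5 * physiological.sum(1)
                      + rng.normal(0, 0.2, n))   # synthetic difficulty score

        for name, X in {"movement time": movement_time,
                        "kinematic": kinematic,
                        "fusion": np.hstack([kinematic, physiological])}.items():
            pred = LinearRegression().fit(X, difficulty).predict(X)
            print(name, round(r2_score(difficulty, pred), 3))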

  13. Using community-level metrics to monitor the effects of marine protected areas on biodiversity.

    PubMed

    Soykan, Candan U; Lewison, Rebecca L

    2015-06-01

Marine protected areas (MPAs) are used to protect species, communities, and their associated habitats, among other goals. Measuring MPA efficacy can be challenging, however, particularly when considering responses at the community level. We gathered 36 abundance and 14 biomass data sets on fish assemblages and used meta-analysis to evaluate the ability of 22 distinct community diversity metrics to detect differences in community structure between MPAs and nearby control sites. We also considered the effects of six covariates (MPA size, MPA age, the MPA size-age interaction, latitude, total species richness, and level of protection) on each metric. Some common metrics, such as species richness and Shannon diversity, did not differ consistently between MPA and control sites, whereas other metrics, such as total abundance and biomass, were consistently different across studies. Metric responses derived from the biomass data sets were more consistent than those based on the abundance data sets, suggesting that community-level biomass differs more predictably than abundance between MPA and control sites. Covariate analyses indicated that level of protection, latitude, MPA size, and the interaction between MPA size and age affect metric performance. These results highlight a handful of metrics, several of which are little known, that could be used to meet the increasing demand for community-level indicators of MPA effectiveness. © 2015 Society for Conservation Biology.
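
    For readers unfamiliar with the underlying effect-size machinery, a common choice in such MPA-versus-control meta-analyses is the log response ratio and its sampling variance; a minimal sketch with made-up numbers follows. This is a generic formulation, not necessarily the authors' exact estimator.

        import numpy as np

        def log_response_ratio(m_mpa, sd_mpa, n_mpa, m_ctl, sd_ctl, n_ctl):
            # Effect size for a metric measured in an MPA vs a control site,
            # with its usual large-sample variance.
            lrr = np.log(m_mpa / m_ctl)
            var = (sd_mpa**2 / (n_mpa * m_mpa**2)
                   + sd_ctl**2 / (n_ctl * m_ctl**2))
            return lrr, var

        # e.g., total fish biomass inside vs outside a reserve
        print(log_response_ratio(120.0, 30.0, 12, 80.0, 25.0, 12))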

  14. Defining Sustainability Metric Targets in an Institutional Setting

    ERIC Educational Resources Information Center

    Rauch, Jason N.; Newman, Julie

    2009-01-01

    Purpose: The purpose of this paper is to expand on the development of university and college sustainability metrics by implementing an adaptable metric target strategy. Design/methodology/approach: A combined qualitative and quantitative methodology is derived that both defines what a sustainable metric target might be and describes the path a…

  15. Development of a multimetric index for integrated assessment of salt marsh ecosystem condition

    USGS Publications Warehouse

    Nagel, Jessica L.; Neckles, Hilary A.; Guntenspergen, Glenn R.; Rocks, Erika N.; Schoolmaster, Donald; Grace, James B.; Skidds, Dennis; Stevens, Sara

    2018-01-01

    Tools for assessing and communicating salt marsh condition are essential to guide decisions aimed at maintaining or restoring ecosystem integrity and services. Multimetric indices (MMIs) are increasingly used to provide integrated assessments of ecosystem condition. We employed a theory-based approach that considers the multivariate relationship of metrics with human disturbance to construct a salt marsh MMI for five National Parks in the northeastern USA. We quantified the degree of human disturbance for each marsh using the first principal component score from a principal components analysis of physical, chemical, and land use stressors. We then applied a metric selection algorithm to different combinations of about 45 vegetation and nekton metrics (e.g., species abundance, species richness, and ecological and functional classifications) derived from multi-year monitoring data. While MMIs derived from nekton or vegetation metrics alone were strongly correlated with human disturbance (r values from −0.80 to −0.93), an MMI derived from both vegetation and nekton metrics yielded an exceptionally strong correlation with disturbance (r = −0.96). Individual MMIs included from one to five metrics. The metric-assembly algorithm yielded parsimonious MMIs that exhibit the greatest possible correlations with disturbance in a way that is objective, efficient, and reproducible.
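
    The metric-assembly step can be pictured as a greedy forward search that keeps adding the candidate metric which most increases the MMI's correlation with the disturbance score. The sketch below is a generic version of such an algorithm, not the authors' exact procedure.

        import numpy as np

        def assemble_mmi(metrics, disturbance):
            # Greedy forward selection: z-score candidates, then repeatedly
            # add the metric whose inclusion (averaged with those already
            # chosen) most improves |corr(MMI, disturbance)|.
            z = (metrics - metrics.mean(0)) / metrics.std(0)
            chosen, best_r = [], 0.0
            improved = True
            while improved:
                improved = False
                for j in range(z.shape[1]):
                    if j in chosen:
                        continue
                    r = abs(np.corrcoef(z[:, chosen + [j]].mean(1),
                                        disturbance)[0, 1])
                    if r > best_r:
                        best_r, best_j, improved = r, j, True
                if improved:
                    chosen.append(best_j)
            return chosen, best_r

        rng = np.random.default_rng(2)
        dist = rng.normal(size=60)                     # disturbance scores
        cand = np.column_stack([-dist + rng.normal(0, s, 60)
                                for s in (0.5, 1.0, 2.0, 4.0)])
        print(assemble_mmi(cand, dist))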

  16. Generalized Israel junction conditions for a fourth-order brane world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcerzak, Adam; Dabrowski, Mariusz P.

    2008-01-15

We discuss a general fourth-order theory of gravity on the brane. In general, the formulation of the junction conditions (except for Euler characteristics such as the Gauss-Bonnet term) leads to higher powers of the delta function and requires regularization. We suggest a way to avoid this problem by requiring the metric and its first derivative to be regular at the brane, the second derivative to have a kink, the third derivative of the metric to have a step-function discontinuity, and only the fourth derivative of the metric to give a delta function contribution to the field equations. Alternatively, we discuss the reduction of the fourth-order gravity to a second-order theory by introducing an extra tensor field. We formulate the appropriate junction conditions on the brane. We prove the equivalence of both theories. In particular, we prove the equivalence of the junction conditions with different assumptions related to the continuity of the metric along the brane.

  17. Measures of model performance based on the log accuracy ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.

  18. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
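
    As a concrete reading of the definitions in the two records above, the sketch below computes both metrics from the log accuracy ratio Q = ln(prediction/observation). The closed forms follow our reading of Morley et al. and should be checked against the paper; both quantities must be strictly positive.

        import numpy as np

        def log_accuracy_metrics(pred, obs):
            lnq = np.log(np.asarray(pred, float) / np.asarray(obs, float))
            # Median symmetric accuracy: a percentage-error-like measure
            # that treats over- and under-prediction symmetrically.
            msa = 100.0 * (np.exp(np.median(np.abs(lnq))) - 1.0)
            # Symmetric signed percentage bias.
            m = np.median(lnq)
            sspb = 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)
            return msa, sspb

        obs = np.array([1.0, 2.0, 5.0, 10.0])
        pred = np.array([1.2, 1.8, 6.0, 9.0])
        print(log_accuracy_metrics(pred, obs))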

  19. Structural phenotyping of stem cell-derived cardiomyocytes.

    PubMed

    Pasqualini, Francesco Silvio; Sheehy, Sean Paul; Agarwal, Ashutosh; Aratyn-Schaus, Yvonne; Parker, Kevin Kit

    2015-03-10

Structural phenotyping based on classical image feature detection has been adopted to elucidate the molecular mechanisms behind genetically or pharmacologically induced changes in cell morphology. Here, we developed a set of 11 metrics to capture the increasing sarcomere organization that occurs intracellularly during striated muscle cell development. To test our metrics, we analyzed the localization of the contractile protein α-actinin in a variety of primary and stem cell-derived cardiomyocytes. Further, we combined these metrics with data mining algorithms to score, in an unbiased manner, the phenotypic maturity of human-induced pluripotent stem cell-derived cardiomyocytes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; IHES, Institut des Hautes Études Scientifiques,Le Bois-Marie, 35 route de Chartres, F-91440 Bures-sur-Yvette

    2015-06-23

We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a “reference metric” which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton field, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  1. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; Strauss, Mikael von, E-mail: bernard@iap.fr, E-mail: deffayet@iap.fr, E-mail: strauss@iap.fr

    2015-06-01

We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a “reference metric” which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton field, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  2. Quantum Adiabatic Brachistochrone

    NASA Astrophysics Data System (ADS)

    Rezakhani, A. T.; Kuo, W.-J.; Hamma, A.; Lidar, D. A.; Zanardi, P.

    2009-08-01

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  3. Quantum adiabatic brachistochrone.

    PubMed

    Rezakhani, A T; Kuo, W-J; Hamma, A; Lidar, D A; Zanardi, P

    2009-08-21

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  4. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

    PubMed Central

    Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack

    2014-01-01

Background: Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky–Golay filter, to reduce the high frequency acquisition noise. Methods: The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion: The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
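
    The proposed replacement for the filter-as-differentiator step is easy to picture: differentiate the ensemble-averaged waveform with central finite differences instead of a Savitzky-Golay filter. The sketch below contrasts the two on a toy trace; it uses numpy's second-order central gradient for brevity, whereas the paper employs higher-order central schemes.

        import numpy as np
        from scipy.signal import savgol_filter

        t = np.linspace(0, 1, 500)             # one ensemble-averaged beat
        dt = t[1] - t[0]
        p = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(t.size)  # toy trace

        # Savitzky-Golay as a differentiator: output depends on the chosen
        # window length and polynomial order, the sensitivity at issue here.
        dp_sg = savgol_filter(p, window_length=31, polyorder=3, deriv=1,
                              delta=dt)

        # Parameter-free alternative: central finite differences.
        dp_cd = np.gradient(p, dt)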

  5. Study of the Ernst metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esteban, E.P.

In this thesis some properties of the Ernst metric are studied. This metric could provide a model for a Schwarzschild black hole immersed in a magnetic field. In chapter I, some standard properties of the Ernst metric, such as the affine connections, the Riemann, the Ricci, and the Weyl conformal tensor, are calculated. In chapter II, the geodesics described by test particles in the Ernst space-time are studied. As an application, a formula for the perihelion shift is derived. In the last chapter a null tetrad analysis of the Ernst metric is carried out and the resulting formalism applied to the study of three problems. First, the algebraic classification of the Ernst metric is determined to be of type I in the Petrov scheme. Secondly, an explicit formula for the Gaussian curvature of the event horizon is derived. Finally, the form of the electromagnetic field is evaluated.

  6. Ground-state information geometry and quantum criticality in an inhomogeneous spin model

    NASA Astrophysics Data System (ADS)

    Ma, Yu-Quan

    2015-09-01

    We investigate the ground-state Riemannian metric and the cyclic quantum distance of an inhomogeneous quantum spin-1/2 chain in a transverse field. This model can be diagonalized by using a general canonical transformation to the fermionic Hamiltonian mapped from the spin system. The ground-state Riemannian metric is derived exactly on a parameter manifold ring S1, which is introduced by performing a gauge transformation to the spin Hamiltonian through a twist operator. The cyclic ground-state quantum distance and the second derivative of the ground-state energy are studied in different exchange coupling parameter regions. Particularly, we show that, in the case of exchange coupling parameter Ja = Jb, the quantum ferromagnetic phase can be characterized by an invariant quantum distance and this distance will decay to zero rapidly in the paramagnetic phase. Project supported by the National Natural Science Foundation of China (Grant Nos. 11404023 and 11347131).

  7. Insight from uncertainty: bootstrap-derived diffusion metrics differentially predict memory function among older adults.

    PubMed

    Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M

    2016-01-01

    Diffusion tensor imaging suffers from an intrinsic low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship of the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.

  8. Black holes thermodynamics in a new kind of noncommutative geometry

    NASA Astrophysics Data System (ADS)

    Faizal, Mir; Amorim, R. G. G.; Ulhoa, S. C.

Motivated by the energy-dependent metric in gravity’s rainbow, we will propose a new kind of energy-dependent noncommutative geometry. It will be demonstrated that, like gravity’s rainbow, this new noncommutative geometry is described by an energy-dependent metric. We will analyze the effect of this noncommutative deformation on Schwarzschild black holes and Kerr black holes. We will perform our analysis by relating the commutative and this new energy-dependent noncommutative metric using an energy-dependent Moyal star product. We will also analyze the thermodynamics of these new noncommutative black hole solutions. We will explicitly derive expressions for the corrected entropy and temperature of these black hole solutions. It will be demonstrated that, for these deformed solutions, black hole remnants cannot form. This is because these corrections increase rather than reduce the temperature of the black holes.

  9. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to the study of the hover control task for a CH-46 VTOL flight tested by NASA.

  10. Rotating metric in nonsingular infinite derivative theories of gravity

    NASA Astrophysics Data System (ADS)

    Cornell, Alan S.; Harmsen, Gerhard; Lambiase, Gaetano; Mazumdar, Anupam

    2018-05-01

    In this paper, we will provide a nonsingular rotating spacetime metric for a ghost-free infinite derivative theory of gravity in a linearized limit. We will provide the predictions for the Lense-Thirring effect for a slowly rotating system, and how it is compared with that from general relativity.

  11. Stress in Harmonic Serialism

    ERIC Educational Resources Information Center

    Pruitt, Kathryn Ringler

    2012-01-01

    This dissertation proposes a model of word stress in a derivational version of Optimality Theory (OT) called Harmonic Serialism (HS; Prince and Smolensky 1993/2004, McCarthy 2000, 2006, 2010a). In this model, the metrical structure of a word is derived through a series of optimizations in which the "best" metrical foot is chosen…

  12. Measuring Sustainability: Deriving Metrics From Objectives (Presentation)

    EPA Science Inventory

    The definition of 'sustain', to keep in existence, provides some insight into the metrics that are required to measure sustainability and adequately respond to assure sustainability. Keeping something in existence implies temporal and spatial contexts and requires metrics that g...

  13. Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan; Jung, Yoon; Lee, Hanbong; Schier, Sebastian; Okuniek, Nikolai; Gerdes, Ingrid

    2016-01-01

In this work, fast-time simulations have been conducted by NASA using SARDA tools at Hamburg airport, and real-time simulations by DLR using CADEO and TRACC with the NLR ATM Research Simulator (NARSIM). The outputs are analyzed using a set of common metrics developed collaboratively by DLR and NASA. The proposed metrics are derived from the International Civil Aviation Organization (ICAO)'s Key Performance Areas (KPAs) of capacity, efficiency, predictability and environment, and adapted to simulation studies. The results are examined to explore and compare the merits and shortcomings of the two approaches using the common performance metrics. Particular attention is paid to the concept of closed-loop, trajectory-based taxiing as well as the application of the US concept to the European airport. Both teams consider the trajectory-based surface operation concept a critical technology advance, not only for addressing current surface traffic management problems but also for its potential application to unmanned vehicle maneuvering on the airport surface, such as autonomous towing or TaxiBot [6][7] and even Remotely Piloted Aircraft (RPA). Based on this work, a future integration of TRACC and SOSS is described, aiming at bringing the conflict-free, trajectory-based operation concept to US airports.

  14. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner; and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance, with application to linear and nonlinear model problems.

  15. Aluminum-Mediated Formation of Cyclic Carbonates: Benchmarking Catalytic Performance Metrics.

    PubMed

    Rintjema, Jeroen; Kleij, Arjan W

    2017-03-22

    We report a comparative study on the activity of a series of fifteen binary catalysts derived from various reported aluminum-based complexes. A benchmarking of their initial rates in the coupling of various terminal and internal epoxides in the presence of three different nucleophilic additives was carried out, providing for the first time a useful comparison of activity metrics in the area of cyclic organic carbonate formation. These investigations provide a useful framework for how to realistically valorize relative reactivities and which features are important when considering the ideal operational window of each binary catalyst system. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Intravoxel Incoherent Motion–derived Histogram Metrics for Assessment of Response after Combined Chemotherapy and Radiation Therapy in Rectal Cancer: Initial Experience and Comparison between Single-Section and Volumetric Analyses

    PubMed Central

    Vargas, Hebert Alberto; Lakhman, Yulia; Sudre, Romain; Do, Richard K. G.; Bibeau, Frederic; Azria, David; Assenat, Eric; Molinari, Nicolas; Pierredon, Marie-Ange; Rouanet, Philippe; Guiu, Boris

    2016-01-01

Purpose: To determine the diagnostic performance of intravoxel incoherent motion (IVIM) parameters and apparent diffusion coefficient (ADC) to assess response to combined chemotherapy and radiation therapy (CRT) in patients with rectal cancer by using histogram analysis derived from whole-tumor volumes and single-section regions of interest (ROIs). Materials and Methods: The institutional review board approved this retrospective study of 31 patients with rectal cancer who underwent magnetic resonance (MR) imaging before and after CRT, including diffusion-weighted imaging with 34 b values prior to surgery. Patient consent was not required. ADC, perfusion-related diffusion fraction (f), slow diffusion coefficient (D), and fast diffusion coefficient (D*) were calculated on MR images acquired before and after CRT by using biexponential fitting. ADC and IVIM histogram metrics and median values were obtained by using whole-tumor volume and single-section ROI analyses. All ADC and IVIM parameters obtained before and after CRT were compared with histopathologic findings by using t tests with Holm-Sidak correction. Receiver operating characteristic curves were generated to evaluate the diagnostic performance of IVIM parameters derived from whole-tumor volume and single-section ROIs for prediction of histopathologic response. Results: Extreme values aside, results of histogram analysis of ADC and IVIM were equivalent to median values for tumor response assessment (P > .06). Prior to CRT, none of the median ADC and IVIM diffusion metrics correlated with subsequent tumor response (P > .36). Median D and ADC values derived from either whole-volume or single-section analysis increased significantly after CRT (P ≤ .01) and were significantly higher in good versus poor responders (P ≤ .02). Median IVIM f and D* values did not significantly change after CRT and were not associated with tumor response to CRT (P > .36). Interobserver agreement was excellent for whole-tumor volume analysis (range, 0.91–0.95) but was only moderate for single-section ROI analysis (range, 0.50–0.63). Conclusion: Median D and ADC values obtained after CRT were useful for discrimination between good and poor responders. Histogram metrics did not add to the median values for assessment of tumor response. Volumetric analysis demonstrated better interobserver reproducibility when compared with single-section ROI analysis. © RSNA, 2016 Online supplemental material is available for this article. PMID:26919562
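
    The IVIM quantities named here come from fitting a biexponential signal decay over the diffusion weighting b. A minimal sketch of such a fit (synthetic data, generic starting values and bounds; the study used 34 b values and per-voxel fitting):

        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, f, D, Dstar):
            # Biexponential IVIM model, signal normalized to S(b=0) = 1:
            # f is the perfusion-related fraction, D the slow and D* the
            # fast diffusion coefficient.
            return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

        b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], float)
        S = ivim(b, 0.15, 1.0e-3, 20e-3) * (1 + 0.01 * np.random.randn(b.size))

        (f, D, Dstar), _ = curve_fit(ivim, b, S, p0=[0.1, 1e-3, 10e-3],
                                     bounds=([0, 1e-4, 1e-3],
                                             [0.5, 3e-3, 1e-1]))
        print(f, D, Dstar)   # histogram/median analysis repeats this per voxel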

  17. A comparison of quantum limited dose and noise equivalent dose

    NASA Astrophysics Data System (ADS)

    Job, Isaias D.; Boyce, Sarah J.; Petrillo, Michael J.; Zhou, Kungang

    2016-03-01

Quantum-limited-dose (QLD) and noise-equivalent-dose (NED) are performance metrics often used interchangeably. Although the metrics are related, they are not equivalent unless the treatment of electronic noise is carefully considered. These metrics are increasingly important to properly characterize the low-dose performance of flat panel detectors (FPDs). A system can be said to be quantum-limited when the signal-to-noise ratio (SNR) is proportional to the square root of x-ray exposure. Recent experiments utilizing three methods to determine the quantum-limited dose range yielded inconsistent results. To investigate the deviation in results, generalized analytical equations are developed to model the image processing and analysis of each method. We test the generalized expression for both radiographic and fluoroscopic detectors. The resulting analysis shows that the total noise content of the images processed by each method is inherently different based on the readout scheme. Finally, it will be shown that the NED is equivalent to the instrumentation-noise-equivalent-exposure (INEE) and, furthermore, that the NED is derived from the quantum-noise-only method of determining QLD. Future investigations will measure quantum-limited performance of radiographic panels with a modified readout scheme to allow for noise improvements similar to measurements performed with fluoroscopic detectors.

  18. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Treesearch

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  19. Opportunities for High-Value Bioblendstocks to Enable Advanced Light- and Heavy-Duty Engines: Insights from the Co-Optima Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, John T

    Co-Optima research and analysis have identified fuel properties that enable advanced light-duty and heavy-duty engines. There are a large number of blendstocks readily derived from biomass that possess beneficial properties. Key research needs have been identified for performance, technology, economic, and environmental metrics.

  20. Metric on the space of quantum states from relative entropy. Tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Man'ko, Vladimir I.; Marmo, Giuseppe; Ventriglia, Franco; Vitale, Patrizia

    2017-08-01

In the framework of quantum information geometry, we derive, from the quantum relative Tsallis entropy, a family of quantum metrics on the space of full-rank, N-level quantum states, by means of a suitably defined coordinate-free differential calculus. The cases N=2, N=3 are discussed in detail and notable limits are analyzed. The radial limit procedure has been used to recover quantum metrics for lower-rank states, such as pure states. By using the tomographic picture of quantum mechanics we have obtained the Fisher-Rao metric for the space of quantum tomograms and derived a reconstruction formula for the quantum metric of density states out of the tomographic one. A new inequality obtained for probabilities of three spin-1/2 projections in three perpendicular directions is proposed to be checked in experiments with superconducting circuits.

  1. Diffusion kurtosis imaging probes cortical alterations and white matter pathology following cuprizone induced demyelination and spontaneous remyelination

    PubMed Central

    Guglielmetti, C.; Veraart, J.; Roelant, E.; Mai, Z.; Daans, J.; Van Audekerke, J.; Naeyaert, M.; Vanhoutte, G.; Delgado y Palacios, R.; Praet, J.; Fieremans, E.; Ponsaerts, P.; Sijbers, J.; Van der Linden, A.; Verhoye, M.

    2016-01-01

Although MRI is the gold standard for the diagnosis and monitoring of multiple sclerosis (MS), current conventional MRI techniques often fail to detect cortical alterations and provide little information about gliosis, axonal damage and myelin status of lesioned areas. Diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) provide sensitive and complementary measures of the neural tissue microstructure. Additionally, specific white matter tract integrity (WMTI) metrics modelling the diffusion in white matter were recently derived. In the current study we used the well-characterized cuprizone mouse model of central nervous system demyelination to assess the temporal evolution of diffusion tensor (DT), diffusion kurtosis tensor (DK) and WMTI-derived metrics following acute inflammatory demyelination and spontaneous remyelination. While DT-derived metrics were unable to detect cuprizone-induced cortical alterations, the mean kurtosis (MK) and radial kurtosis (RK) were found decreased under cuprizone administration, as compared to age-matched controls, in both the motor and somatosensory cortices. The MK remained decreased in the motor cortices at the end of the recovery period, reflecting long-lasting impairment of myelination. In white matter, DT, DK and WMTI-derived metrics enabled the detection of cuprizone-induced changes differentially according to the stage and the severity of the lesion. More specifically, MK, RK and the axonal water fraction (AWF) were the most sensitive for the detection of cuprizone-induced changes in the genu of the corpus callosum, a region less affected by cuprizone administration. Additionally, microgliosis was associated with an increase of MK and RK during the acute inflammatory demyelination phase. In regions undergoing severe demyelination, namely the body and splenium of the corpus callosum, DT-derived metrics, notably the mean diffusivity (MD) and radial diffusivity (RD), were among the best discriminators between cuprizone and control groups, hence highlighting their ability to detect both acute and long-lasting changes. Interestingly, WMTI-derived metrics showed the ability to distinguish between the different stages of the disease. Both the intra-axonal diffusivity (Da) and the AWF were found to be decreased in the cuprizone-treated group, with Da decreasing specifically during the acute inflammatory demyelinating phase, whereas the AWF decrease was associated with the spontaneous remyelination and the recovery period. Altogether our results demonstrate that DKI is sensitive to alterations of cortical areas and provides, along with WMTI metrics, information that is complementary to DT-derived metrics for the characterization of demyelination in both white and grey matter and subsequent inflammatory processes associated with a demyelinating event. PMID:26525654

  2. Global discrimination of land cover types from metrics derived from AVHRR pathfinder data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeFries, R.; Hansen, M.; Townshend, J.

    1995-12-01

Global data sets of land cover are a significant requirement for global biogeochemical and climate models. Remotely sensed satellite data are an increasingly attractive source for deriving these data sets due to the resulting internal consistency, reproducibility, and coverage in locations where ground knowledge is sparse. Seasonal changes in the greenness of vegetation, described in remotely sensed data as changes in the normalized difference vegetation index (NDVI) throughout the year, have been the basis for discriminating between cover types in previous attempts to derive land cover from AVHRR data at global and continental scales. This study examines the use of metrics derived from the NDVI temporal profile, as well as metrics derived from observations in red, infrared, and thermal bands, to improve discrimination between 12 cover types on a global scale. According to separability measures calculated from Bhattacharyya distances, average separabilities improved by using 12 of the 16 metrics tested (1.97) compared to separabilities using 12 monthly NDVI values alone (1.88). Overall, the most robust metrics for discriminating between cover types were: mean NDVI, maximum NDVI, NDVI amplitude, AVHRR Band 2 (near-infrared reflectance) and Band 1 (red reflectance) corresponding to the time of maximum NDVI, and maximum land surface temperature. Deciduous and evergreen vegetation can be distinguished by mean NDVI, maximum NDVI, NDVI amplitude, and maximum land surface temperature. Needleleaf and broadleaf vegetation can be distinguished by either mean NDVI and NDVI amplitude or maximum NDVI and NDVI amplitude.
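
    The core temporal-profile metrics are simple reductions of a 12-month NDVI series; a sketch with a made-up deciduous-like profile:

        import numpy as np

        def ndvi_metrics(monthly_ndvi):
            # Three of the robust discriminating metrics named above.
            v = np.asarray(monthly_ndvi, float)
            return {"mean_ndvi": v.mean(),
                    "max_ndvi": v.max(),
                    "ndvi_amplitude": v.max() - v.min()}

        deciduous = [0.2, 0.2, 0.3, 0.5, 0.7, 0.8,
                     0.8, 0.7, 0.5, 0.3, 0.2, 0.2]
        print(ndvi_metrics(deciduous))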

  3. The Metric System of Measurement (SI). Federal Register Notice of December 10, 1976.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC.

    This document provides a diagram illustrating the relationships between base units in the metric system and derived units with special names. Twenty-one derived units are included. The base units used are: measures of mass, length, time, amount of substance, electric current, thermo-dynamic temperature, luminous intensity, and plane and solid…

  4. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  5. Geometrizing adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Rezakhani, Ali; Kuo, Wan-Jung; Hamma, Alioscia; Lidar, Daniel; Zanardi, Paolo

    2010-03-01

    A time-optimal approach to adiabatic quantum computation (AQC) is formulated. The corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. We demonstrate this geometrization through some examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance. The underlying connection with quantum phase transitions is also explored.

  6. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
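
    Of the three validation metrics, the Dice similarity coefficient is the simplest to state; the sketch below thresholds a probabilistic segmentation at a sweep of decision thresholds and picks the Dice-maximizing one (synthetic arrays stand in for the MR data and the latent gold standard).

        import numpy as np

        def dice(seg, truth):
            # Dice similarity coefficient between two binary masks.
            inter = np.logical_and(seg, truth).sum()
            return 2.0 * inter / (seg.sum() + truth.sum())

        prob = np.random.rand(64, 64)          # probabilistic tumour map
        truth = prob > 0.5                     # stand-in gold standard
        thresholds = np.linspace(0.05, 0.95, 19)
        scores = [dice(prob > th, truth) for th in thresholds]
        print("optimal threshold:", thresholds[int(np.argmax(scores))])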

  7. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus for operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the needed research to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts, and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.

  8. Handbook for Metric Usage (First Edition).

    ERIC Educational Resources Information Center

    American Home Economics Association, Washington, DC.

    Guidelines for changing to the metric system of measurement with regard to all phases of home economics are presented in this handbook. Topics covered include the following: (1) history of the metric system, (2) the International System of Units (SI): derived units of length, mass, time, and electric current; temperature; luminous intensity;…

  9. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging

    NASA Astrophysics Data System (ADS)

    Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.

    2015-01-01

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.

  10. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging.

    PubMed

    Jerome, N P; Orton, M R; d'Arcy, J A; Feiweier, T; Tunariu, N; Koh, D-M; Leach, M O; Collins, D J

    2015-01-21

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
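
    In implementation terms, the change described in the two records above is just which reduction is applied across the stored-averages axis; a sketch assuming the repeats are stacked along the first axis:

        import numpy as np
        from scipy.stats import trim_mean

        # data: (n_averages, nz, ny, nx) repeated diffusion acquisitions,
        # stored separately rather than averaged on the scanner.
        data = np.random.rand(8, 10, 64, 64)

        mean_img = data.mean(axis=0)           # conventional average (blurs)
        median_img = np.median(data, axis=0)   # robust to respiratory outliers
        trimmed_img = trim_mean(data, 0.25, axis=0)  # drop 25% from each tail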

  11. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, the so-called pulse-to-noise ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
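
    One common formulation consistent with this description scores the decomposed source train by the energy ratio between identified discharge instants and the remaining samples; the sketch below is our reading, not necessarily the paper's exact estimator.

        import numpy as np

        def pulse_to_noise_ratio(train, pulse_idx):
            # PNR in dB: mean squared amplitude at identified MU discharges
            # vs mean squared amplitude of all other (noise) samples.
            t2 = np.asarray(train, float) ** 2
            noise = np.ones(t2.size, bool)
            noise[pulse_idx] = False
            return 10.0 * np.log10(t2[pulse_idx].mean() / t2[noise].mean())

        train = np.random.normal(0, 0.02, 2000)   # synthetic source estimate
        pulses = np.arange(50, 2000, 100)         # identified discharges
        train[pulses] = 1.0
        print(pulse_to_noise_ratio(train, pulses))  # ~34 dB, above the cut-off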

  12. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE PAGES

    Giera, Brian; Bukosky, Scott; Lee, Elaine; ...

    2018-01-23

Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.

  13. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Bukosky, Scott; Lee, Elaine

Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
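
    The CIEDE2000 color difference at the heart of this analysis is available off the shelf; a sketch using scikit-image, with synthetic frames standing in for the recorded video:

        import numpy as np
        from skimage.color import rgb2lab, deltaE_ciede2000

        # Two frames of the display as float RGB in [0, 1].
        frame_on = np.tile([0.1, 0.1, 0.1], (64, 64, 1))   # dark, deposited
        frame_off = np.tile([0.8, 0.8, 0.8], (64, 64, 1))  # bright, dispersed

        dE = deltaE_ciede2000(rgb2lab(frame_on), rgb2lab(frame_off))
        print(dE.mean())   # track per frame for the time-dependent curve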

  14. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
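
    The Chambolle-Pock machinery referred to above is easiest to see on a scalar problem. The sketch below applies the same primal-dual iteration to classic total-variation denoising rather than to the paper's LDDMM energy; it illustrates the algorithm, not the registration method itself:

    ```python
    import numpy as np

    def grad(u):
        # forward differences, Neumann boundary (zero at last row/column)
        gx, gy = np.zeros_like(u), np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        # negative adjoint of grad
        dx = np.zeros_like(px)
        dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
        dx[-1, :] = -px[-2, :]
        dy = np.zeros_like(py)
        dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
        dy[:, -1] = -py[:, -2]
        return dx + dy

    def cp_tv_denoise(f, lam=8.0, n_iter=200, tau=0.02):
        """min_u ||grad u||_1 + lam/2 ||u - f||^2 via Chambolle-Pock."""
        sigma = 1.0 / (8.0 * tau)          # tau*sigma*L^2 <= 1, L^2 = 8
        u, u_bar = f.copy(), f.copy()
        px, py = np.zeros_like(f), np.zeros_like(f)
        for _ in range(n_iter):
            # dual ascent + projection onto the unit ball (isotropic TV)
            gx, gy = grad(u_bar)
            px, py = px + sigma * gx, py + sigma * gy
            norm = np.maximum(1.0, np.hypot(px, py))
            px, py = px / norm, py / norm
            # primal descent + proximal step of the quadratic data term
            u_old = u
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2 * u - u_old          # over-relaxation
        return u
    ```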

  15. Performance metric comparison study for non-magnetic bi-stable energy harvesters

    NASA Astrophysics Data System (ADS)

    Udani, Janav P.; Wrigley, Cailin; Arrieta, Andres F.

    2017-04-01

    Energy harvesting employing non-linear systems offers considerable advantages over linear systems given the broadband resonant response, which is favorable for applications involving diverse input vibrations. In this respect, the rich dynamics of bi-stable systems present a promising means for harvesting vibrational energy from ambient sources. Harvesters deriving their bi-stability from thermally induced stresses as opposed to magnetic forces are receiving significant attention, as this reduces the need for ancillary components and allows for biocompatible constructions. However, the design of these bi-stable harvesters still requires further optimization to completely exploit the dynamic behavior of these systems. This study presents a comparison of the harvesting capabilities of non-magnetic, bi-stable composite laminates under variations in the design parameters, as evaluated utilizing established power metrics. Energy output characteristics of two bi-stable composite laminate plates with a piezoelectric patch bonded on the top surface are experimentally investigated for variations in the thickness ratio and inertial mass positions for multiple load conditions. A particular design configuration is found to perform better over the entire range of testing conditions, which include single and multiple frequency excitation, thus indicating that design optimization over the geometry of the harvester yields robust performance. The experimental analysis further highlights the need for appropriate design guidelines for optimization and holistic performance metrics to account for the range of operational conditions.

  16. Quantifying and visualizing site performance in clinical trials.

    PubMed

    Yang, Eric; O'Donovan, Christopher; Phillips, JodiLyn; Atkinson, Leone; Ghosh, Krishnendu; Agrafiotis, Dimitris K

    2018-03-01

    One of the keys to running a successful clinical trial is the selection of high-quality clinical sites, i.e., sites that are able to enroll patients quickly, engage them on an ongoing basis to prevent drop-out, and execute the trial in strict accordance with the clinical protocol. Intuitively, the historical track record of a site is one of the strongest predictors of its future performance; however, issues such as data availability and wide differences in protocol complexity can complicate interpretation. Here, we demonstrate how operational data derived from central laboratory services can provide key insights into the performance of clinical sites and help guide operational planning and site selection for new clinical trials. Our methodology uses the metadata associated with laboratory kit shipments to clinical sites (such as trial and anonymized patient identifiers, investigator names and addresses, sample collection and shipment dates, etc.) to reconstruct the complete schedule of patient visits and derive insights about the operational performance of those sites, including screening, enrollment, and drop-out rates and other quality indicators. This information can be displayed in its raw form or normalized to enable direct comparison of site performance across studies of varied design and complexity. Leveraging Covance's market leadership in central laboratory services, we have assembled a database of operational metrics that spans more than 14,000 protocols, 1400 indications, 230,000 unique investigators, and 23 million patient visits and represents a significant fraction of all clinical trials run globally in the last few years. By analyzing this historical data, we are able to assess and compare the performance of clinical investigators across a wide range of therapeutic areas and study designs. This information can be aggregated across trials and geographies to gain further insights into country and regional trends, sometimes with surprising results. The use of operational data from Covance Central Laboratories provides a unique perspective into the performance of clinical sites with respect to many important metrics such as patient enrollment and retention. These metrics can, in turn, be used to guide operational planning and site selection for new clinical trials, thereby accelerating recruitment, improving quality, and reducing cost.
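
    As a toy illustration of deriving site metrics from shipment metadata, the pandas sketch below reconstructs first visits and a protocol-normalized enrollment rate. The schema and file name are hypothetical; the actual Covance data model is not public:

    ```python
    import pandas as pd

    # Hypothetical schema: one row per laboratory kit shipment, with
    # columns protocol, site, patient (anonymized), visit_date.
    shipments = pd.read_csv("kit_shipments.csv", parse_dates=["visit_date"])

    # The first visit per patient approximates the enrollment date.
    first = (shipments.groupby(["protocol", "site", "patient"])["visit_date"]
                      .min().reset_index())

    def enrollment_rate(g):
        # patients enrolled per active month at the site
        months = ((g["visit_date"].max() - g["visit_date"].min()).days / 30.4) or 1
        return g["patient"].nunique() / months

    rates = (first.groupby(["protocol", "site"])
                  .apply(enrollment_rate).rename("rate").reset_index())

    # Normalize within each protocol so sites can be compared across
    # studies of different design and complexity.
    rates["rate_norm"] = rates.groupby("protocol")["rate"].transform(
        lambda r: r / r.median())
    ```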

  17. A Correlation Between Quality Management Metrics and Technical Performance Measurement

    DTIC Science & Technology

    2007-03-01

    Engineering Working Group; SME - Subject Matter Expert; SoS - System of Systems; SPI - Schedule Performance Index; SSEI - System of Systems Engineering and... and stated as such [Q, M, M&G]. The QMM equation is given by: QMM = 0.92RQM + 0.67EPM + 0.55RKM + 1.86PM, where RQM is the requirements management... schedule. Now if corrective action is not taken, the project/task will be completed behind schedule and over budget. ... As well as the derived

  18. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures

    PubMed Central

    Irfanoglu, M. Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B.; Sadeghi, Neda; Thomas, Cibu P.; Pierpaoli, Carlo

    2016-01-01

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. PMID:26931817

  19. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate Alignment of Anatomical Structures.

    PubMed

    Irfanoglu, M Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B; Sadeghi, Neda; Thomas, Cibu P; Pierpaoli, Carlo

    2016-05-15

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.
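
    The two simpler metrics in this comparison are easy to reproduce from uniform-phantom images; a minimal sketch (ROI positions and sizes are illustrative):

    ```python
    import numpy as np

    def roi_stats(img, y, x, size=50):
        roi = img[y:y + size, x:x + size]
        return roi.mean(), roi.std(ddof=1)

    def snr(img, bg_pos):
        """Signal-to-noise ratio of a uniform background ROI."""
        mean, std = roi_stats(img, *bg_pos)
        return mean / std

    def cnr(img, obj_pos, bg_pos):
        """Contrast-to-noise ratio between a contrast insert and
        the background."""
        m_obj, _ = roi_stats(img, *obj_pos)
        m_bg, s_bg = roi_stats(img, *bg_pos)
        return abs(m_obj - m_bg) / s_bg
    ```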

  1. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.

  2. Guiding optimal biofuels:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paap, Scott M.; West, Todd H.; Manley, Dawn Kataoka

    2013-01-01

    In the current study, processes to produce either ethanol or a representative fatty acid ethyl ester (FAEE) via the fermentation of sugars liberated from lignocellulosic materials pretreated in acid or alkaline environments are analyzed in terms of economic and environmental metrics. Simplified process models are introduced and employed to estimate process performance, and Monte Carlo analyses were carried out to identify key sources of uncertainty and variability. We find that the near-term performance of processes to produce FAEE is significantly worse than that of ethanol production processes for all metrics considered, primarily due to poor fermentation yields and higher electricity demands for aerobic fermentation. In the longer term, the reduced cost and energy requirements of FAEE separation processes will be at least partially offset by inherent limitations in the relevant metabolic pathways that constrain the maximum yield potential of FAEE from biomass-derived sugars.

  3. Feeling lucky? Using search engines to assess perceptions of urban sustainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keirstead, James

    2009-02-15

    The sustainability of urban environments is an important issue at both local and international scales. Indicators are frequently used by decision-makers seeking to improve urban performance, but these metrics can be dependent on sparse quantitative data. This paper explores the potential of an alternative approach, using an internet search engine to quickly gather qualitative data on the key attributes of cities. The method is applied to 21 world cities and the results indicate that, while the technique does shed light on direct and indirect aspects of sustainability, the validity of derived metrics as objective indicators of long-term sustainability is questionable. However, the method's ability to provide subjective short-term assessments is more promising, and it could therefore play an important role in participatory policy exercises such as public consultations. A number of promising technical improvements to the method's performance are also highlighted.

  4. Quantitative adaptation analytics for assessing dynamic systems of systems: LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauthier, John H.; Miner, Nadine E.; Wilson, Michael L.

    2015-01-01

    Our society is increasingly reliant on systems and interoperating collections of systems, known as systems of systems (SoS). These SoS are often subject to changing missions (e.g., nation-building, arms-control treaties), threats (e.g., asymmetric warfare, terrorism), natural environments (e.g., climate, weather, natural disasters) and budgets. How well can SoS adapt to these types of dynamic conditions? This report details the results of a three-year Laboratory Directed Research and Development (LDRD) project aimed at developing metrics and methodologies for quantifying the adaptability of systems and SoS. Work products include: derivation of a set of adaptability metrics, a method for combining the metrics into a system of systems adaptability index (SoSAI) used to compare adaptability of SoS designs, development of a prototype dynamic SoS (proto-dSoS) simulation environment which provides the ability to investigate the validity of the adaptability metric set, and two test cases that evaluate the usefulness of a subset of the adaptability metrics and SoSAI for distinguishing good from poor adaptability in a SoS. Intellectual property results include three patents pending: A Method For Quantifying Relative System Adaptability, Method for Evaluating System Performance, and A Method for Determining Systems Re-Tasking.

  5. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on the unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
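
    For readers unfamiliar with food web metrics, two of the standard structural ones can be computed directly from a directed adjacency matrix; a small sketch (the metric set used in the paper is richer):

    ```python
    import numpy as np

    def connectance(A):
        """Fraction of possible directed links that are realized."""
        n = A.shape[0]
        return A.sum() / n ** 2

    def linkage_density(A):
        """Average number of links per network actor."""
        return A.sum() / A.shape[0]

    # A[i, j] = 1 if material or energy flows from actor i to actor j
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    print(connectance(A), linkage_density(A))   # 0.25, 1.0
    ```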

  6. Implicit Contractive Mappings in Modular Metric and Fuzzy Metric Spaces

    PubMed Central

    Hussain, N.; Salimi, P.

    2014-01-01

    The notion of modular metric spaces, a natural generalization of classical modulars over linear spaces like Lebesgue, Orlicz, Musielak-Orlicz, Lorentz, Orlicz-Lorentz, and Calderon-Lozanovskii spaces, was recently introduced. In this paper we investigate the existence of fixed points of generalized α-admissible modular contractive mappings in modular metric spaces. As applications, we derive some new fixed point theorems in partially ordered modular metric spaces, Suzuki-type fixed point theorems in modular metric spaces and new fixed point theorems for integral contractions. In the last section, we develop an important relation between fuzzy metric and modular metric and deduce certain new fixed point results in triangular fuzzy metric spaces. Moreover, some examples are provided to illustrate the usability of the obtained results. PMID:25003157

  7. In vivo quantification of demyelination and recovery using compartment-specific diffusion MRI metrics validated by electron microscopy.

    PubMed

    Jelescu, Ileana O; Zurek, Magdalena; Winters, Kerryanne V; Veraart, Jelle; Rajaratnam, Anjali; Kim, Nathanael S; Babb, James S; Shepherd, Timothy M; Novikov, Dmitry S; Kim, Sungheon G; Fieremans, Els

    2016-05-15

    There is a need for accurate quantitative non-invasive biomarkers to monitor myelin pathology in vivo and distinguish myelin changes from other pathological features including inflammation and axonal loss. Conventional MRI metrics such as T2, magnetization transfer ratio and radial diffusivity have proven sensitivity but not specificity. In highly coherent white matter bundles, compartment-specific white matter tract integrity (WMTI) metrics can be directly derived from the diffusion and kurtosis tensors: axonal water fraction, intra-axonal diffusivity, and extra-axonal radial and axial diffusivities. We evaluate the potential of WMTI to quantify demyelination by monitoring the effects of both acute (6 weeks) and chronic (12 weeks) cuprizone intoxication and subsequent recovery in the mouse corpus callosum, and compare its performance with that of conventional metrics (T2, magnetization transfer, and DTI parameters). The changes observed in vivo correlated with those obtained from quantitative electron microscopy image analysis. A 6-week intoxication produced a significant decrease in axonal water fraction (p<0.001), with only mild changes in extra-axonal radial diffusivity, consistent with patchy demyelination, while a 12-week intoxication caused a more marked decrease in extra-axonal radial diffusivity (p=0.0135), consistent with more severe demyelination and clearance of the extra-axonal space. Results thus revealed increased specificity of the axonal water fraction and extra-axonal radial diffusivity parameters to different degrees and patterns of demyelination. The specificities of these parameters were corroborated by their respective correlations with microstructural features: the axonal water fraction correlated significantly with the electron microscopy derived total axonal water fraction (ρ=0.66; p=0.0014) but not with the g-ratio, while the extra-axonal radial diffusivity correlated with the g-ratio (ρ=0.48; p=0.0342) but not with the electron microscopy derived axonal water fraction. These parameters represent promising candidates as clinically feasible biomarkers of demyelination and remyelination in the white matter. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Parrotfish Size: A Simple yet Useful Alternative Indicator of Fishing Effects on Caribbean Reefs?

    PubMed Central

    Vallès, Henri; Oxenford, Hazel A.

    2014-01-01

    There is a great need to identify simple yet reliable indicators of fishing effects within the multi-species, multi-gear, data-poor fisheries of the Caribbean. Here, we investigate links between fishing pressure and three simple fish metrics, i.e. average fish weight (an estimate of average individual fish size), fish density and fish biomass, derived from (1) the parrotfish family, a ubiquitous herbivore family across the Caribbean, and (2) three fish groups of “commercial” carnivores including snappers and groupers, which are widely used as indicators of fishing effects. We hypothesize that, because most Caribbean reefs are being heavily fished, fish metrics derived from the less vulnerable parrotfish group would exhibit stronger relationships with fishing pressure on today’s Caribbean reefs than those derived from the highly vulnerable commercial fish groups. We used data from 348 Atlantic and Gulf Rapid Reef Assessment (AGRRA) reef surveys across the Caribbean to assess relationships between two independent indices of fishing pressure (one derived from human population density data, the other from open to fishing versus protected status) and the three fish metrics derived from the four aforementioned fish groups. We found that, although two fish metrics, average parrotfish weight and combined biomass of selected commercial species, were consistently negatively linked to the indices of fishing pressure across the Caribbean, the parrotfish metric consistently outranked the latter in the strength of the relationship, thus supporting our hypothesis. Overall, our study highlights that (assemblage-level) average parrotfish size might be a useful alternative indicator of fishing effects over the typical conditions of most Caribbean shallow reefs: moderate-to-heavy levels of fishing and low abundance of highly valued commercial species. PMID:24466009

  9. Parrotfish size: a simple yet useful alternative indicator of fishing effects on Caribbean reefs?

    PubMed

    Vallès, Henri; Oxenford, Hazel A

    2014-01-01

    There is a great need to identify simple yet reliable indicators of fishing effects within the multi-species, multi-gear, data-poor fisheries of the Caribbean. Here, we investigate links between fishing pressure and three simple fish metrics, i.e. average fish weight (an estimate of average individual fish size), fish density and fish biomass, derived from (1) the parrotfish family, a ubiquitous herbivore family across the Caribbean, and (2) three fish groups of "commercial" carnivores including snappers and groupers, which are widely used as indicators of fishing effects. We hypothesize that, because most Caribbean reefs are being heavily fished, fish metrics derived from the less vulnerable parrotfish group would exhibit stronger relationships with fishing pressure on today's Caribbean reefs than those derived from the highly vulnerable commercial fish groups. We used data from 348 Atlantic and Gulf Rapid Reef Assessment (AGRRA) reef surveys across the Caribbean to assess relationships between two independent indices of fishing pressure (one derived from human population density data, the other from open to fishing versus protected status) and the three fish metrics derived from the four aforementioned fish groups. We found that, although two fish metrics, average parrotfish weight and combined biomass of selected commercial species, were consistently negatively linked to the indices of fishing pressure across the Caribbean, the parrotfish metric consistently outranked the latter in the strength of the relationship, thus supporting our hypothesis. Overall, our study highlights that (assemblage-level) average parrotfish size might be a useful alternative indicator of fishing effects over the typical conditions of most Caribbean shallow reefs: moderate-to-heavy levels of fishing and low abundance of highly valued commercial species.

  10. Monoparametric family of metrics derived from classical Jensen-Shannon divergence

    NASA Astrophysics Data System (ADS)

    Osán, Tristán M.; Bussandri, Diego G.; Lamberti, Pedro W.

    2018-04-01

    Jensen-Shannon divergence is a well known multi-purpose measure of dissimilarity between probability distributions. It has been proven that the square root of this quantity is a true metric in the sense that, in addition to the basic properties of a distance, it also satisfies the triangle inequality. In this work we extend this last result to prove that in fact it is possible to derive a monoparametric family of metrics from the classical Jensen-Shannon divergence. Motivated by our results, an application into the field of symbolic sequences segmentation is explored. Additionally, we analyze the possibility to extend this result into the quantum realm.
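
    The metric member of this family corresponding to the classical case is available directly in SciPy, whose jensenshannon distance returns the square root of the divergence; a one-line check:

    ```python
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.2, 0.3, 0.5])

    # SciPy returns the square root of the Jensen-Shannon divergence,
    # i.e. the quantity proven to satisfy the triangle inequality.
    d = jensenshannon(p, q, base=2)
    print(d)
    ```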

  11. Effective distances for epidemics spreading on complex networks.

    PubMed

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.
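
    For contrast with the multiple-walk measure studied here, the baseline shortest-path effective distance of Brockmann and Helbing can be sketched in a few lines; the flux matrix below is a hypothetical three-node example:

    ```python
    import numpy as np
    import networkx as nx

    F = np.array([[0., 10., 1.],    # hypothetical passenger fluxes
                  [10., 0., 5.],
                  [1., 5., 0.]])
    P = F / F.sum(axis=1, keepdims=True)   # transition probabilities

    # Edge length 1 - ln(P) turns the most probable multiplicative path
    # into an additive shortest path ("effective distance").
    G = nx.DiGraph()
    n = P.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j and P[i, j] > 0:
                G.add_edge(i, j, weight=1.0 - np.log(P[i, j]))

    d_eff = nx.shortest_path_length(G, source=0, weight="weight")
    print(d_eff)   # effective distance from node 0 to every other node
    ```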

  12. Effective distances for epidemics spreading on complex networks

    NASA Astrophysics Data System (ADS)

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M.

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.

  13. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine is used to define the action space of the formulated Markov process. The state space of the Markov process is defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics conveniently relates feasible system output performance modifications to predictions of future component health deterioration.

  14. Condition assessment of nonlinear processes

    DOEpatents

    Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.

    2002-01-01

    A reliable technique is presented for measuring condition change in nonlinear data such as brain waves. The nonlinear data are filtered and discretized into windowed data sets. The system dynamics within each data set are represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
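
    A hedged sketch of the idea as described: represent each filtered, windowed data set by a distribution over discretized phase-space states, then compare windows by a distance between distributions (the symbolization parameters here are illustrative, not the patent's):

    ```python
    import numpy as np

    def phase_space_hist(x, n_sym=4, dim=3, lag=5):
        """Distribution over discretized phase-space states of a window."""
        # quantile-based symbolization into n_sym symbols
        edges = np.quantile(x, np.linspace(0, 1, n_sym + 1)[1:-1])
        s = np.digitize(x, edges)
        # encode each delay vector (s_t, s_{t+lag}, ...) as one integer
        idx = np.arange(x.size - (dim - 1) * lag)
        state = sum(s[idx + k * lag] * n_sym ** k for k in range(dim))
        return np.bincount(state, minlength=n_sym ** dim) / idx.size

    def chi2_distance(p, q):
        """Symmetric chi-squared distance between state distributions."""
        m = (p + q) / 2.0
        nz = m > 0
        return 0.5 * np.sum((p[nz] - q[nz]) ** 2 / m[nz])
    ```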

  15. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular for a region characterized by complex terrain such as the Korean Peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the ranking of this study may change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e. modelers or water resources managers) to understand and select more suitable downscaling methods corresponding to priorities in regional applications.
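
    TOPSIS itself reduces to a few lines of linear algebra; a hedged sketch of the ranking step, assuming a methods-by-metrics score matrix where `benefit` flags the metrics for which larger values are better:

    ```python
    import numpy as np

    def topsis(scores, weights=None, benefit=None):
        """Closeness of each alternative to the ideal solution.
        scores: (n_methods, n_metrics) matrix of performance scores."""
        X = np.asarray(scores, dtype=float)
        m, n = X.shape
        w = np.ones(n) / n if weights is None else np.asarray(weights, float)
        b = np.ones(n, bool) if benefit is None else np.asarray(benefit, bool)
        # vector-normalize each metric column, then apply weights
        V = w * X / np.linalg.norm(X, axis=0)
        ideal = np.where(b, V.max(axis=0), V.min(axis=0))
        nadir = np.where(b, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - nadir, axis=1)
        return d_minus / (d_plus + d_minus)   # rank in descending order
    ```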

  16. The Effect of Large Scale Climate Oscillations on the Land Surface Phenology of the Northern Polar Regions and Central Asia

    NASA Astrophysics Data System (ADS)

    de Beurs, K.; Henebry, G. M.; Owsley, B.; Sokolik, I. N.

    2016-12-01

    Land surface phenology metrics allow for the summarization of long image time series into a set of annual observations that describe the vegetated growing season. These metrics have been shown to respond to both large-scale climatic and anthropogenic impacts. In this study we assemble a time series (2001-2014) of Moderate Resolution Imaging Spectroradiometer (MODIS) Nadir BRDF-Adjusted Reflectance data and land surface temperature data at 0.05° spatial resolution. We then derive land surface phenology metrics focusing on the peak of the growing season by fitting quadratic regression models using NDVI and Accumulated Growing Degree-Days (AGDD) derived from land surface temperature. We link the annual information on the peak timing, the thermal time to peak and the maximum of the growing season with five of the most important large-scale climate oscillations: NAO, AO, PDO, PNA and ENSO. We demonstrate several significant correlations between the climate oscillations and the land surface phenology peak metrics for a range of different bioclimatic regions in both dryland Central Asia and the northern Polar Regions. We then link the correlation results with trends derived by the seasonal Mann-Kendall trend detection method applied to several satellite-derived vegetation and albedo datasets.
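
    The peak metrics described follow directly from the fitted quadratic; a minimal sketch, assuming per-pixel annual NDVI observations paired with AGDD:

    ```python
    import numpy as np

    def peak_metrics(agdd, ndvi):
        """Fit NDVI = c0 + c1*AGDD + c2*AGDD^2 and return the thermal
        time to peak (-c1 / 2*c2) and the fitted peak NDVI."""
        c2, c1, c0 = np.polyfit(agdd, ndvi, 2)   # highest power first
        tt_peak = -c1 / (2.0 * c2)               # vertex of the parabola
        peak_ndvi = c0 + c1 * tt_peak + c2 * tt_peak ** 2
        return tt_peak, peak_ndvi
    ```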

  17. Sketch Matching on Topology Product Graph.

    PubMed

    Liang, Shuang; Luo, Jun; Liu, Wenyin; Wei, Yichen

    2015-08-01

    Sketch matching is the fundamental problem in sketch based interfaces. After years of study, it remains challenging when there exists large irregularity and variations in the hand drawn sketch shapes. While most existing works exploit topology relations and graph representations for this problem, they are usually limited by the coarse topology exploration and heuristic (thus suboptimal) similarity metrics between graphs. We present a new sketch matching method with two novel contributions. We introduce a comprehensive definition of topology relations, which results in a rich and informative graph representation of sketches. For graph matching, we propose topology product graph that retains the full correspondence for matching two graphs. Based on it, we derive an intuitive sketch similarity metric whose exact solution is easy to compute. In addition, the graph representation and new metric naturally support partial matching, an important practical problem that received less attention in the literature. Extensive experimental results on a real challenging dataset and the superior performance of our method show that it outperforms the state-of-the-art.

  18. Newton gauge cosmological perturbations for static spherically symmetric modifications of the de Sitter metric

    NASA Astrophysics Data System (ADS)

    Santa Vélez, Camilo; Enea Romano, Antonio

    2018-05-01

    Static coordinates can be convenient for solving the vacuum Einstein equations in the presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable to describe an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild-de Sitter (SDS) metric and then derive general expressions for the Bardeen potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen potentials we then obtain a gauge-invariant definition of the turn-around radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen potentials are then derived for new SSS metrics involving logarithmic, power-law and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics which give flat rotation curves, computing the radial energy density profile in comoving coordinates in the presence of a cosmological constant.

  19. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.

  20. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.

  1. Information theoretic approach for assessing image fidelity in photon-counting arrays.

    PubMed

    Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram

    2010-02-01

    The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric that is between zero and unity, which respectively correspond to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and the classifier's performance.
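
    The normalized fidelity metric described above can be estimated empirically from an image pair with a joint histogram; a sketch (the bin count is illustrative, and the paper derives the quantity analytically rather than from histograms):

    ```python
    import numpy as np

    def pci_fidelity(src, pc, bins=64):
        """Mutual information between source and photon-counted images,
        normalized by the source entropy so the result lies in [0, 1]."""
        h, _, _ = np.histogram2d(src.ravel(), pc.ravel(), bins=bins)
        pxy = h / h.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        mi = np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))
        h_src = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        return mi / h_src
    ```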

  2. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for the evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
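
    Two of the model-less metrics reviewed are simple enough to state in code; a hedged sketch comparing a captured motion sequence against a reference repetition (equal-length sequences assumed; in practice temporal alignment would come first):

    ```python
    import numpy as np

    def rms_distance(patient, reference):
        """Root-mean-square distance between motion sequences of shape
        (frames, joint_coordinates)."""
        return np.sqrt(np.mean((patient - reference) ** 2))

    def kl_divergence(p_hist, q_hist, eps=1e-12):
        """Kullback-Leibler divergence between histograms of motion
        features (smoothed to avoid division by zero)."""
        p = p_hist / p_hist.sum() + eps
        q = q_hist / q_hist.sum() + eps
        return np.sum(p * np.log(p / q))
    ```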

  3. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It is shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
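
    One plausible reading of the fusion step, sketched with inverse-variance weighting (the paper's exact fusion rule may differ); each estimate would come from one structure or object of known extent:

    ```python
    import numpy as np

    def fuse_scales(estimates, variances):
        """Inverse-variance weighted fusion of individual metric-scale
        estimates (e.g., from lane width, room height, sign size)."""
        w = 1.0 / np.asarray(variances, dtype=float)
        return float(np.sum(w * np.asarray(estimates, float)) / np.sum(w))

    # hypothetical scale estimates (metres per map unit) and variances
    scale = fuse_scales([0.052, 0.048, 0.050], [1e-4, 4e-4, 2e-4])
    print(scale)
    ```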

  4. A Geometric Framework for the Kinematics of Crystals With Defects

    DTIC Science & Technology

    2006-02-01

    which parallel transport preserves dot products of vectors, i.e. ∇^G G = 0. It is called the Levi-Civita connection [57] or the Riemannian connection... yielding a null covariant derivative of the metric tensor is called a metric connection. The Levi-Civita connection of (8) is metric. Note that in... tensor formed by inserting the Levi-Civita connection (8) into (10). A geometric space B0 with metric G having R^G = 0 is called flat. One may show

  5. Deriving principal channel metrics from bank and long-profile geometry with the R package cmgo

    NASA Astrophysics Data System (ADS)

    Golly, Antonius; Turowski, Jens M.

    2017-09-01

    Landscape patterns result from landscape-forming processes. This link can be exploited in geomorphological research by analyzing the geometrical content of landscapes in reverse to develop or confirm theories of the underlying processes. Since rivers represent a dominant control on landscape formation, there is particular interest in examining channel metrics in a quantitative and objective manner. For example, river cross-section geometry is required to model local flow hydraulics, which in turn determine erosion and thus channel dynamics. Similarly, channel geometry is crucial for engineering purposes, water resource management, and ecological restoration efforts. These applications require a framework to capture and derive the data. In this paper we present an open-source software tool that performs the calculation of several channel metrics (length, slope, width, bank retreat, knickpoints, etc.) in an objective and reproducible way based on principal bank geometry that can be measured in the field or in a GIS. Furthermore, the software provides a framework to integrate spatial features, for example the abundance of species or the occurrence of knickpoints. The program is available at https://github.com/AntoniusGolly/cmgo and is free to use, modify, and redistribute under the terms of the GNU General Public License version 3 as published by the Free Software Foundation.

  6. Lidar aboveground vegetation biomass estimates in shrublands: Prediction, uncertainties and application to coarser scales

    USGS Publications Warehouse

    Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan

    2017-01-01

    Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics was used to train Random Forests (RF) regression models linking the metrics with field-measured biomass. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models, with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77-79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point-cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot level (1 ha) were achieved when Lidar metrics were derived from an average of fine-resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure and statistical measures of this structure (e.g., standard deviation of height was a strong predictor of biomass). Using our model results, we developed spatially explicit Lidar estimates of total and shrub biomass across our study site in the Great Basin, U.S.A., for monitoring and planning in this imperiled ecosystem.
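
    The regression workflow described maps directly onto scikit-learn; a hedged sketch with hypothetical file names and a generic cross-validated error estimate in place of the paper's exact validation design:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    # columns: Lidar structure metrics (e.g., height percentiles, std of
    # height, canopy cover); rows: field plots. File names are hypothetical.
    X = np.loadtxt("lidar_metrics.csv", delimiter=",", skiprows=1)
    y = np.loadtxt("shrub_biomass_g_m2.csv")

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    pred = cross_val_predict(rf, X, y, cv=5)
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"cross-validated RMSE: {rmse:.1f} g/m^2")
    ```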

  7. WE-FG-202-07: An MRI-Based Approach to Quantify Radiation-Induced Normal Tissue Injury Applied to Trismus After Head and Neck Cancer Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thor, M; Tyagi, N; Saleh, Z

    Purpose: The aim of this study was to investigate whether quantitative MRI-derived metrics from four masticatory muscles could explain mouth-opening limitation/trismus following intensity-modulated radiotherapy (IMRT) for head and neck cancer (HNC). Methods: Fifteen intensity-based MRI metrics were derived from the masseter, lateral and medial pterygoid, and temporalis in T1-weighted scans acquired pre- and post-gadolinium injection (T1Pre, T1Post) of 16, of in total 20, patients (8 symptomatic; 8 asymptomatic age/sex/tumor-location-matched) treated with IMRT to 70 Gy (median) for HNC in 2005-2009. Trismus was defined as “decreased range of motion without impaired eating” or worse (CTCAE v.3: ≥Grade 1). Trismus status was monitored and MRI scans acquired within 1 year post-RT. All MRI-derived metrics were assessed as ΔS = [S(T1Pre) - S(T1Post)]/S(T1Pre), and were normalized to the corresponding metric of a non-irradiated volume defined in each scan. The T1Pre structures were propagated onto the RT dose distribution, and the maximum and mean doses (Dmax, Dmean) were extracted. The MRI-derived metrics, Dmax, and Dmean were compared between trismus and non-trismus patients. A two-sided Wilcoxon signed-rank test p-value ≤ 0.05 denoted significance. Results: For all four muscles the population mean of Dmax and Dmean was higher for patients with trismus than for patients without trismus (ΔDmax = 2.3-4.9 Gy; ΔDmean = 2.0-3.8 Gy). The standard deviation (SD), the variance, and the minimum value (min) of ΔS were significantly (p = 0.04-0.05) different between patients with and without trismus, with trismus patients having significantly lower SD (population median: -0.53 vs. -0.31) and variance (-2.09 vs. -0.73) of the masseter, and significantly lower min of the medial pterygoid (-0.36 vs. -0.19). Conclusion: Quantitative MRI-derived metrics of two masticatory muscles were significantly different between patients with and without trismus following RT for HNC. These metrics could serve as image-based biomarkers to better understand the RT-induced etiology behind trismus, but should be further investigated in the complete cohort.

  8. On noncommutative Levi-Civita connections

    NASA Astrophysics Data System (ADS)

    Peterka, Mira A.; Sheu, Albert Jeu-Liang

    We make some observations about Rosenberg’s Levi-Civita connections on noncommutative tori, noting the non-uniqueness of general torsion-free metric-compatible connections without prescribed connection operator for the inner *-derivations, the nontrivial curvature form of the inner *-derivations, and the validity of the Gauss-Bonnet theorem for two classes of nonconformal deformations of the flat metric on the noncommutative two-tori, including the case of noncommuting scalings along the principal directions of a two-torus.

  9. Cross-layer protocol design for QoS optimization in real-time wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2010-04-01

    The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open System Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally tractable and lead to simplifications for any simulated or analytical performance evaluations of the cross-layer designs.

  10. A MULTIDISCIPLINARY APPROACH TO SUB-NATIONAL SUSTAINABILITY

    EPA Science Inventory

    The USEPA is investigating sustainability metrics from an economic and environmental perspective to determine their applicability at a sub-national level. Metrics are derived from Ecological Footprint, Emergy Analysis, Net Regional Product, and Fisher Information. We chose severa...

  11. SUB-NATIONAL SUSTAINABILITY FROM A MULTIDISCIPLINARY APPROACH

    EPA Science Inventory

    The USEPA is investigating sustainability metrics from an economic and environmental perspective to determine their applicability at a sub-national level. Metrics are derived from Ecological Footprint, Emergy Analysis, Net Regional Product, and Fisher Information. We chose severa...

  12. A comprehensive model for x-ray projection imaging system efficiency and image quality characterization in the presence of scattered radiation

    NASA Astrophysics Data System (ADS)

    Monnin, P.; Verdun, F. R.; Bosmans, H.; Rodríguez Pérez, S.; Marshall, N. W.

    2017-07-01

    This work proposes a method for assessing the detective quantum efficiency (DQE) of radiographic imaging systems that include both the x-ray detector and the antiscatter device. Cascaded linear analysis of the antiscatter device efficiency (DQEASD) with the x-ray detector DQE is used to develop a metric of system efficiency (DQEsys); the new metric is then related to the existing system efficiency parameters of effective DQE (eDQE) and generalized DQE (gDQE). The effect of scatter on signal transfer was modelled through its point spread function (PSF), leading to an x-ray beam transfer function (BTF) that multiplies with the classical presampling modulation transfer function (MTF) to give the system MTF. Expressions are then derived for the influence of scattered radiation on signal-difference to noise ratio (SDNR) and contrast-detail (c-d) detectability. The DQEsys metric was tested using two digital mammography systems, for eight x-ray beams (four with and four without scatter), matched in terms of effective energy. The model was validated through measurements of contrast, SDNR and MTF for poly(methyl methacrylate) thicknesses covering the range of scatter fractions expected in mammography. The metric also successfully predicted changes in c-d detectability for different scatter conditions. Scatter fractions for the four beams with scatter were established with the beam stop method using an extrapolation function derived from the scatter PSF, and validated through Monte Carlo (MC) simulations. Low-frequency drop of the MTF from scatter was compared to both theory and MC calculations. DQEsys successfully quantified the influence of the grid on SDNR and accurately gave the break-even object thickness at which system efficiency was improved by the grid. The DQEsys metric is proposed as an extension of current detector characterization methods to include a performance evaluation in the presence of scattered radiation, with an antiscatter device in place.
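
    The multiplicative signal-transfer model above (system MTF as the product of the presampling MTF and a scatter-driven BTF) can be sketched numerically. Everything in the snippet (the toy presampling MTF, the Gaussian scatter transfer function, and the scatter fraction) is illustrative, not the paper's fitted model:

    ```python
    # Illustrative sketch: scatter acts through a broad PSF whose transfer
    # function survives only near zero frequency, so the beam transfer
    # function (BTF) multiplies the detector's presampling MTF and produces
    # the characteristic low-frequency drop of the system MTF.
    import numpy as np

    f = np.linspace(0.0, 10.0, 256)              # spatial frequency (cycles/mm)
    mtf_presampling = np.abs(np.sinc(f / 10.0))  # toy detector presampling MTF
    scatter_fraction = 0.4                       # illustrative SF for this beam
    tf_scatter = np.exp(-(f / 0.2) ** 2)         # narrow TF of a broad scatter PSF
    btf = (1.0 - scatter_fraction) + scatter_fraction * tf_scatter
    mtf_system = mtf_presampling * btf           # system MTF with scatter present
    ```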

  13. Impulsive spherical gravitational waves

    NASA Astrophysics Data System (ADS)

    Aliev, A. N.; Nutku, Y.

    2001-03-01

    Penrose's identification with warp provides the general framework for constructing the continuous form of impulsive gravitational wave metrics. We present the two-component spinor formalism for the derivation of the full family of impulsive spherical gravitational wave metrics which brings out the power in identification with warp and leads to the simplest derivation of exact solutions. These solutions of the Einstein vacuum field equations are obtained by cutting Minkowski space into two pieces along a null cone and re-identifying them with warp which is given by an arbitrary nonlinear holomorphic transformation. Using two-component spinor techniques we construct a new metric describing an impulsive spherical gravitational wave where the vertex of the null cone lies on a worldline with constant acceleration.

  14. Conditions for defocusing around more general metrics in infinite derivative gravity

    NASA Astrophysics Data System (ADS)

    Edholm, James

    2018-04-01

    Infinite derivative gravity is able to resolve the big bang curvature singularity present in general relativity by using a simplifying ansatz. We show that it can also avoid the Hawking-Penrose singularity, by allowing defocusing of null rays through the Raychaudhuri equation. This occurs not only in the minimal case where we ignore the matter contribution but also in the case where matter plays a key role. We investigate the conditions for defocusing for the general case where this ansatz applies and also for more specific metrics, including a general Friedmann-Robertson-Walker metric and three specific choices of the scale factor which produce a bouncing Friedmann-Robertson-Walker universe.

  15. Valuing fire planning alternatives in forest restoration: using derived demand to integrate economics with ecological restoration.

    PubMed

    Rideout, Douglas B; Ziesler, Pamela S; Kernohan, Nicole J

    2014-08-01

    Assessing the value of fire planning alternatives is challenging because fire affects a wide array of ecosystem, market, and social values. Wildland fire management is increasingly used to address forest restoration, yet pragmatic approaches to assessing the value of fire management have yet to be developed. Earlier approaches to assessing the value of forest management relied on connecting site valuation with management variables. While sound, such analysis is too narrow to account for a broad range of ecosystem services. The metric fire regime condition class (FRCC) was developed from ecosystem management philosophy, but it is entirely biophysical. Its lack of economic information cripples its utility to support decision-making. We present a means of defining and assessing the deviation of a landscape from its desired fire management condition by re-framing the fire management problem as one of derived demand. This valued deviation establishes a performance metric for wildland fire management. Using a case study, we display the deviation across a landscape and sum the deviations to produce a summary metric. This summary metric is used to assess the value of alternative fire management strategies in improving the fire management condition toward its desired state. It enables us to identify which sites are most valuable to restore, even when they are in the same fire regime condition class. The case study site exemplifies how a wide range of disparate values, such as watershed, wildlife, property, and timber, can be incorporated into a single landscape assessment. The analysis presented here leverages previous research on environmental capital value and non-market valuation by integrating ecosystem management, restoration, and microeconomics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Texture metric that predicts target detection performance

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.

    2015-12-01

    Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
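
    As background for the co-occurrence statistics the metrics above build on, here is a minimal sketch of gray-level co-occurrence "energy" (the sum of squared co-occurrence probabilities); note that the paper's GLCE statistics are a related but distinct construction:

    ```python
    # Sketch: quantize an image, accumulate a gray-level co-occurrence matrix
    # (GLCM) for one pixel offset, and return its energy (sum of squared
    # co-occurrence probabilities). Levels and offset are illustrative.
    import numpy as np

    def glcm_energy(img: np.ndarray, levels: int = 16,
                    dx: int = 1, dy: int = 0) -> float:
        q = np.floor(img.astype(float) / (img.max() + 1e-12) * levels)
        q = q.clip(0, levels - 1).astype(int)        # quantized gray levels
        h, w = q.shape
        src = q[0:h - dy, 0:w - dx]                  # reference pixels
        dst = q[dy:h, dx:w]                          # offset neighbours
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (src.ravel(), dst.ravel()), 1.0)
        p = glcm / glcm.sum()                        # joint probabilities
        return float((p ** 2).sum())
    ```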

  17. Validation Metrics for Improving Our Understanding of Turbulent Transport - Moving Beyond Proof by Pretty Picture and Loud Assertion

    NASA Astrophysics Data System (ADS)

    Holland, C.

    2013-10-01

    Developing validated models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. This tutorial will present an overview of the key guiding principles and practices for state-of-the-art validation studies, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. The primary focus of the talk will be the development of quantitative validation metrics, which are essential for moving beyond qualitative and subjective assessments of model performance and fidelity. Particular emphasis and discussion are given to (i) the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment, and (ii) the importance of robust uncertainty quantification and its inclusion within the metrics. To illustrate these concepts, we first review the structure and key insights gained from commonly used "global" transport model metrics (e.g. predictions of incremental stored energy or radially averaged temperature), as well as their limitations. Building upon these results, a new form of turbulent transport metrics is then proposed, which focuses upon comparisons of predicted local gradients and fluctuation characteristics against observation. We demonstrate the utility of these metrics by applying them to simulations and modeling of a newly developed "validation database" derived from the results of a systematic, multi-year turbulent transport validation campaign on the DIII-D tokamak, in which comprehensive profile and fluctuation measurements have been obtained from a wide variety of heating and confinement scenarios. Finally, we discuss extensions of these metrics and their underlying design concepts to other areas of plasma confinement research, including both magnetohydrodynamic stability and integrated scenario modeling. Supported by the US DOE under DE-FG02-07ER54917 and DE-FC02-08ER54977.

  18. Online kinematic regulation by visual feedback for grasp versus transport during reach-to-pinch

    PubMed Central

    Nataraj, Raviraj; Pasluosta, Cristian; Li, Zong-Ming

    2014-01-01

    Purpose This study investigated novel kinematic performance parameters to understand regulation by visual feedback (VF) of the reaching hand on the grasp and transport components during the reach-to-pinch maneuver. Conventional metrics often signify discrete movement features to postulate sensory-based control effects (e.g., time for maximum velocity to signify feedback delay). The presented metrics of this study were devised to characterize relative vision-based control of the sub-movements across the entire maneuver. Methods Movement performance was assessed according to reduced variability and increased efficiency of kinematic trajectories. Variability was calculated as the standard deviation about the observed mean trajectory for a given subject and VF condition across kinematic derivatives for sub-movements of inter-pad grasp (distance between thumb and index finger-pads; relative orientation of finger-pads) and transport (distance traversed by wrist). A Markov analysis then examined the probabilistic effect of VF on which movement component exhibited higher variability over phases of the complete maneuver. Jerk-based metrics of smoothness (minimal jerk) and energy (integrated jerk-squared) were applied to indicate total movement efficiency with VF. Results/Discussion The reductions in grasp variability metrics with VF were significantly greater (p<0.05) compared to transport for velocity, acceleration, and jerk, suggesting separate control pathways for each component. The Markov analysis indicated that VF preferentially regulates grasp over transport when continuous control is modeled probabilistically during the movement. Efficiency measures demonstrated VF to be more integral for early motor planning of grasp than transport in producing greater increases in smoothness and trajectory adjustments (i.e., jerk-energy) early compared to late in the movement cycle. Conclusions These findings demonstrate the greater regulation by VF on kinematic performance of grasp compared to transport and how particular features of this relativistic control occur continually over the maneuver. Utilizing the advanced performance metrics presented in this study facilitated characterization of VF effects continuously across the entire movement in corroborating the notion of separate control pathways for each component. PMID:24968371

  19. LETTER TO THE EDITOR: Anti-self-dual Riemannian metrics without Killing vectors: can they be realized on K3?

    NASA Astrophysics Data System (ADS)

    Malykh, A. A.; Nutku, Y.; Sheftel, M. B.

    2003-11-01

    Explicit Riemannian metrics with Euclidean signature and anti-self-dual curvature that do not admit any Killing vectors are presented. The metric and the Riemann curvature scalars are homogeneous functions of degree zero in a single real potential and its derivatives. The solution for the potential is a sum of exponential functions which suggests that for the choice of a suitable domain of coordinates and parameters it can be the metric on a compact manifold. Then, by the theorem of Hitchin, it could be a class of metrics on K3, or on surfaces whose universal covering is K3.

  20. Grading smart sensors: Performance assessment and ranking using familiar scores like A+ to D-

    NASA Astrophysics Data System (ADS)

    Kessel, Ronald T.

    2005-03-01

    Starting with the supposition that the product of smart sensors - whether autonomous, networked, or fused - is in all cases information, it is shown here using information theory how a metric Q, ranging between 0 and 100%, can be derived to assess the quality of the information provided. The analogy with student grades is immediately evident and elaborated. As with student grades, numerical percentages suggest more precision than can be justified, so a conversion to letter grades A+ to D- is desirable. Owing to the close analogy with familiar academic grades, moreover, the new grade is a measure of effectiveness (MOE) that commanders and decision makers should immediately appreciate and find quite natural, even if they do not care to follow the methodology behind the performance test, as they focus on higher-level strategic matters of sensor deployment or procurement. The metric is illustrated by translating three specialist performance tests - the Receiver Operating Characteristic (ROC) curve, the Constant False Alarm Rate (CFAR) approach, and confusion matrices - into letter grades for use then by strategists. Actual military and security systems are included among the examples.
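
    The last step of the scheme, converting a 0–100% information quality score Q into a letter grade, reduces to a banded lookup. The cut-points below are assumed for illustration; the paper derives Q itself from information theory:

    ```python
    # Sketch: map an information quality score Q (0-100%) onto the familiar
    # academic scale A+ .. D-. Band boundaries are illustrative assumptions.
    def letter_grade(q: float) -> str:
        bands = [(90, "A+"), (85, "A"), (80, "A-"),
                 (77, "B+"), (73, "B"), (70, "B-"),
                 (67, "C+"), (63, "C"), (60, "C-"),
                 (57, "D+"), (53, "D")]
        for cutoff, grade in bands:
            if q >= cutoff:
                return grade
        return "D-"

    print(letter_grade(88.0))  # -> "A"
    ```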

  1. Cochlea-scaled spectral entropy predicts rate-invariant intelligibility of temporally distorted sentences.

    PubMed

    Stilp, Christian E; Kiefte, Michael; Alexander, Joshua M; Kluender, Keith R

    2010-10-01

    Some evidence, mostly drawn from experiments using only a single moderate rate of speech, suggests that low-frequency amplitude modulations may be particularly important for intelligibility. Here, two experiments investigated intelligibility of temporally distorted sentences across a wide range of simulated speaking rates, and two metrics were used to predict results. Sentence intelligibility was assessed when successive segments of fixed duration were temporally reversed (exp. 1), and when sentences were processed through four third-octave-band filters, the outputs of which were desynchronized (exp. 2). For both experiments, intelligibility decreased with increasing distortion. However, in exp. 2, intelligibility recovered modestly with longer desynchronization. Across conditions, performances measured as a function of proportion of utterance distorted converged to a common function. Estimates of intelligibility derived from modulation transfer functions predict a substantial proportion of the variance in listeners' responses in exp. 1, but fail to predict performance in exp. 2. By contrast, a metric of potential information, quantified as relative dissimilarity (change) between successive cochlear-scaled spectra, is introduced. This metric reliably predicts listeners' intelligibility across the full range of speaking rates in both experiments. Results support an information-theoretic approach to speech perception and the significance of spectral change rather than physical units of time.
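
    A minimal sketch of the kind of potential-information metric described, i.e. cumulative change between successive cochlea-scaled spectral frames; computing the frames themselves is omitted, and using Euclidean distance as the dissimilarity is an assumption:

    ```python
    # Sketch: given already cochlea-scaled spectral frames (rows = time,
    # columns = frequency channels), sum the Euclidean distance between
    # successive frames as a measure of total spectral change.
    import numpy as np

    def cumulative_spectral_change(spectra: np.ndarray) -> float:
        """spectra: shape (n_frames, n_channels)."""
        diffs = np.diff(spectra, axis=0)          # frame-to-frame change
        return float(np.linalg.norm(diffs, axis=1).sum())
    ```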

  2. Continuous theory of active matter systems with metric-free interactions.

    PubMed

    Peshkov, Anton; Ngo, Sandrine; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco

    2012-08-31

    We derive a hydrodynamic description of metric-free active matter: starting from self-propelled particles aligning with neighbors defined by "topological" rules, not metric zones-a situation advocated recently to be relevant for bird flocks, fish schools, and crowds-we use a kinetic approach to obtain well-controlled nonlinear field equations. We show that the density-independent collision rate per particle characteristic of topological interactions suppresses the linear instability of the homogeneous ordered phase and the nonlinear density segregation generically present near threshold in metric models, in agreement with microscopic simulations.

  3. Real-time performance monitoring and management system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2007-06-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  4. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference, and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics, respectively. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE-oriented adaptive real-time streaming.
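
    A hypothetical sketch of the two ingredients described above: spatial activity approximated by an area-weighted mean coding-unit split depth, and temporal activity by the share of inter-predicted prediction units. Parsing these values out of an HEVC bitstream is assumed to happen elsewhere, and the paper's exact weighting function may differ:

    ```python
    # Sketch: approximate spatial/temporal activity from high-level HEVC
    # syntax already parsed from the bitstream (no full decode needed).
    def spatial_activity(cu_depths, cu_areas):
        """Area-weighted mean quadtree split depth of the coding units."""
        total = sum(cu_areas)
        return sum(d * a for d, a in zip(cu_depths, cu_areas)) / total

    def temporal_activity(pu_modes):
        """Fraction of prediction units coded as inter (motion-predicted)."""
        return sum(1 for m in pu_modes if m == "inter") / len(pu_modes)

    # e.g. a frame with mostly shallow splits and heavy inter prediction:
    print(spatial_activity([0, 1, 1, 2], [4096, 1024, 1024, 256]))  # 0.4
    print(temporal_activity(["inter", "inter", "intra", "inter"]))  # 0.75
    ```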

  5. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  6. Metric-driven harm: an exploration of unintended consequences of performance measurement.

    PubMed

    Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck

    2013-11-01

    Performance measurement is an increasingly common element of the US health care system. Although performance metrics are typically treated as a proxy for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.

  7. Performance assessment in brain-computer interface-based augmentative and alternative communication

    PubMed Central

    2013-01-01

    A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
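
    For the Level 1 recommendation above, ITR is often computed per selection with the widely cited Wolpaw formula; choosing this particular variant is an assumption here, since the review covers several ITR definitions:

    ```python
    # Sketch: Wolpaw information transfer rate in bits per selection for an
    # N-class BCI with selection accuracy p (0 < p <= 1). Multiply by the
    # selection rate to obtain bits per minute.
    from math import log2

    def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
        n, p = n_classes, accuracy
        if p >= 1.0:
            return log2(n)
        return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

    # e.g. a 4-class BCI at 90% accuracy making 10 selections per minute:
    print(itr_bits_per_selection(4, 0.90) * 10, "bits/min")  # ~13.7
    ```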

  8. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.

  9. Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats

    USGS Publications Warehouse

    Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.

    2008-01-01

    We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed; hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenelmis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution-intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages present in either habitat type are sensitive to the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation to resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles that have been exposed to the same level of urbanization. Comparison of nonlinear regression equations relating stream quality metric values from the two habitat types to percent watershed urbanization indicated that snag habitats averaged 30.2 percentage points fewer EPT individuals, a lower diversity index value, and a BI value 0.29 greater than riffles. © 2008 American Water Resources Association.

  10. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
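
    The Dice index at the core of the proposed identification metric is straightforward to state. A sketch treating the identifier output and ground truth as sets of chemical names (the paper works from a partitioned, weighted confusion matrix, so this is a simplification):

    ```python
    # Sketch: Dice index between identified chemicals and those truly present
    # in the plume, 2|A & B| / (|A| + |B|); 1.0 is perfect identification.
    def dice_index(identified: set, truth: set) -> float:
        if not identified and not truth:
            return 1.0
        return 2 * len(identified & truth) / (len(identified) + len(truth))

    print(dice_index({"NH3", "SO2", "SF6"}, {"NH3", "SO2"}))  # -> 0.8
    ```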

  11. Breast lesion characterization using whole-lesion histogram analysis with stretched-exponential diffusion model.

    PubMed

    Liu, Chunling; Wang, Kun; Li, Xiaodan; Zhang, Jine; Ding, Jie; Spuhler, Karl; Duong, Timothy; Liang, Changhong; Huang, Chuan

    2018-06-01

    Diffusion-weighted imaging (DWI) has been studied in breast imaging and can provide more information about diffusion, perfusion, and other physiological interests than standard pulse sequences. The stretched-exponential model has previously been shown to be more reliable than conventional DWI techniques, but different diagnostic sensitivities were found from study to study. This work investigated the characteristics of whole-lesion histogram parameters derived from the stretched-exponential diffusion model for benign and malignant breast lesions, compared them with the conventional apparent diffusion coefficient (ADC), and further determined which histogram metrics can best be used to differentiate malignant from benign lesions. This was a prospective study. Seventy females were included in the study. Multi-b-value DWI was performed on a 1.5T scanner. Histogram parameters of whole lesions for the distributed diffusion coefficient (DDC), heterogeneity index (α), and ADC were calculated by two radiologists and compared among benign lesions, ductal carcinoma in situ (DCIS), and invasive carcinoma confirmed by pathology. Nonparametric tests were performed for comparisons among invasive carcinoma, DCIS, and benign lesions. Comparisons of receiver operating characteristic (ROC) curves were performed to show the ability to discriminate malignant from benign lesions. The majority of histogram parameters (mean/min/max, skewness/kurtosis, 10th–90th percentile values) from DDC, α, and ADC were significantly different among invasive carcinoma, DCIS, and benign lesions. DDC10% (area under the curve [AUC] = 0.931), ADC10% (AUC = 0.893), and αmean (AUC = 0.787) were found to be the best metrics in differentiating benign from malignant tumors among all histogram parameters derived from DDC, ADC, and α, respectively. The combination of DDC10% and αmean, using logistic regression, yielded the highest sensitivity (90.2%) and specificity (95.5%). DDC10% and αmean derived from the stretched-exponential model provide more information and better diagnostic performance in differentiating malignancy from benign lesions than ADC parameters derived from a monoexponential model. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1701-1710. © 2017 International Society for Magnetic Resonance in Medicine.
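
    For reference, the stretched-exponential signal model behind DDC and α is S(b) = S0·exp(−(b·DDC)^α). A minimal fitting sketch with SciPy; the b-values, noise level, and bounds are illustrative, not the study's acquisition protocol:

    ```python
    # Sketch: fit the stretched-exponential DWI model to a synthetic signal
    # decay curve and recover S0, DDC (mm^2/s), and heterogeneity index alpha.
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(b, s0, ddc, alpha):
        return s0 * np.exp(-(b * ddc) ** alpha)

    b = np.array([0, 200, 400, 600, 800, 1000, 1500, 2000])   # s/mm^2
    rng = np.random.default_rng(0)
    sig = stretched_exp(b, 1.0, 1.2e-3, 0.8) + rng.normal(0, 0.005, b.size)
    (s0, ddc, alpha), _ = curve_fit(stretched_exp, b, sig,
                                    p0=[1.0, 1e-3, 0.9],
                                    bounds=([0, 1e-5, 0.1], [2, 1e-2, 1.0]))
    ```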

  12. Topographic Metric Predictions of Soil redistribution and Organic Carbon Distribution in Croplands

    NASA Astrophysics Data System (ADS)

    Mccarty, G.; Li, X.

    2017-12-01

    Landscape topography is a key factor controlling soil redistribution and soil organic carbon (SOC) distribution in Iowa croplands (USA). In this study, we adopted a combined approach based on carbon (13C) and cesium (137Cs) isotope tracers and digital terrain analysis to understand patterns of SOC redistribution and carbon sequestration dynamics as influenced by landscape topography in tilled cropland under long-term corn/soybean management. The fallout radionuclide 137Cs was used to estimate soil redistribution rates, and a Lidar-derived DEM was used to obtain a set of topographic metrics for digital terrain analysis. Soil redistribution rates and patterns of SOC distribution were examined across 560 sampling locations at two field sites as well as at larger scale within the watershed. We used δ13C content in SOC to partition C3- and C4-plant-derived C density at 127 locations in one of the two field sites, with corn being the primary source of C4 C. Topography-based models were developed to simulate SOC distribution and soil redistribution using stepwise ordinary least squares regression (SOLSR) and stepwise principal component regression (SPCR). All topography-based models developed through SPCR and SOLSR demonstrated good simulation performance, explaining more than 62% of the variability in SOC density and soil redistribution rates across the two field sites with intensive sampling. However, the SOLSR models showed lower reliability than the SPCR models in predicting SOC density at the watershed scale. Spatial patterns of C3-derived SOC density were highly related to those of SOC density. Topographic metrics exerted substantial influence on C3-derived SOC density, with the SPCR model accounting for 76.5% of the spatial variance. In contrast, C4-derived SOC density had poor spatial structure, likely reflecting the substantial contribution of corn vegetation to recently sequestered SOC. The results of this study highlight the utility of topographic SPCR models for scaling field measurements of SOC density and soil redistribution rates to the watershed scale, which will allow watershed models to better predict the fate of ecosystem C on agricultural landscapes.

  13. Effects of different correlation metrics and preprocessing factors on small-world brain functional networks: a resting-state functional MRI study.

    PubMed

    Liang, Xia; Wang, Jinhui; Yan, Chaogan; Shu, Ni; Xu, Ke; Gong, Gaolang; He, Yong

    2012-01-01

    Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01-0.027 Hz) versus slow-4 (0.027-0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from them, and we employed test-retest (TRT) analyses for further guidance on how to choose the "best" network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties are modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027-0.073 Hz band exhibited greater reliability than those in the 0.01-0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing choices on both the global and nodal topological properties of functional brain networks. This study also has important implications for how to choose reliable analytical schemes in brain network studies.

  14. Assessment of transport performance index for urban transport development strategies — Incorporating residents' preferences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambarwati, Lasmini, E-mail: L.Ambarwati@tudelft.nl; Department of Civil Engineering, Brawijaya University; Verhaeghe, Robert, E-mail: R.Verhaeghe@tudelft.nl

    The performance of urban transport depends on a variety of factors related to metropolitan structure; in particular, the patterns of commuting, roads, and public transport (PT) systems. To evaluate urban transport planning efforts, there is a need for a metric expressing the aggregate performance of a city's transport systems that relates to residents' preferences. Existing metrics have typically focused on a measure of the proximity of job locations to residences. A Transport Performance Index (TPI) is proposed in which the total cost of the transport system (operational and environmental costs) is divided by residents' willingness to pay (WTP) for transport plus their willingness to accept (WTA) the environmental effects. Transport operational and environmental costs are derived from a simulation of all transport systems for particular designs of spatial development. WTP for transport and WTA for the environmental effects are derived from surveys among residents. Simulations were run of Surabaya's spatial structure and public transport expansion. The results indicate that the current TPI is high and will double by 2030. With a hypothetical polycentric city structure and an adjusted job-housing balance, a lower index results because of improvements in urban transport performance. A low index means that residents obtain much benefit from the proposed alternative. This illustrates the importance of residents' preferences in urban spatial planning for achieving efficient urban transport. Applying the index suggests that city authorities should provide fair and equitable public transport systems for suburban residents in the effort to control urban sprawl. The index is a useful tool and prospective benchmark for measuring sustainability in relation to urban development.
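
    As described, the index reduces to a ratio of system costs to residents' valuations. A sketch with hypothetical totals (units must be consistent, e.g. $/year):

    ```python
    # Sketch: Transport Performance Index = (operational + environmental
    # cost) / (WTP for transport + WTA for environmental effects).
    # Lower is better: residents get more value per unit of cost borne.
    def transport_performance_index(op_cost: float, env_cost: float,
                                    wtp: float, wta: float) -> float:
        return (op_cost + env_cost) / (wtp + wta)

    print(transport_performance_index(120e6, 30e6, 90e6, 20e6))  # ~1.36
    ```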

  15. System International d'Unites: Metric Measurement in Water Resources Engineering.

    ERIC Educational Resources Information Center

    Klingeman, Peter C.

    This pamphlet gives definitions and symbols for the basic and derived metric units, prefixes, and conversion factors for units frequently used in water resources. Included are conversion factors for units of area, work, heat, power, pressure, viscosity, flow rate, and others. (BB)

  16. Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions.

    PubMed

    Kramers, Matthew; Armstrong, Ryan; Bakhshmand, Saeed M; Fenster, Aaron; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-01-01

    Image guidance can provide surgeons with valuable contextual information during a medical intervention. Often, image guidance systems require considerable infrastructure, setup-time, and operator experience to be utilized. Certain procedures performed at bedside are susceptible to navigational errors that can lead to complications. We present an application for mobile devices that can provide image guidance using augmented reality to assist in performing neurosurgical tasks. A methodology is outlined that evaluates this mode of visualization from the standpoint of perceptual localization, depth estimation, and pointing performance, in scenarios derived from a neurosurgical targeting task. By measuring user variability and speed we can report objective metrics of performance for our augmented reality guidance system.

  17. Inventory and transport of plastic debris in the Laurentian Great Lakes.

    PubMed

    Hoffman, Matthew J; Hittinger, Eric

    2017-02-15

    Plastic pollution in the world's oceans has received much attention, but there has been increasing concern about the high concentrations of plastic debris in the Laurentian Great Lakes. Using census data and methodologies used to study ocean debris we derive a first estimate of 9887 metric tonnes per year of plastic debris entering the Great Lakes. These estimates are translated into population-dependent particle inputs which are advected using currents from a hydrodynamic model to map the spatial distribution of plastic debris in the Great Lakes. Model results compare favorably with previously published sampling data. The samples are used to calibrate the model to derive surface microplastic mass estimates of 0.0211 metric tonnes in Lake Superior, 1.44 metric tonnes in Huron, and 4.41 metric tonnes in Erie. These results have many applications, including informing cleanup efforts, helping target pollution prevention, and understanding the inter-state or international flows of plastic pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Identification of abnormal motor cortex activation patterns in children with cerebral palsy by functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Khan, Bilal; Tian, Fenghua; Behbehani, Khosrow; Romero, Mario I.; Delgado, Mauricio R.; Clegg, Nancy J.; Smith, Linsley; Reid, Dahlia; Liu, Hanli; Alexandrakis, George

    2010-05-01

    We demonstrate the utility of functional near-infrared spectroscopy (fNIRS) as a tool for physicians to study cortical plasticity in children with cerebral palsy (CP). Motor cortex activation patterns were studied in five healthy children and five children with CP (8.4+/-2.3 years old in both groups) performing a finger-tapping protocol. Spatial (distance from center and area difference) and temporal (duration and time-to-peak) image metrics are proposed as potential biomarkers for differentiating abnormal cortical activation in children with CP from healthy pediatric controls. In addition, a similarity image-analysis concept is presented that unveils areas that have similar activation patterns as that of the maximum activation area, but are not discernible by visual inspection of standard activation images. Metrics derived from the images presenting areas of similarity are shown to be sensitive identifiers of abnormal activation patterns in children with CP. Importantly, the proposed similarity concept and related metrics may be applicable to other studies for the identification of cortical activation patterns by fNIRS.

  19. Left ventricular volume analysis as a basic tool to describe cardiac function.

    PubMed

    Kerkhof, Peter L M; Kuznetsova, Tatiana; Ali, Rania; Handly, Neal

    2018-03-01

    The heart is often regarded as a compression pump. Therefore, determination of pressure and volume is essential for cardiac function analysis. Traditionally, ventricular performance was described in terms of the Starling curve, i.e., output related to input. This view is based on two variables (namely, stroke volume and end-diastolic volume), often studied in the isolated (i.e., denervated) heart, and has dominated the interpretation of cardiac mechanics over the last century. The ratio of the prevailing coordinates within that paradigm is termed ejection fraction (EF), which is the popular metric routinely used in the clinic. Here we present an insightful alternative approach while describing volume regulation by relating end-systolic volume (ESV) to end-diastolic volume. This route obviates the undesired use of metrics derived from differences or ratios, as employed in previous models. We illustrate basic principles concerning ventricular volume regulation by data obtained from intact animal experiments and collected in healthy humans. Special attention is given to sex-specific differences. The method can be applied to the dynamics of a single heart and to an ensemble of individuals. Group analysis allows for stratification regarding sex, age, medication, and additional clinically relevant covariates. A straightforward procedure derives the relationship between EF and ESV and describes myocardial oxygen consumption in terms of ESV. This representation enhances insight and reduces the impact of the metric EF, in favor of the end-systolic elastance concept advanced 4 decades ago.
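
    In the volume-pair view above, ejection fraction is itself a derived quantity of the two end volumes, EF = (EDV − ESV)/EDV, which is why regressing ESV on EDV carries the same information without forming a ratio. A one-line sketch:

    ```python
    # Sketch: ejection fraction from end-diastolic and end-systolic volume.
    def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
        return (edv_ml - esv_ml) / edv_ml

    print(ejection_fraction(120.0, 50.0))  # -> ~0.58, i.e. EF of 58%
    ```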

  20. A Classical Based Derivation of Time Dilation Providing First Order Accuracy to Schwarzschild's Solution of Einstein's Field Equations

    NASA Astrophysics Data System (ADS)

    Austin, Rickey W.

    In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is via applying the classical work-energy theorem to relativistic momentum. This approach starts with a classical work-energy theorem and applies SR's momentum to the derivation. One outcome of this derivation is relativistic kinetic energy. From this derivation, it is rather straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point. Instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classical-based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequence that all observers measure the same speed of light. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It is shown that this method gives first-order accuracy compared to Schwarzschild's metric. The SR kinetic energy and the newly derived NGPE derivations are combined to form a Riemannian metric based on these two energies. A geodesic is derived and calculations compared to Schwarzschild's geodesic for an orbiting test mass about a central, non-rotating, non-charged massive body. The new metric results in high-accuracy calculations when compared to the prediction of Einstein's General Relativity. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity is derived, providing a minimum first-order accuracy to Schwarzschild's solution of Einstein's field equations.

  1. Symplectic potentials and resolved Ricci-flat ACG metrics

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Aswin K.; Govindarajan, Suresh; Gowdigere, Chethan N.

    2007-12-01

    We pursue the symplectic description of toric Kähler manifolds. There exists a general local classification of metrics on toric Kähler manifolds equipped with Hamiltonian 2-forms due to Apostolov, Calderbank and Gauduchon (ACG). We derive the symplectic potential for these metrics. Using a method due to Abreu, we relate the symplectic potential to the canonical potential written by Guillemin. This enables us to recover the moment polytope associated with the metrics, and we thus obtain global information about the metric. We illustrate these general considerations by focusing on six-dimensional Ricci-flat metrics and obtain Ricci-flat metrics associated with real cones over L^{p,q,r} and Y^{p,q} manifolds. The metrics associated with cones over Y^{p,q} manifolds turn out to be partially resolved with two blow-up parameters taking special (non-zero) values. For a fixed Y^{p,q} manifold, we find explicit metrics for several inequivalent blow-ups parametrized by a natural number k in the range 0 < k < p. We also show that all known examples of resolved metrics, such as the resolved conifold and the resolution of C^3/Z_3, also fit the ACG classification.

  2. Tissue thickness calculation in ocular optical coherence tomography

    PubMed Central

    Alonso-Caneiro, David; Read, Scott A.; Vincent, Stephen J.; Collins, Michael J.; Wojtkowski, Maciej

    2016-01-01

    Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° x 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, while alternative metrics, which are not biased by these effects, should be considered. This study demonstrates the need to consider alternative methods to calculate tissue thickness in order to avoid measurement error due to image tilt and curvature. PMID:26977367

  3. Performance Metrics for Liquid Chromatography-Tandem Mass Spectrometry Systems in Proteomics Analyses*

    PubMed Central

    Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.

    2010-01-01

    A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981

  4. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  5. Characterization of Esophageal Motility Disorders in Children Presenting With Dysphagia Using High-Resolution Manometry.

    PubMed

    Edeani, Francis; Malik, Adeel; Kaul, Ajay

    2017-03-01

    The Chicago classification was based on metrics derived from studies in asymptomatic adult subjects. Our objectives were to characterize esophageal motility disorders in children and to determine whether the spectrum of manometric findings is similar between the pediatric and adult populations. Studies have suggested that the metrics utilized in manometric diagnosis depend on age, size, and manometric assembly. This would imply that a different set of metrics should be used for the pediatric population. There are no standardized and generally accepted metrics for use in the pediatric population, though there have been attempts to establish metrics specific to this population. Overall, we found that the distribution of esophageal motility disorders in children was like that described in adults using the Chicago classification. This analysis will serve as a prequel to follow-up studies exploring the individual metrics for variability among patients, with the objective of establishing novel metrics for the pediatric population.

  6. Beyond Lovelock gravity: Higher derivative metric theories

    NASA Astrophysics Data System (ADS)

    Crisostomi, M.; Noui, K.; Charmousis, C.; Langlois, D.

    2018-02-01

    We consider theories describing the dynamics of a four-dimensional metric, whose Lagrangian is diffeomorphism invariant and depends at most on second derivatives of the metric. Imposing degeneracy conditions, we find a set of Lagrangians that, apart from the Einstein-Hilbert one, are either trivial or contain more than 2 degrees of freedom. Among the partially degenerate theories, we recover Chern-Simons gravity, endowed with constraints whose structure suggests the presence of instabilities. Then, we enlarge the class of parity-violating theories of gravity by introducing new "chiral scalar-tensor theories." Although they all raise the same concern as Chern-Simons gravity, they can nevertheless make sense as low-energy effective field theories or, by restricting them to the unitary gauge (where the scalar field is uniform), as Lorentz-breaking theories with a parity-violating sector.

  7. Accelerometer-derived activity correlates with volitional swimming speed in lake sturgeon (Acipenser fulvescens)

    USGS Publications Warehouse

    Thiem, J.D.; Dawson, J.W.; Gleiss, A.C.; Martins, E.G.; Haro, Alexander J.; Castro-Santos, Theodore R.; Danylchuk, A.J.; Wilson, R.P.; Cooke, S.J.

    2015-01-01

    Quantifying fine-scale locomotor behaviours associated with different activities is challenging for free-swimming fish. Biologging and biotelemetry tools can help address this problem. An open channel flume was used to generate volitional swimming speed (Us) estimates of cultured lake sturgeon (Acipenser fulvescens Rafinesque, 1817) and these were paired with simultaneously recorded accelerometer-derived metrics of activity obtained from three types of data-storage tags. This study examined whether a predictive relationship could be established between four different activity metrics (tail-beat frequency (TBF), tail-beat acceleration amplitude (TBAA), overall dynamic body acceleration (ODBA), and vectorial dynamic body acceleration (VeDBA)) and the swimming speed of A. fulvescens. Volitional Us of sturgeon ranged from 0.48 to 2.70 m·s−1 (0.51–3.18 body lengths (BL)·s−1). Swimming speed increased linearly with all accelerometer-derived metrics, and when all tag types were combined, Us increased 0.46 BL·s−1 for every 1 Hz increase in TBF, and 0.94, 0.61, and 0.94 BL·s−1 for every 1 g increase in TBAA, ODBA, and VeDBA, respectively. Predictive relationships varied among tag types, and tag-specific parameter estimates of Us are presented for all metrics. This use of acceleration data-storage tags demonstrated their applicability for the field quantification of sturgeon swimming speed.
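
    The pooled slopes quoted above define simple linear predictors of swimming speed from each activity metric. Below is a minimal sketch of such a predictor; the abstract does not report intercepts, so the intercept used here is a placeholder rather than a fitted parameter from the study.

    ```python
    # Sketch: predicting sturgeon swimming speed (body lengths per second, BL/s)
    # from accelerometer-derived activity metrics, using the pooled slopes quoted
    # in the abstract. INTERCEPT is a placeholder: the study fit tag-specific
    # parameters that are not reported in this abstract.

    SLOPE_TBF = 0.46     # BL/s per 1 Hz of tail-beat frequency (pooled tags)
    SLOPE_TBAA = 0.94    # BL/s per 1 g of tail-beat acceleration amplitude
    SLOPE_ODBA = 0.61    # BL/s per 1 g of overall dynamic body acceleration
    SLOPE_VEDBA = 0.94   # BL/s per 1 g of vectorial dynamic body acceleration

    INTERCEPT = 0.0      # placeholder intercept, not from the study

    def speed_from_tbf(tbf_hz: float, intercept: float = INTERCEPT) -> float:
        """Linear prediction of swimming speed (BL/s) from tail-beat frequency."""
        return intercept + SLOPE_TBF * tbf_hz

    def speed_from_odba(odba_g: float, intercept: float = INTERCEPT) -> float:
        """Linear prediction of swimming speed (BL/s) from ODBA."""
        return intercept + SLOPE_ODBA * odba_g

    # Example: 2 Hz tail beats imply 0.92 BL/s above the (unknown) intercept.
    print(speed_from_tbf(2.0))
    ```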

  8. Effects of b-value and number of gradient directions on diffusion MRI measures obtained with Q-ball imaging

    NASA Astrophysics Data System (ADS)

    Schilling, Kurt G.; Nath, Vishwesh; Blaber, Justin; Harrigan, Robert L.; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.

    2017-02-01

    High-angular-resolution diffusion-weighted imaging (HARDI) MRI acquisitions have become common for use with higher order models of diffusion. Despite successes in resolving complex fiber configurations and probing microstructural properties of brain tissue, there is no common consensus on the optimal b-value and number of diffusion directions to use for these HARDI methods. While this question has been addressed by analysis of the diffusion-weighted signal directly, it is unclear how this translates to the information and metrics derived from the HARDI models themselves. Using a high angular resolution data set acquired at a range of b-values, and repeated 11 times on a single subject, we study how the b-value and number of diffusion directions impact the reproducibility and precision of metrics derived from Q-ball imaging, a popular HARDI technique. We find that Q-ball metrics associated with tissue microstructure and white matter fiber orientation are sensitive to both the number of diffusion directions and the spherical harmonic representation of the Q-ball, and are often biased when undersampled. These results can advise researchers on appropriate acquisition and processing schemes, particularly when it comes to optimizing the number of diffusion directions needed for metrics derived from Q-ball imaging.

  9. Fixed Point Results for G-α-Contractive Maps with Application to Boundary Value Problems

    PubMed Central

    Roshan, Jamal Rezaei

    2014-01-01

    We unify the concepts of G-metric, metric-like, and b-metric to define a new notion of generalized b-metric-like space and discuss its topological and structural properties. In addition, certain fixed point theorems for two classes of G-α-admissible contractive mappings in such spaces are obtained, and some new fixed point results are derived in the corresponding partially ordered space. Moreover, some examples and an application to the existence of a solution for the first-order periodic boundary value problem are provided to illustrate the usability of the obtained results. PMID:24895655

  10. Type II universal spacetimes

    NASA Astrophysics Data System (ADS)

    Hervik, S.; Málek, T.; Pravda, V.; Pravdová, A.

    2015-12-01

    We study type II universal metrics of the Lorentzian signature. These metrics simultaneously solve vacuum field equations of all theories of gravitation with the Lagrangian being a polynomial curvature invariant constructed from the metric, the Riemann tensor and its covariant derivatives of an arbitrary order. We provide examples of type II universal metrics for all composite number dimensions. On the other hand, we have no examples for prime number dimensions and we prove the non-existence of type II universal spacetimes in five dimensions. We also present type II vacuum solutions of selected classes of gravitational theories, such as Lovelock, quadratic and L(Riemann) gravities.

  11. Robinson-Trautman solutions to Einstein's equations

    NASA Astrophysics Data System (ADS)

    Davidson, William

    2017-02-01

    Solutions to Einstein's equations in the form of a Robinson-Trautman metric are presented. In particular, we derive a pure radiation solution which is non-stationary and involves a mass m. The resulting spacetime is of Petrov Type II. A special selection of parametric values throws up the feature of the particle `rocket', a Type D metric. A suitable transformation of the complex coordinates allows the metrics to be expressed in real form. A modification of the Type II metric, setting m to zero and thereby converting it to Type III, is then shown to admit a null Einstein-Maxwell electromagnetic field.

  12. Performance regression manager for large scale systems

    DOEpatents

    Faraj, Daniel A.

    2017-10-17

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
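
    The claimed operation reduces to a mechanical workflow: parse a metric value out of two formatted output files produced by two runs of the same command, compare, and display the result. A minimal sketch follows; the key=value file format is an assumption for illustration, since the patent only requires some predefined format.

    ```python
    # Sketch of the comparison described above: read one performance metric from
    # two execution-output files and report how it changed. The key=value format
    # is a stand-in for whatever predefined format the tool actually uses.

    def read_metrics(path: str) -> dict[str, float]:
        """Parse 'name=value' lines into a metric dictionary."""
        metrics = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, value = line.strip().split("=", 1)
                    metrics[key] = float(value)
        return metrics

    def compare(path_a: str, path_b: str, name: str) -> str:
        """Indicate the change in one metric between two runs."""
        a = read_metrics(path_a)[name]
        b = read_metrics(path_b)[name]
        return f"{name}: {a} -> {b} ({b - a:+g})"

    # Usage (hypothetical file names and metric name):
    # print(compare("run1.out", "run2.out", "elapsed_seconds"))
    ```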

  13. Topographic metric predictions of soil organic carbon in Iowa fields

    USDA-ARS?s Scientific Manuscript database

    Topography is one of the key factors affecting soil organic carbon (SOC) redistribution (erosion or deposition) because it influences the gravity-driven movement of soil by water flow and tillage operations. In this study, we examined impacts of sixteen topographic metrics derived from Light Detecti...

  14. Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.

    PubMed

    Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C

    2015-06-01

    Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess whether our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric, which would calculate the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III(®) simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective blinded-observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.

  15. Dynamic field theory and equations of motion in cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopeikin, Sergei M., E-mail: kopeikins@missouri.edu; Petrov, Alexander N., E-mail: alex.petrov55@gmail.com

    2014-11-15

    We discuss a field-theoretical approach based on general-relativistic variational principle to derive the covariant field equations and hydrodynamic equations of motion of baryonic matter governed by cosmological perturbations of dark matter and dark energy. The action depends on the gravitational and matter Lagrangian. The gravitational Lagrangian depends on the metric tensor and its first and second derivatives. The matter Lagrangian includes dark matter, dark energy and the ordinary baryonic matter which plays the role of a bare perturbation. The total Lagrangian is expanded in an asymptotic Taylor series around the background cosmological manifold defined as a solution of Einstein’s equations in the form of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric tensor. The small parameter of the decomposition is the magnitude of the metric tensor perturbation. Each term of the series expansion is gauge-invariant and all of them together form a basis for the successive post-Friedmannian approximations around the background metric. The approximation scheme is covariant and the asymptotic nature of the Lagrangian decomposition does not require the post-Friedmannian perturbations to be small though computationally it works the most effectively when the perturbed metric is close enough to the background FLRW metric. The temporal evolution of the background metric is governed by dark matter and dark energy and we associate the large scale inhomogeneities in these two components as those generated by the primordial cosmological perturbations with an effective matter density contrast δρ/ρ≤1. The small scale inhomogeneities are generated by the condensations of baryonic matter considered as the bare perturbations of the background manifold that admits δρ/ρ≫1. Mathematically, the large scale perturbations are given by the homogeneous solution of the linearized field equations while the small scale perturbations are described by a particular solution of these equations with the bare stress–energy tensor of the baryonic matter. We explicitly work out the covariant field equations of the successive post-Friedmannian approximations of Einstein’s equations in cosmology and derive equations of motion of large and small scale inhomogeneities of dark matter and dark energy. We apply these equations to derive the post-Friedmannian equations of motion of baryonic matter comprising stars, galaxies and their clusters.

  16. Comparisons of Derived Metrics from Computed Tomography (CT) Scanned Images of Fluvial Sediment from Gravel-Bed Flume Experiments

    NASA Astrophysics Data System (ADS)

    Voepel, Hal; Ahmed, Sharif; Hodge, Rebecca; Leyland, Julian; Sear, David

    2016-04-01

    Uncertainty in bedload estimates for gravel bed rivers is largely driven by our inability to characterize the arrangement, orientation and resultant forces of fluvial sediment in river beds. Water working of grains leads to structural differences between areas of the bed through particle sorting, packing, imbrication, mortaring and degree of bed armoring. In this study, non-destructive, micro-focus X-ray computed tomography (CT) imaging in 3D is used to visualize, quantify and assess the internal geometry of sections of a flume bed that have been extracted keeping their fabric intact. Flume experiments were conducted at 1:1 scaling of our prototype river. From the volume, center of mass, points of contact, and protrusion of individual grains derived from 3D scan data, we estimate 3D static force properties at the grain scale such as pivoting angles, buoyancy and gravity forces, and local grain exposure. Here metrics are derived for images from two flume experiments: one with a bed of coarse grains (>4 mm) and the other where sand and clay were incorporated into the coarse flume bed. In addition to deriving force networks, comparisons of metrics such as critical shear stress, pivot angles, grain distributions, principal axis orientation, and pore space over depth are made. This is the first time bed stability has been studied in 3D using CT scanned images of sediment from the bed surface to depths well into the subsurface. The derived metrics, inter-granular relationships and characterization of bed structures will lead to improved bedload estimates with reduced uncertainty, as well as improved understanding of relationships between sediment structure, grain size distribution and channel topography.

  17. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is a continuation of previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
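
    Of the metrics named above, the alpha-lambda test has a compact point-estimate form: a prediction made a fraction λ of the way from first prediction to end of life passes if it falls within ±α of the true remaining useful life (RUL) at that time. The sketch below shows that form under our own variable names; the distribution-based extensions discussed in the paper are not reproduced.

    ```python
    # Minimal sketch of an alpha-lambda accuracy check for a prognostic
    # algorithm, in its common point-estimate form.

    def alpha_lambda_pass(t_start: float, t_eol: float, lam: float,
                          predicted_rul, alpha: float = 0.2) -> bool:
        """predicted_rul is a callable mapping time -> RUL estimate."""
        t_lambda = t_start + lam * (t_eol - t_start)  # evaluation time
        true_rul = t_eol - t_lambda                   # ground-truth RUL
        r = predicted_rul(t_lambda)
        return (1 - alpha) * true_rul <= r <= (1 + alpha) * true_rul

    # Example: an algorithm that believes end of life is t=95 when it is
    # actually t=100, checked halfway through the prediction horizon.
    print(alpha_lambda_pass(0, 100, lam=0.5, predicted_rul=lambda t: 95 - t))
    ```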

  18. AN EVALUATION OF OZONE EXPOSURE METRICS FOR A SEASONALLY DROUGHT STRESSED PONDEROSA PINE ECOSYSTEM. (R826601)

    EPA Science Inventory

    Ozone stress has become an increasingly significant factor in cases of forest decline reported throughout the world. Current metrics to estimate ozone exposure for forest trees are derived from atmospheric concentrations and assume that the forest is physiologically active at ...

  19. Deriving hourly surface energy fluxes and ET from Landsat Thematic mapper data using METRIC

    USDA-ARS?s Scientific Manuscript database

    Surface energy fluxes and evapotranspiration (ET) have long been recognized as playing an important role in determining exchanges of energy and mass between the hydrosphere, atmosphere, and biosphere. In this study, we applied the METRIC (Mapping ET at high Resolutions with Internal Calibration) alg...

  20. Linkages between Land Surface Phenology Metrics and Natural and Anthropogenic Events in Drylands (Invited)

    NASA Astrophysics Data System (ADS)

    de Beurs, K.; Brown, M. E.; Ahram, A.; Walker, J.; Henebry, G. M.

    2013-12-01

    Tracking vegetation dynamics across landscapes using remote sensing, or 'land surface phenology,' is a key mechanism that allows us to understand ecosystem changes. Land surface phenology models rely on vegetation information from remote sensing, such as the datasets derived from the Advanced Very High Resolution Radiometer (AVHRR), the newer MODIS sensors on Aqua and Terra, and sometimes the higher spatial resolution Landsat data. Vegetation index data can aid in the assessment of variables such as the start of season, growing season length and overall growing season productivity. In this talk we use Landsat, MODIS and AVHRR data and derive growing season metrics based on land surface phenology models that couple vegetation indices with satellite-derived accumulated growing degree-day and evapotranspiration estimates. We calculate the timing and the height of the peak of the growing season and discuss the linkage of these land surface phenology metrics with natural and anthropogenic changes on the ground in dryland ecosystems. First we discuss how the land surface phenology metrics link with annual and interannual price fluctuations in 229 markets distributed over Africa. Our results show that there is a significant correlation between the peak height of the growing season and price increases for markets in countries such as Nigeria, Somalia and Niger. We then demonstrate how land surface phenology metrics can improve models of post-conflict resolution in global drylands. We link the Uppsala Conflict Data Program's dataset of political, economic and social factors involved in civil war termination with an NDVI-derived phenology metric and the Palmer Drought Severity Index (PDSI). An analysis of 89 individual conflicts in 42 dryland countries (totaling 892 individual country-years of data between 1982 and 2005) revealed that, even accounting for economic and political factors, countries that have higher NDVI growth following conflict have a lower risk of reverting to civil war. Finally, the patchy and heterogeneous arrangement of vegetation in dryland areas sometimes complicates the extraction of phenological signals using existing remote sensing data. We conclude by demonstrating how the phenological analysis of a range of dryland land cover classes benefits from the availability of synthetic images at Landsat spatial resolution and MODIS time intervals.

  1. 75 FR 7581 - RTO/ISO Performance Metrics; Notice Requesting Comments on RTO/ISO Performance Metrics

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... of staff from all the jurisdictional ISOs/RTOs to develop a set of performance metrics that the ISOs/RTOs will use to report annually to the Commission. Commission staff and representatives from the ISOs...

  2. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  3. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  4. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

    Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC™ model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in the METRIC application related to the estimation of vegetation temperature and of momentum roughness length and sensible heat flux (H) for tall vegetation must be considered. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function based on leaf area index and tree canopy architecture, associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize the biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fractions of vegetation (fc), shadow (fshadow) and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature for the canopy was estimated by solving the three-source condition equation for Tc. To evaluate METRIC performance in estimating ET over the olive grove, several parameters derived from the algorithm were tested against data collected in the field, including eddy covariance ET, surface temperature over the canopy, and soil temperature in shaded and sunlit conditions. Additionally, the results were compared with results published in the literature. The information obtained so far reveals very promising prospects for the use of METRIC in the estimation and mapping of ET in super intensive olive orchards. Thereby, this approach might constitute a useful tool towards improving the efficiency of irrigation water management in this crop. The study described is still under way, and thus further applications of the METRIC algorithm to a larger number of images and to olive groves with different tree densities are planned.
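
    Since the three-source relation above is linear in the component temperatures, the effective canopy temperature follows by simple inversion. A minimal sketch, assuming a linear mixing of temperatures as written in the abstract (operational implementations may instead mix emitted radiances):

    ```python
    # Solve the three-source relation Ts = fc*Tc + fshadow*Tshadow + fsunlit*Tsunlit
    # for the canopy temperature Tc (all temperatures in kelvin).

    def canopy_temperature(ts: float, fc: float, f_shadow: float, t_shadow: float,
                           f_sunlit: float, t_sunlit: float) -> float:
        if fc <= 0:
            raise ValueError("canopy fraction must be positive to solve for Tc")
        return (ts - f_shadow * t_shadow - f_sunlit * t_sunlit) / fc

    # Example with made-up fractions and temperatures:
    tc = canopy_temperature(ts=305.0, fc=0.6, f_shadow=0.25, t_shadow=298.0,
                            f_sunlit=0.15, t_sunlit=315.0)
    print(tc)  # ~305.4 K
    ```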

  5. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space

    PubMed Central

    Good, Wilson W.; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H.; MacLeod, Rob S.

    2017-01-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not. PMID:28451594

  6. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space.

    PubMed

    Good, Wilson W; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H; MacLeod, Rob S

    2016-09-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not.

  7. SU-F-BRB-07: A Plan Comparison Tool to Ensure Robustness and Deliverability in Online-Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, P; Labby, Z; Bayliss, R A

    Purpose: To develop a plan comparison tool that will ensure robustness and deliverability through analysis of baseline and online-adaptive radiotherapy plans using similarity metrics. Methods: The ViewRay MRIdian treatment planning system allows export of a plan file that contains plan and delivery information. A software tool was developed to read and compare two plans, providing information and metrics to assess their similarity. In addition to performing direct comparisons (e.g. demographics, ROI volumes, number of segments, total beam-on time), the tool computes and presents histograms of derived metrics (e.g. step-and-shoot segment field sizes, segment average leaf gaps). Such metrics were investigated for their ability to predict that an online-adapted plan is reasonably similar to a baseline plan where deliverability has already been established. Results: In the realm of online-adaptive planning, comparing ROI volumes offers a sanity check to verify observations found during contouring. Beyond ROI analysis, it has been found that simply editing contours and re-optimizing to adapt treatment can produce a delivery that is substantially different from the baseline plan (e.g. number of segments increased by 31%), with no changes in optimization parameters and only minor changes in anatomy. Currently the tool can quickly identify large omissions or deviations from baseline expectations. As our online-adaptive patient population increases, we will continue to develop and refine quantitative acceptance criteria for adapted plans and relate them to historical delivery QA measurements. Conclusion: The plan comparison tool is in clinical use and reports a wide range of comparison metrics, illustrating key differences between two plans. This independent check is accomplished in seconds and can be performed in parallel to other tasks in the online-adaptive workflow. Current use prevents large planning or delivery errors from occurring, and ongoing refinements will lead to increased assurance of plan quality.

  8. How to measure ecosystem stability? An evaluation of the reliability of stability metrics based on remote sensing time series across the major global ecosystems.

    PubMed

    De Keersmaecker, Wanda; Lhermitte, Stef; Honnay, Olivier; Farifteh, Jamshid; Somers, Ben; Coppin, Pol

    2014-07-01

    Increasing frequency of extreme climate events is likely to impose increased stress on ecosystems and to jeopardize the services that ecosystems provide. Therefore, it is of major importance to assess the effects of extreme climate events on the temporal stability (i.e., the resistance, the resilience, and the variance) of ecosystem properties. Most time series of ecosystem properties are, however, affected by varying data characteristics, uncertainties, and noise, which complicate the comparison of ecosystem stability metrics (ESMs) between locations. Therefore, there is a strong need for a more comprehensive understanding of the reliability of stability metrics and how they can be used to compare ecosystem stability globally. The objective of this study was to evaluate the performance of temporal ESMs based on time series of the Moderate Resolution Imaging Spectroradiometer derived Normalized Difference Vegetation Index of 15 global land-cover types. We provide a framework (i) to assess the reliability of ESMs as a function of data characteristics, uncertainties and noise and (ii) to integrate reliability estimates in future global ecosystem stability studies against climate disturbances. The performance of our framework was tested through (i) a global ecosystem comparison and (ii) a comparison of ecosystem stability in response to the 2003 drought. The results show the influence of data quality on the accuracy of ecosystem stability metrics. White noise, biased noise, and trends have a stronger effect on the accuracy of stability metrics than the length of the time series, temporal resolution, or amount of missing values. Moreover, we demonstrate the importance of integrating reliability estimates to interpret stability metrics within confidence limits. Based on these confidence limits, other studies dealing with specific ecosystem types or locations can be put into context, and a more reliable assessment of ecosystem stability against environmental disturbances can be obtained. © 2013 John Wiley & Sons Ltd.

  9. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
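
    The sampling step described above is straightforward to reproduce in outline. A minimal sketch using SciPy's quasi-Monte Carlo module; the parameter bounds are illustrative ranges for GR4J, not the ones used in the study, and the model run itself is left as a stub.

    ```python
    # Latin Hypercube sampling of GR4J's four parameters, plus one example
    # error metric (Nash-Sutcliffe efficiency) for scoring simulated flows.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=4, seed=1)
    unit = sampler.random(n=10_000)          # the study used 500,000 draws

    # Illustrative bounds: X1 production store (mm), X2 exchange (mm/day),
    # X3 routing store (mm), X4 unit-hydrograph time base (days).
    l_bounds = [10.0, -8.0, 10.0, 0.5]
    u_bounds = [2000.0, 6.0, 500.0, 4.0]
    params = qmc.scale(unit, l_bounds, u_bounds)

    def nse(sim: np.ndarray, obs: np.ndarray) -> float:
        """Nash-Sutcliffe efficiency, one of several possible error metrics."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # for p in params: run GR4J with parameters p, then score the simulated
    # flow series against observations with nse() and other metrics.
    ```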

  10. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, and at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
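
    The definition above is directly computable from a confusion matrix at the chosen cutoff. A minimal sketch (variable names are ours, not the authors' code):

    ```python
    # Power metric: true positive rate divided by the sum of the true and
    # false positive rates, evaluated at a given cutoff threshold.

    def power_metric(tp: int, fn: int, fp: int, tn: int) -> float:
        tpr = tp / (tp + fn)  # sensitivity at the cutoff
        fpr = fp / (fp + tn)  # false positive rate at the cutoff
        return tpr / (tpr + fpr)

    # Example: 40 of 50 actives recovered, 100 of 950 decoys flagged.
    print(power_metric(tp=40, fn=10, fp=100, tn=850))  # ~0.88
    ```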

  11. Comparison of leaf-on and leaf-off ALS data for mapping riparian tree species

    NASA Astrophysics Data System (ADS)

    Laslier, Marianne; Ba, Antoine; Hubert-Moy, Laurence; Dufour, Simon

    2017-10-01

    Forest species composition is a fundamental indicator in forest study and management. However, describing forest species composition at large scales and for highly diverse populations remains an issue to which remote sensing, in particular Airborne Laser Scanning (ALS) data, can make a significant contribution. Riparian corridors are good examples of highly valuable ecosystems, with high species richness and large surface areas that can be time-consuming and expensive to monitor with in situ measurements. Remote sensing could be useful to study them, but few studies have focused on monitoring riparian tree species using ALS data. This study aimed to determine which metrics derived from ALS data are best suited to identify and map riparian tree species. We acquired very high density leaf-on and leaf-off ALS data along the Sélune River (France). In addition, we inventoried eight main riparian deciduous tree species along the study site. After manual segmentation of the inventoried trees, we extracted 68 morphological and structural metrics from both leaf-on and leaf-off ALS point clouds. Some of these metrics were then selected using the Sequential Forward Selection (SFS) algorithm. Support Vector Machine (SVM) classification results showed good accuracy (0.77) with 7 metrics. Both leaf-on and leaf-off metrics were retained as important for distinguishing tree species. Results demonstrate the ability of 3D information derived from high density ALS data to identify riparian tree species using external and internal structural metrics. They also highlight the complementarity of leaf-on and leaf-off Lidar data for distinguishing riparian tree species.

  12. Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2011-10-01

    Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
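
    The disagreement described above is easy to reproduce with toy numbers: net present value flips sign as the discount rate (a financing term) changes, while simple payback is blind to it. A minimal sketch with made-up cash flows, not figures from the report:

    ```python
    # NPV reacts to the discount rate; simple payback ignores it entirely.

    def npv(rate: float, cash_flows: list[float]) -> float:
        """Net present value; cash_flows[0] is the year-0 (installation) flow."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def simple_payback(cost: float, annual_saving: float) -> float:
        """Years to recover the installed cost, undiscounted."""
        return cost / annual_saving

    flows = [-10_000.0] + [900.0] * 25    # hypothetical PV system, 25-year life
    print(round(npv(0.03, flows)))        # ~ +5700 at a 3% discount rate
    print(round(npv(0.08, flows)))        # ~ -390 at 8%: same system, new verdict
    print(simple_payback(10_000, 900))    # ~11.1 years regardless of financing
    ```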

  13. An exploratory survey of methods used to develop measures of performance

    NASA Astrophysics Data System (ADS)

    Hamner, Kenneth L.; Lafleur, Charles A.

    1993-09-01

    Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metrics development method. Those components were incorporated into a proposed metric development method that is based on the OMB Generic Method and should be more likely to produce high-quality metrics that will result in continuous process improvement.

  14. Generalized contractive mappings and weakly α-admissible pairs in G-metric spaces.

    PubMed

    Hussain, N; Parvaneh, V; Hoseini Ghoncheh, S J

    2014-01-01

    The aim of this paper is to present some coincidence and common fixed point results for generalized (ψ, φ)-contractive mappings using partially weakly G-α-admissibility in the setup of G-metric space. As an application of our results, periodic points of weakly contractive mappings are obtained. We also derive certain new coincidence point and common fixed point theorems in partially ordered G-metric spaces. Moreover, some examples are provided here to illustrate the usability of the obtained results.

  15. Generalized Contractive Mappings and Weakly α-Admissible Pairs in G-Metric Spaces

    PubMed Central

    Hussain, N.; Parvaneh, V.; Hoseini Ghoncheh, S. J.

    2014-01-01

    The aim of this paper is to present some coincidence and common fixed point results for generalized (ψ, φ)-contractive mappings using partially weakly G-α-admissibility in the setup of G-metric space. As an application of our results, periodic points of weakly contractive mappings are obtained. We also derive certain new coincidence point and common fixed point theorems in partially ordered G-metric spaces. Moreover, some examples are provided here to illustrate the usability of the obtained results. PMID:25202742

  16. Specification and implementation of IFC based performance metrics to support building life cycle assessment of hybrid energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrissey, Elmer; O'Donnell, James; Keane, Marcus

    2004-03-29

    Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.

  17. Does stereo-endoscopy improve neurosurgical targeting in 3rd ventriculostomy?

    NASA Astrophysics Data System (ADS)

    Abhari, Kamyar; de Ribaupierre, Sandrine; Peters, Terry; Eagleson, Roy

    2011-03-01

    Endoscopic third ventriculostomy is a minimally invasive surgical technique to treat hydrocephalus; a condition where patients suffer from excessive amounts of cerebrospinal fluid (CSF) in the ventricular system of their brain. This technique involves using a monocular endoscope to locate the third ventricle, where a hole can be made to drain excessive fluid. Since a monocular endoscope provides only a 2D view, it is difficult to make this perforation due to the lack of monocular cues and depth perception. In a previous study, we had investigated the use of a stereo-endoscope to allow neurosurgeons to locate and avoid hazardous areas on the surface of the third ventricle. In this paper, we extend our previous study by developing a new methodology to evaluate the targeting performance in piercing the hole in the membrane. We consider the accuracy of this surgical task and derive an index of performance for a task which does not have a well-defined position or width of target. Our performance metric is sensitive and can distinguish between experts and novices. We make use of this metric to demonstrate an objective learning curve on this task for each subject.

  18. Comparing land surface phenology derived from satellite and GPS network microwave remote sensing.

    PubMed

    Jones, Matthew O; Kimball, John S; Small, Eric E; Larson, Kristine M

    2014-08-01

    The land surface phenology (LSP) start of season (SOS) metric signals the seasonal onset of vegetation activity, including canopy growth and associated increases in land-atmosphere water, energy and carbon (CO2) exchanges influencing weather and climate variability. The vegetation optical depth (VOD) parameter determined from satellite passive microwave remote sensing provides for global LSP monitoring that is sensitive to changes in vegetation canopy water content and biomass, and insensitive to atmosphere and solar illumination constraints. Direct field measures of canopy water content and biomass changes desired for LSP validation are generally lacking due to the prohibitive costs of maintaining regional monitoring networks. Alternatively, a normalized microwave reflectance index (NMRI) derived from GPS base station measurements is sensitive to daily vegetation water content changes and may provide for effective microwave LSP validation. We compared multiyear (2007-2011) NMRI and satellite VOD records at over 300 GPS sites in North America, and their derived SOS metrics for a subset of 24 homogeneous land cover sites, to investigate VOD and NMRI correspondence and potential NMRI utility for LSP validation. Significant correlations (P < 0.05) were found at 276 of 305 sites (90.5%), with generally favorable correspondence in the resulting SOS metrics (r² = 0.73, P < 0.001, RMSE = 36.8 days). This study is the first attempt to compare satellite microwave LSP metrics to a GPS network derived reflectance index and highlights both the utility and limitations of the NMRI data for LSP validation, including spatial scale discrepancies between local NMRI measurements and relatively coarse satellite VOD retrievals.
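
    The agreement statistics quoted above (r² and RMSE in days) are simple to compute once both start-of-season series are in hand. A minimal sketch with hypothetical day-of-year inputs:

    ```python
    # Agreement between two start-of-season (SOS) series, e.g. satellite
    # VOD-derived versus GPS NMRI-derived day-of-year estimates.
    import numpy as np

    def sos_agreement(sos_a: np.ndarray, sos_b: np.ndarray) -> tuple[float, float]:
        """Return (r squared, RMSE in days) between two SOS series."""
        r = np.corrcoef(sos_a, sos_b)[0, 1]
        rmse = float(np.sqrt(np.mean((sos_a - sos_b) ** 2)))
        return r ** 2, rmse

    # Hypothetical SOS estimates (day of year) at five sites:
    vod_sos = np.array([95.0, 102.0, 88.0, 120.0, 101.0])
    nmri_sos = np.array([99.0, 110.0, 92.0, 117.0, 96.0])
    print(sos_agreement(vod_sos, nmri_sos))
    ```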

  19. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a common mix of objective and perceptual quality metrics. Metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4 and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation into the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
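
    Of the metrics listed above, SSIM has a standard open-source implementation; VQM and DVQ do not have an equally standard one-liner, so only SSIM is sketched here, on synthetic frames rather than coded video:

    ```python
    # Scoring a degraded frame against its reference with SSIM.
    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)  # QCIF luma
    noise = rng.integers(-8, 9, size=reference.shape)
    degraded = np.clip(reference.astype(int) + noise, 0, 255).astype(np.uint8)

    score = structural_similarity(reference, degraded, data_range=255)
    print(f"SSIM = {score:.3f}")  # 1.0 would mean identical frames
    ```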

  20. Phenocams bridge the gap between field and satellite observations in an arid grassland ecosystem

    USDA-ARS?s Scientific Manuscript database

    Near surface (i.e., camera) and satellite remote sensing metrics have become widely used indicators of plant growing seasons. While robust linkages have been established between field metrics and ecosystem exchange in many land cover types, assessment of how well remotely-derived season start and en...

  1. Quantitative Verse in a Quantity-Insensitive Language: Baif's "vers mesures."

    ERIC Educational Resources Information Center

    Bullock, Barbara E.

    1997-01-01

    Analysis of the quantitative metrical verse of French Renaissance poet Jean-Antoine de Baif finds that the metrics, often seen as unscannable and using an incomprehensible phonetic orthography, derive largely from a system that is accentual, with the orthography permitting the poet to encode quantitative distinctions that coincide with the meter.…

  2. Mass eigenstates in bimetric theory with matter coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt-May, Angnis, E-mail: angnis.schmidt-may@fysik.su.se

    2015-01-01

    In this paper we study the ghost-free bimetric action extended by a recently proposed coupling to matter through a composite metric. The equations of motion for this theory are derived using a method which avoids varying the square-root matrix that appears in the matter coupling. We make an ansatz for which the metrics are proportional to each other and find that it can solve the equations provided that one parameter in the action is fixed. In this case, the proportional metrics as well as the effective metric that couples to matter solve Einstein's equations of general relativity including a matter source. Around these backgrounds we derive the quadratic action for perturbations and diagonalize it into generalized mass eigenstates. It turns out that matter only interacts with the massless spin-2 mode whose equation of motion has exactly the form of the linearized Einstein equations, while the field with Fierz-Pauli mass term is completely decoupled. Hence, bimetric theory, with one parameter fixed such that proportional solutions exist, is degenerate with general relativity up to linear order around these backgrounds.

  3. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.

  4. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065

  5. Symmetry-based detection and diagnosis of DCIS in breast MRI

    NASA Astrophysics Data System (ADS)

    Srikantha, Abhilash; Harz, Markus T.; Newstead, Gillian; Wang, Lei; Platel, Bram; Hegenscheid, Katrin; Mann, Ritse M.; Hahn, Horst K.; Peitgen, Heinz-Otto

    2013-02-01

    The delineation and diagnosis of non-mass-like lesions, most notably DCIS (ductal carcinoma in situ), is among the most challenging tasks in breast MRI reading. Even for human observers, DCIS is not always easy to differentiate from patterns of active parenchymal enhancement or from benign alterations of breast tissue. In this light, it is no surprise that CADe/CADx approaches often completely fail to classify DCIS. Of the several approaches that have tried to devise such computer aid, none achieve performances similar to mass detection and classification in terms of sensitivity and specificity. In our contribution, we show a novel approach that combines a newly proposed metric of anatomical breast symmetry calculated on subtraction images of dynamic contrast-enhanced (DCE) breast MRI, descriptive kinetic parameters, and lesion candidate morphology to achieve performances comparable to computer-aided methods used for masses. We have based the development of the method on DCE MRI data of 18 DCIS cases with hand-annotated lesions, complemented by DCE-MRI data of nine normal cases. We propose a novel metric to quantify the symmetry of contralateral breasts and derive a strong indicator for potentially malignant changes from this metric. Also, we propose a novel metric for the orientation of a finding towards a fixed point (the nipple). Our combined scheme then achieves a sensitivity of 89% with a specificity of 78%, matching CAD results for breast MRI on masses. The processing pipeline is intended to run on a CAD server, hence we designed all processing to be automated and free of per-case parameters. We expect that the detection results of our proposed non-mass aimed algorithm will complement other CAD algorithms, or ideally be joined with them in a voting scheme.

  6. Accounting for regional variation in both natural environment and human disturbance to improve performance of multimetric indices of lotic benthic diatoms.

    PubMed

    Tang, Tao; Stevenson, R Jan; Infante, Dana M

    2016-10-15

    Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.
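
    The site-specific modeling step described above has a simple core: fit each metric's expected reference condition from natural gradients using reference sites only, then score every site by its departure from that expectation. A minimal sketch using an ordinary least-squares expectation; the study's actual model form and covariates are not reproduced here.

    ```python
    # Site-specific metric scoring: expected value from natural covariates,
    # fitted at reference sites, subtracted from the observed metric everywhere.
    import numpy as np

    def site_specific_scores(natural_ref: np.ndarray, metric_ref: np.ndarray,
                             natural_all: np.ndarray, metric_all: np.ndarray) -> np.ndarray:
        """Return each site's departure from its modeled reference condition."""
        X_ref = np.column_stack([np.ones(len(natural_ref)), natural_ref])
        coef, *_ = np.linalg.lstsq(X_ref, metric_ref, rcond=None)
        X_all = np.column_stack([np.ones(len(natural_all)), natural_all])
        expected = X_all @ coef        # site-specific expected reference value
        return metric_all - expected   # residual = disturbance-related signal
    ```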

  7. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    NASA Astrophysics Data System (ADS)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their capability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, water system planning, etc. Therefore, in order to be useful in many practical applications, satellite precipitation products should possess high quality in characterizing spatial distribution. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill at capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric can be derived through two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measure defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite precipitation product, CMORPH, against the U.S. daily precipitation analysis of the Climate Prediction Center (CPC) at a daily, 0.25° scale over the western U.S.
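
    A minimal sketch of the two-step metric as described: leading EOFs via SVD of the anomaly matrix, with corresponding modes paired by rank order and scored with histogram-based mutual information (the binning and mode-matching choices here are assumptions, not details from the abstract).

    ```python
    import numpy as np

    def leading_eofs(field, n_modes=3):
        """EOFs of a (time, space) anomaly matrix via SVD; rows of vt are spatial patterns."""
        anomalies = field - field.mean(axis=0)
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        return vt[:n_modes]

    def mutual_information(x, y, bins=16):
        """Histogram-based Shannon mutual information between two spatial patterns."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    # Hypothetical fields: 365 daily maps over 500 grid cells
    rng = np.random.default_rng(1)
    reference = rng.gamma(2.0, 1.0, size=(365, 500))
    satellite = reference + rng.normal(scale=0.5, size=(365, 500))

    for ref_mode, sat_mode in zip(leading_eofs(reference), leading_eofs(satellite)):
        print(mutual_information(ref_mode, sat_mode))  # higher = pattern better captured
    ```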

  8. A novel metric for quantification of homogeneous and heterogeneous tumors in PET for enhanced clinical outcome prediction

    NASA Astrophysics Data System (ADS)

    Rahmim, Arman; Schmidtlein, C. Ross; Jackson, Andrew; Sheikhbahaei, Sara; Marcus, Charles; Ashrafinia, Saeed; Soltani, Madjid; Subramaniam, Rathan M.

    2016-01-01

    Oncologic PET images provide valuable information that can enable enhanced prognosis of disease. Nonetheless, such information is simplified significantly in routine clinical assessment to meet workflow constraints. Examples of typical FDG PET metrics include: (i) SUVmax, (ii) total lesion glycolysis (TLG), and (iii) metabolic tumor volume (MTV). We have derived and implemented a novel metric for tumor quantification, inspired in essence by a model of generalized equivalent uniform dose as used in radiation therapy. The proposed metric, denoted generalized effective total uptake (gETU), is attractive as it encompasses the abovementioned commonly invoked metrics, and generalizes them, for both homogeneous and heterogeneous tumors, using a single parameter a. We evaluated this new metric for improved overall survival (OS) prediction on two different baseline FDG PET/CT datasets: (a) 113 patients with squamous cell cancer of the oropharynx, and (b) 72 patients with locally advanced pancreatic adenocarcinoma. Kaplan-Meier survival analysis was performed, where the subjects were subdivided into two groups using the median threshold, from which the hazard ratios (HR) were computed in Cox proportional hazards regression. For the oropharyngeal cancer dataset, MTV, TLG, SUVmax, SUVmean and SUVpeak produced HR values of 1.86, 3.02, 1.34, 1.36 and 1.62, while the proposed gETU metric for a = 0.25 (greater emphasis on volume information) enabled significantly enhanced OS prediction with HR = 3.94. For the pancreatic cancer dataset, MTV, TLG, SUVmax, SUVmean and SUVpeak resulted in HR values of 1.05, 1.25, 1.42, 1.45 and 1.52, while gETU at a = 3.2 (greater emphasis on SUV information) arrived at an improved HR value of 1.61. Overall, the proposed methodology allows placement of differing degrees of emphasis on tumor volume versus uptake for different types of tumors to enable enhanced clinical outcome prediction.
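
    The abstract gives no formula; by analogy with generalized equivalent uniform dose, one plausible reading of gETU is a power (generalized) mean of voxel SUVs, optionally scaled by MTV so that a = 1 recovers a TLG-like quantity. The sketch below encodes that assumption, not the paper's verified definition.

    ```python
    import numpy as np

    def generalized_mean_suv(suv_voxels, a):
        """Power mean of voxel SUVs; a = 1 gives SUVmean, large a tends toward SUVmax,
        small a de-emphasizes hot voxels. Assumed form, by analogy with gEUD."""
        suv = np.asarray(suv_voxels, dtype=float)
        return float(np.mean(suv ** a) ** (1.0 / a))

    lesion = np.array([2.0, 3.5, 4.0, 9.0])   # hypothetical voxel SUVs
    mtv = lesion.size * 0.064                 # hypothetical voxel volume in mL
    for a in (0.25, 1.0, 3.2):
        print(a, generalized_mean_suv(lesion, a), mtv * generalized_mean_suv(lesion, a))
    # mtv * mean at a = 1 equals TLG; other a values trade volume vs uptake emphasis
    ```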

  9. Conformally-flat, non-singular static metric in infinite derivative gravity

    NASA Astrophysics Data System (ADS)

    Buoninfante, Luca; Koshelev, Alexey S.; Lambiase, Gaetano; Marto, João; Mazumdar, Anupam

    2018-06-01

    In Einstein's theory of general relativity the vacuum solution yields a black hole with a curvature singularity, where there exists a point-like source with a Dirac delta distribution which is introduced as a boundary condition in the static case. It has been known for a while that ghost-free infinite derivative theory of gravity can ameliorate such a singularity at least at the level of linear perturbation around the Minkowski background. In this paper, we will show that the Schwarzschild metric does not satisfy the boundary condition at the origin within infinite derivative theory of gravity, since a Dirac delta source is smeared out by non-local gravitational interaction. We will also show that the spacetime metric becomes conformally-flat and singularity-free within the non-local region, which can be also made devoid of an event horizon. Furthermore, the scale of non-locality ought to be as large as that of the Schwarzschild radius, in such a way that the gravitational potential in any metric has to be always bounded by one, implying that gravity remains weak from the infrared all the way up to the ultraviolet regime, in concurrence with the results obtained in [arXiv:1707.00273]. The singular Schwarzschild black hole can now be potentially replaced by a non-singular compact object, whose core is governed by the mass and the effective scale of non-locality.

  10. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  11. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  12. Double metric, generalized metric, and α' -deformed double field theory

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-03-01

    We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.

  13. A practical approach to determine dose metrics for nanomaterials.

    PubMed

    Delmaar, Christiaan J E; Peijnenburg, Willie J G M; Oomen, Agnes G; Chen, Jingwen; de Jong, Wim H; Sips, Adriënne J A M; Wang, Zhuang; Park, Margriet V D Z

    2015-05-01

    Traditionally, administered mass is used to describe doses of conventional chemical substances in toxicity studies. For deriving toxic doses of nanomaterials, mass and chemical composition alone may not adequately describe the dose, because particles with the same chemical composition can have completely different toxic mass doses depending on properties such as particle size. Other dose metrics such as particle number, volume, or surface area have been suggested, but consensus is lacking. The discussion regarding the most adequate dose metric for nanomaterials clearly calls for a systematic, unbiased approach to settle the question. In the present study, the authors propose such an approach and apply it to results from in vitro and in vivo experiments with silver and silica nanomaterials. The proposed approach is shown to provide a convenient tool to systematically investigate and interpret dose metrics of nanomaterials. Recommendations for study designs aimed at investigating dose metrics are provided. © 2015 SETAC.

  14. Smart Grid Status and Metrics Report Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  15. Effect of Atmospheric Turbulence on Synthetic Aperture LADAR Imaging Performance

    NASA Astrophysics Data System (ADS)

    Schumm, Bryce Eric

    Synthetic aperture LADAR (SAL) has been widely investigated over the last 15 years with many studies and experiments examining its performance. Comparatively little work has been done to investigate the effect of atmospheric turbulence on SAL performance. The turbulence work that has been accomplished is in related fields or under weak turbulence assumptions. This research investigates some of the fundamental limits of turbulence on SAL performance. Seven individual impact mechanisms of atmospheric turbulence are examined including: beam wander, beam growth, beam breakup, piston, coherence diameter/length, isoplanatic angle (anisoplanatism) and coherence time. Each component is investigated separately from the others through modeling to determine their respective effect on standard SAL image metrics. Analytic solutions were investigated for the SAL metrics of interest for each atmospheric impact mechanism. The isolation of each impact mechanism allows identification of mitigation techniques targeted at specific, and most dominant, sources of degradation. Results from this work will be critical in focusing future research on those effects which prove to be the most deleterious. Previous research proposed that the resolution of a SAL system was limited by the SAL coherence diameter/length r̃_0, which was derived from the average autocorrelation of the SAL phase history data. The present research confirms this through extensive wave optics simulations. A detailed study is conducted that shows, for long synthetic apertures, measuring the peak widths of individual phase histories may not accurately represent the true resolving power of the synthetic aperture. The SAL wave structure function and degree of coherence are investigated for individual turbulence mechanisms. Phase is shown to be an order of magnitude stronger than amplitude in its impact on imaging metrics. In all the analyses, piston variation and coherence diameter make up the majority of errors in SAL image formation.

  16. Indicators and Methods for Evaluating Economic, Ecosystem ...

    EPA Pesticide Factsheets

    The U.S. Human Well-being Index (HWBI) is a composite measure that incorporates economic, environmental, and societal well-being elements through the eight domains of connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion (USEPA 2012a; Smith et al. 2013). Twenty-eight services, represented by a collection of indicators and metrics, have been identified as influencing these domains of human well-being. By taking an inventory of stocks or measuring the results of a service, a relationship function can be derived to understand how changes in the provisioning of that service can influence the HWBI. An extensive review of existing services was performed to identify current services, indicators and metrics in use. This report describes the indicators and methods we have selected to evaluate the provisioning of economic, ecosystem, and social services related to human well-being, and provides metadata and methods for calculating service provisioning scores for the HWBI modeling framework.

  17. Telescoping Solar Array Concept for Achieving High Packaging Efficiency

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin; Pappa, Richard; Warren, Jay; Rose, Geoff

    2015-01-01

    Lightweight, high-efficiency solar arrays are required for future deep space missions using high-power Solar Electric Propulsion (SEP). Structural performance metrics for state-of-the art 30-50 kW flexible blanket arrays recently demonstrated in ground tests are approximately 40 kW/cu m packaging efficiency, 150 W/kg specific power, 0.1 Hz deployed stiffness, and 0.2 g deployed strength. Much larger arrays with up to a megawatt or more of power and improved packaging and specific power are of interest to mission planners for minimizing launch and life cycle costs of Mars exploration. A new concept referred to as the Compact Telescoping Array (CTA) with 60 kW/cu m packaging efficiency at 1 MW of power is described herein. Performance metrics as a function of array size and corresponding power level are derived analytically and validated by finite element analysis. Feasible CTA packaging and deployment approaches are also described. The CTA was developed, in part, to serve as a NASA reference solar array concept against which other proposed designs of 50-1000 kW arrays for future high-power SEP missions could be compared.

  18. Global groundwater sustainability as a function of reliability, resilience and vulnerability

    NASA Astrophysics Data System (ADS)

    Thomas, B. F.

    2017-12-01

    The world's largest aquifers are a fundamental source of freshwater used for agricultural irrigation and to meet human water needs. Their stored volumes of groundwater are therefore linked with water security, which becomes more relevant during periods of drought. This work focuses on understanding large-scale groundwater changes; we introduce an approach to evaluate groundwater sustainability at a global scale. We employ a groundwater drought index to assess performance metrics of sustainable use (reliability, resilience, vulnerability) for the largest and most productive global aquifers. Spatiotemporal changes in total water storage are derived from remote sensing observations of gravity anomalies, from which the groundwater drought index is inferred. The performance metrics are then combined into a sustainability index. The results reveal a complex relationship between these sustainable-use indicators, while accounting for monthly variability in groundwater storage. Combining the drought and sustainability indexes, as presented in this work, constitutes a measure for quantifying groundwater sustainability. This framework integrates changes in groundwater resources as a function of human influences and climate changes, thus opening a path to assess both progress towards sustainable use and water security.
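
    The reliability-resilience-vulnerability triplet has a classic formulation (Hashimoto et al., 1982) that can be computed directly from a thresholded drought-index series; the study's exact definitions and its GRACE-derived index are not reproduced here, so treat this as a generic sketch.

    ```python
    import numpy as np

    def rrv(drought_index, threshold=0.0):
        """Classic reliability-resilience-vulnerability metrics (Hashimoto et al., 1982)
        for a monthly drought-index series; 'failure' months fall below the threshold."""
        idx = np.asarray(drought_index, dtype=float)
        fail = idx < threshold
        reliability = 1.0 - fail.mean()
        # Resilience: chance that a failure month is followed by a non-failure month
        transitions = (fail[:-1] & ~fail[1:]).sum()
        resilience = transitions / fail[:-1].sum() if fail[:-1].sum() else 1.0
        # Vulnerability: mean shortfall depth during failure months
        vulnerability = (threshold - idx[fail]).mean() if fail.any() else 0.0
        return reliability, resilience, vulnerability

    idx = np.sin(np.linspace(0, 12 * np.pi, 120)) - 0.2   # hypothetical monthly index
    rel, res, vul = rrv(idx)
    print(rel, res, vul)   # combine, e.g., as rel * res * (1 - normalized vul)
    ```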

  19. Effects of Different Correlation Metrics and Preprocessing Factors on Small-World Brain Functional Networks: A Resting-State Functional MRI Study

    PubMed Central

    Liang, Xia; Wang, Jinhui; Yan, Chaogan; Shu, Ni; Xu, Ke; Gong, Gaolang; He, Yong

    2012-01-01

    Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01–0.027 Hz) versus slow-4 (0.027–0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from them, and we employed test-retest (TRT) analyses for further guidance on how to choose the “best” network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties are modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027–0.073 Hz band exhibited greater reliability than those in the 0.01–0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing choices on both the global and nodal topological properties of functional brain networks. This study also has important implications for how to choose reliable analytical schemes in brain network studies. PMID:22412922
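
    To make the pipeline concrete, here is a minimal sketch of one branch of such an analysis: Pearson correlation of regional time series, a fixed binarization threshold, and simple graph metrics (synthetic data; partial correlation would instead come from the normalized inverse covariance matrix, and real studies sweep thresholds rather than fixing one).

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    base = rng.normal(size=(200, 3))                               # three latent "networks"
    ts = np.repeat(base, 30, axis=1) + rng.normal(size=(200, 90))  # 200 volumes, 90 regions

    r = np.corrcoef(ts, rowvar=False)     # Pearson connectivity matrix
    np.fill_diagonal(r, 0)                # ignore self-connections

    G = nx.from_numpy_array((np.abs(r) > 0.25).astype(int))  # binarize at one fixed threshold
    print(nx.density(G), nx.average_clustering(G))            # simple topological properties
    ```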

  20. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat

    PubMed Central

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations in behaviour at the individual level. PMID:26107643

  1. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat.

    PubMed

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A; Kölzsch, Andrea; Prins, Herbert H T; de Boer, W Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations in behaviour at the individual level.
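
    The core of the approach, distances and turning angles between consecutive fixes feeding a decision tree, is easy to sketch (synthetic tracks; the study's CART models, interval handling and behaviour classes are richer than this).

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def movement_metrics(xy):
        """Step distance and turning angle between consecutive GPS fixes (x, y in metres)."""
        steps = np.diff(xy, axis=0)
        dist = np.hypot(steps[:, 0], steps[:, 1])
        heading = np.arctan2(steps[:, 1], steps[:, 0])
        turn = np.abs(np.diff(heading))
        turn = np.minimum(turn, 2 * np.pi - turn)   # wrap to [0, pi]
        return dist[1:], turn                       # align lengths

    # Hypothetical 1-min fixes: slow tortuous "foraging" vs fast straight "walking"
    rng = np.random.default_rng(2)
    forage = np.cumsum(rng.normal(scale=1.0, size=(200, 2)), axis=0)
    walk = np.cumsum(np.tile([8.0, 0.0], (200, 1)) + rng.normal(scale=0.5, size=(200, 2)), axis=0)

    d_f, t_f = movement_metrics(forage)
    d_w, t_w = movement_metrics(walk)
    X = np.column_stack([np.concatenate([d_f, d_w]), np.concatenate([t_f, t_w])])
    y = np.array([0] * len(d_f) + [1] * len(d_w))   # 0 = foraging, 1 = walking

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.score(X, y))
    ```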

  2. Measuring phenological variability from satellite imagery

    USGS Publications Warehouse

    Reed, Bradley C.; Brown, Jesslyn F.; Vanderzee, D.; Loveland, Thomas R.; Merchant, James W.; Ohlen, Donald O.

    1994-01-01

    Vegetation phenological phenomena are closely related to seasonal dynamics of the lower atmosphere and are therefore important elements in global models and vegetation monitoring. Normalized difference vegetation index (NDVI) data derived from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (AVHRR) satellite sensor offer a means of efficiently and objectively evaluating phenological characteristics over large areas. Twelve metrics linked to key phenological events were computed based on time-series NDVI data collected from 1989 to 1992 over the conterminous United States. These measures include the onset of greenness, time of peak NDVI, maximum NDVI, rate of greenup, rate of senescence, and integrated NDVI. Measures of central tendency and variability of the measures were computed and analyzed for various land cover types. Results from the analysis showed strong coincidence between the satellite-derived metrics and predicted phenological characteristics. In particular, the metrics identified interannual variability of spring wheat in North Dakota, characterized the phenology of four types of grasslands, and established the phenological consistency of deciduous and coniferous forests. These results have implications for large-area land cover mapping and monitoring. The utility of remotely sensed data as input to vegetation mapping is demonstrated by showing the distinct phenology of several land cover types. More stable information contained in ancillary data should be incorporated into the mapping process, particularly in areas with high phenological variability. In a regional or global monitoring system, an increase in variability in a region may serve as a signal to perform more detailed land cover analysis with higher resolution imagery.
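
    Several of the listed metrics can be sketched directly from a one-year NDVI series; note that Reed et al. derived onset of greenness from a delayed moving-average crossing, for which the fixed threshold below is only a stand-in.

    ```python
    import numpy as np

    def phenology_metrics(ndvi, threshold=0.3):
        """Simple seasonal metrics from a one-year NDVI series (one value per composite
        period). A fixed threshold stands in for the moving-average onset method."""
        ndvi = np.asarray(ndvi, dtype=float)
        above = np.nonzero(ndvi >= threshold)[0]
        onset = int(above[0]) if above.size else None      # onset of greenness
        peak_time = int(np.argmax(ndvi))                   # time of peak NDVI
        max_ndvi = float(ndvi.max())                       # maximum NDVI
        integrated = float(np.trapz(np.clip(ndvi - threshold, 0, None)))  # integrated NDVI
        return onset, peak_time, max_ndvi, integrated

    t = np.arange(26)                                      # hypothetical 14-day composites
    ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((t - 13) / 4.0) ** 2)
    print(phenology_metrics(ndvi))
    ```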

  3. How much energy is locked in the USA? Alternative metrics for characterising the magnitude of overweight and obesity derived from BRFSS 2010 data.

    PubMed

    Reidpath, Daniel D; Masood, Mohd; Allotey, Pascale

    2014-06-01

    Four metrics to characterise population overweight are described. Behavioral Risk Factor Surveillance System (BRFSS) data were used to estimate the weight the US population needed to lose to achieve a BMI < 25. The metrics for population-level overweight were total weight, total volume, total energy, and energy value. About 144 million people in the US need to lose 2.4 million metric tonnes. The volume of fat is 2.6 billion litres (1,038 Olympic-size swimming pools). The energy in the fat would power 90,000 households for a year and is worth around 162 million dollars. Four confronting ways of talking about national overweight and obesity are described. The value of the metrics remains to be tested.
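
    The volume figure can be roughly reproduced from the quoted mass under stated assumptions (adipose density near 0.9 kg/L and a nominal 2.5-million-litre Olympic pool); the small residual gap suggests the authors used slightly different constants.

    ```python
    mass_kg = 2.4e9                # 2.4 million metric tonnes of excess weight
    fat_density_kg_per_l = 0.9     # assumed approximate density of adipose tissue
    pool_litres = 2.5e6            # assumed nominal Olympic pool (50 m x 25 m x 2 m)

    volume_l = mass_kg / fat_density_kg_per_l
    print(volume_l / 1e9)          # ~2.7 billion litres, close to the quoted 2.6
    print(volume_l / pool_litres)  # ~1,070 pools, close to the quoted 1,038
    ```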

  4. Wake orientation and its influence on the performance of diffusers with inlet distortion

    NASA Astrophysics Data System (ADS)

    Coffman, Jesse M.

    Distortion at the inlet to diffusers is very common in internal flow applications. Inlet velocity distortion influences the pressure recovery and flow regimes of diffusers. This work introduced a centerline wake at the square inlet of a plane wall diffuser in two orthogonal orientations to investigate its influence on diffuser performance. Two different wakes were generated. One was from a mesh strip, which produced a velocity deficit with low turbulence intensity and two shear layers. The other wake generator was a D-shaped cylinder, which produced a wake with high turbulence intensity and large length scales. These inlet conditions were generated for diffusers with diffusion angles of 3° and 6°. A pair of RANS simulations were used to investigate the influence of the orthogonal inlet orientations on the solution. The inlet conditions were taken from the inlet velocity field measured for the mesh strip. The flow development and exit conditions showed some similarities and some differences with the experimental results. The performance of a diffuser is typically measured through the static pressure recovery coefficient and the total pressure losses. The definitions of these metrics commonly found in the literature were insufficient to discern differences between the wake orientations. New metrics were derived using the momentum flux profile parameter, which related the static pressure recovery, the total pressure losses, and the velocity uniformity at the inlet and exit of the diffuser. These metrics revealed a trade-off between the total pressure losses and the uniformity of the velocity field.
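
    For reference, the conventional diffuser metrics that the work found insufficient are straightforward to compute from area-averaged pressures; the momentum-flux-based metrics derived in the thesis are not reproduced here.

    ```python
    def diffuser_performance(p_in, p_out, p0_in, p0_out):
        """Conventional diffuser metrics: static pressure recovery coefficient Cp and
        total pressure loss coefficient K, both normalized by inlet dynamic pressure."""
        q_in = p0_in - p_in            # inlet dynamic pressure
        cp = (p_out - p_in) / q_in     # static pressure recovery
        k = (p0_in - p0_out) / q_in    # total pressure loss
        return cp, k

    # Hypothetical area-averaged gauge pressures in Pa
    print(diffuser_performance(p_in=0.0, p_out=350.0, p0_in=600.0, p0_out=520.0))
    # -> Cp ~ 0.58, K ~ 0.13
    ```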

  5. Grading the Metrics: Performance-Based Funding in the Florida State University System

    ERIC Educational Resources Information Center

    Cornelius, Luke M.; Cavanaugh, Terence W.

    2016-01-01

    A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…

  6. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination.

    PubMed

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-05-01

    The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.
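
    The group comparisons reported are standard two-sample t-tests and one-way ANOVA; a minimal sketch with synthetic scores (scipy's ttest_ind and f_oneway) looks like this.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Hypothetical targeting-accuracy scores (lower is better) for the two groups
    consultants = rng.normal(loc=4.0, scale=1.0, size=14)
    trainees = rng.normal(loc=5.2, scale=1.2, size=26)

    t, p = stats.ttest_ind(consultants, trainees)   # two-sample comparison, as in the study
    print(t, p)

    # One-way ANOVA across years of experience (0-1, 1-2, 3+), as in the study
    g1, g2, g3 = rng.normal(5.5, 1.2, 10), rng.normal(5.0, 1.1, 10), rng.normal(4.3, 1.0, 6)
    f, p = stats.f_oneway(g1, g2, g3)
    print(f, p)
    ```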

  7. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  8. Hawking radiation as tunneling from squashed Kaluza-Klein black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuno, Ken; Umetsu, Koichiro

    2011-03-15

    We discuss Hawking radiation from a five-dimensional squashed Kaluza-Klein black hole on the basis of the tunneling mechanism. A simple method, which was recently suggested by Umetsu, may be used to extend the original derivation by Parikh and Wilczek to various black holes. That is, we use the two-dimensional effective metric, which is obtained by the dimensional reduction near the horizon, as the background metric. Using the same method, we derive both the desired result of the Hawking temperature and the effect of the backreaction associated with the radiation in the squashed Kaluza-Klein black hole background.

  9. Detecting Anisotropic Inclusions Through EIT

    NASA Astrophysics Data System (ADS)

    Cristina, Jan; Päivärinta, Lassi

    2017-12-01

    We study the evolution equation ∂_t u = −Λ_t u, where Λ_t is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σ_t. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ_0 to the boundaries ∂Σ_t. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference of a background metric g and an inclusion metric g + χ_Σ(h − g) on a manifold M.

  10. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
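
    One simple ingredient of multi-objective weighting is identifying the Pareto-optimal (non-dominated) models across metrics before assigning weights; the sketch below is a crude illustration of that idea with hypothetical error scores, not the study's actual optimization.

    ```python
    import numpy as np

    def pareto_front(errors):
        """Indices of models not dominated on any column of an (n_models, n_metrics)
        error matrix (lower is better on every metric)."""
        n = errors.shape[0]
        keep = []
        for i in range(n):
            dominated = any(
                np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
                for j in range(n) if j != i
            )
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical errors for 5 models on 3 evaluation metrics
    errors = np.array([
        [0.2, 1.1, 0.30],
        [0.5, 0.8, 0.40],
        [0.3, 0.9, 0.25],
        [0.6, 1.2, 0.50],   # dominated by the third model
        [0.2, 1.0, 0.35],
    ])
    front = pareto_front(errors)
    weights = np.zeros(len(errors))
    weights[front] = 1.0 / errors[front].sum(axis=1)   # crude inverse-error weighting
    weights /= weights.sum()
    print(front, weights.round(3))
    ```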

  11. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross-cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.

  12. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities: the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbor relations and the descriptor space.

  13. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities: the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbor relations and the descriptor space.
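
    Both records describe learning a Mahalanobis-type metric to improve neighbor-based models. LMNN itself lives in the separate metric-learn package; as a stand-in, scikit-learn's NeighborhoodComponentsAnalysis illustrates the same idea on hypothetical descriptors.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import Pipeline

    # Hypothetical descriptor matrix standing in for chemical descriptors
    X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    baseline = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

    learned = Pipeline([
        ("nca", NeighborhoodComponentsAnalysis(random_state=0)),  # learns a linear metric
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ]).fit(X_tr, y_tr)

    print(baseline.score(X_te, y_te), learned.score(X_te, y_te))
    ```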

  14. Improving Climate Projections Using "Intelligent" Ensembles

    NASA Technical Reports Server (NTRS)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, or the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
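
    The simplest "intelligent" ensemble weights each model by a skill metric instead of equally; a toy sketch follows (synthetic truth and models, weights from in-sample RMSE, which real studies would cross-validate).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    truth = rng.normal(size=100)                          # observed field (hypothetical)
    models = truth + rng.normal(scale=rng.uniform(0.2, 1.5, size=(7, 1)), size=(7, 100))

    rmse = np.sqrt(((models - truth) ** 2).mean(axis=1))  # performance metric per model
    weights = 1.0 / rmse ** 2                             # skill-based weights
    weights /= weights.sum()

    equal_mean = models.mean(axis=0)
    weighted_mean = weights @ models

    print(np.sqrt(((equal_mean - truth) ** 2).mean()))
    print(np.sqrt(((weighted_mean - truth) ** 2).mean()))  # typically lower
    ```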

  15. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

    The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.

  16. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  17. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  18. Quantification of MagLIF stagnation morphology using the Mallat Scattering Transformation

    NASA Astrophysics Data System (ADS)

    Glinsky, Michael; Weis, Matthew; Jennings, Christopher; Ampleford, David; Harding, Eric; Knapp, Patrick; Gomez, Matthew

    2017-10-01

    The morphology of the stagnated plasma resulting from MagLIF is measured by imaging the self-emission x-rays coming from the multi-keV plasma. Equivalent diagnostic response can be derived from integrated rad-hydro simulations from programs such as Hydra and Gorgon. There have been only limited quantitative ways to compare the image morphology, that is the texture, of the simulations to that of the experiments, to compare one experiment to another, or to compare one simulation to another. We have developed a metric of image morphology based on the Mallat Scattering Transformation, a transformation that has proved to be effective at distinguishing textures, sounds, and written characters. This metric has demonstrated excellent performance in classifying an ensemble of synthetic stagnation images. A good regression of the scattering coefficients to the parameters used to generate the synthetic images was found. Finally, the metric has been used to quantitatively compare simulations to experimental self-emission images. Sandia National Laboratories is a multi-mission laboratory managed and operated by NTESS, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's NNSA under contract DE-NA0003525.
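
    Public implementations of the scattering transform exist, e.g. the kymatio package; whether kymatio's variant matches the authors' exact transform is an assumption, but a texture descriptor of a stagnation image would look roughly like this, with regression of the coefficients against generator parameters as the follow-on step.

    ```python
    import numpy as np
    from kymatio.numpy import Scattering2D

    # Hypothetical 64x64 self-emission image (e.g., a synthetic stagnation column)
    rng = np.random.default_rng(5)
    image = rng.random((64, 64)).astype(np.float32)

    scattering = Scattering2D(J=3, shape=(64, 64))   # 2^J sets the coarsest scale
    coeffs = scattering(image)                       # scattering-coefficient feature maps

    # Spatially averaged coefficients give a translation-stable texture descriptor
    descriptor = coeffs.mean(axis=(-2, -1))
    print(descriptor.shape)
    ```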

  19. Sobolev metrics on diffeomorphism groups and the derived geometry of spaces of submanifolds

    NASA Astrophysics Data System (ADS)

    Micheli, Mario; Michor, Peter W.; Mumford, David

    2013-06-01

    Given a finite-dimensional manifold N, the group Diff_S(N) of diffeomorphisms of N which decrease suitably rapidly to the identity acts on the manifold B(M,N) of submanifolds of N of diffeomorphism-type M, where M is a compact manifold with dim M < dim N. Given the right-invariant weak Riemannian metric on Diff_S(N) induced by a quite general operator L: X_S(N) → Γ(T*N ⊗ vol(N)), we consider the induced weak Riemannian metric on B(M,N) and compute its geodesics and sectional curvature. To do this, we derive a covariant formula for the curvature in finite and infinite dimensions, we show how it makes O'Neill's formula very transparent, and we finally use it to compute the sectional curvature on B(M,N).

  20. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross-cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.

  1. Ideal AFROC and FROC observers.

    PubMed

    Khurd, Parmeshwar; Liu, Bin; Gindi, Gene

    2010-02-01

    Detection of multiple lesions in images is a medically important task, and free-response receiver operating characteristic (FROC) analysis and its variants, such as alternative FROC (AFROC) analysis, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may turn out to be valuable for imaging system optimization and in the design of computer-aided diagnosis techniques for lesion detection in medical images. In this paper, we derive ideal AFROC and FROC observers. They are ideal in that they maximize, amongst all decision strategies, the area, or any partial area, under the associated AFROC or FROC curve. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We also consider a Bayes risk analysis for the multiple-signal detection task with an appropriate definition of costs. A general decision strategy that minimizes Bayes risk is derived. With particular cost constraints, this general decision strategy reduces to the decision strategy associated with the ideal AFROC or FROC observer.
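
    The ideal-observer derivation is beyond a snippet, but the AFROC operating points such an observer is optimized over are easy to compute empirically from mark ratings (synthetic scores; lesions never marked would enter with a rating of negative infinity).

    ```python
    import numpy as np

    def afroc_points(lesion_scores, normal_image_max_fp_scores):
        """Empirical AFROC operating points: lesion localization fraction versus the
        false-positive fraction from the highest-rated mark on each normal image."""
        thresholds = np.unique(np.concatenate([lesion_scores, normal_image_max_fp_scores]))[::-1]
        llf = np.array([(lesion_scores >= t).mean() for t in thresholds])
        fpf = np.array([(normal_image_max_fp_scores >= t).mean() for t in thresholds])
        return fpf, llf

    rng = np.random.default_rng(6)
    lesions = rng.normal(2.0, 1.0, 300)   # ratings of marks on true lesions
    normals = rng.normal(0.0, 1.0, 200)   # max FP rating per signal-absent image
    fpf, llf = afroc_points(lesions, normals)
    print(np.trapz(llf, fpf))             # area under the empirical AFROC curve
    ```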

  2. Climate Classification is an Important Factor in Assessing Hospital Performance Metrics

    NASA Astrophysics Data System (ADS)

    Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.

    2017-12-01

    Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their 'performance' along these metrics. Various socioeconomic factors are taken into consideration when determining an individual hospital's performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate into future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.

  3. An investigation of fighter aircraft agility

    NASA Technical Reports Server (NTRS)

    Valasek, John; Downing, David R.

    1993-01-01

    This report attempts to unify in a single document the results of a series of studies on fighter aircraft agility funded by the NASA Ames Research Center, Dryden Flight Research Facility and conducted at the University of Kansas Flight Research Laboratory during the period January 1989 through December 1993. New metrics proposed by pilots and the research community to assess fighter aircraft agility are collected and analyzed. The report develops a framework for understanding the context into which the various proposed fighter agility metrics fit in terms of application and testing. Since new metrics continue to be proposed, this report does not claim to contain every proposed fighter agility metric. Flight test procedures, test constraints, and related criteria are developed. Instrumentation required to quantify agility via flight test is considered, as is the sensitivity of the candidate metrics to deviations from nominal pilot command inputs, which is studied in detail. Instead of supplying specific, detailed conclusions about the relevance or utility of one candidate metric versus another, the authors have attempted to provide sufficient data and analyses for readers to formulate their own conclusions. Readers are therefore ultimately responsible for judging exactly which metrics are 'best' for their particular needs. Additionally, it is not the intent of the authors to suggest combat tactics or other actual operational uses of the results and data in this report. This has been left up to the user community. Twenty of the candidate agility metrics were selected for evaluation with high fidelity, nonlinear, non real-time flight simulation computer programs of the F-5A Freedom Fighter, F-16A Fighting Falcon, F-18A Hornet, and X-29A. The information and data presented on the 20 candidate metrics which were evaluated will assist interested readers in conducting their own extensive investigations. The report provides a definition and analysis of each metric; details of how to test and measure the metric, including any special data reduction requirements; typical values for the metric obtained using one or more aircraft types; and a sensitivity analysis if applicable. The report is organized as follows. The first chapter in the report presents a historical review of air combat trends which demonstrate the need for agility metrics in assessing the combat performance of fighter aircraft in a modern, all-aspect missile environment. The second chapter presents a framework for classifying each candidate metric according to time scale (transient, functional, instantaneous), further subdivided by axis (pitch, lateral, axial). The report is then broadly divided into two parts, with the transient agility metrics (pitch, lateral, axial) covered in chapters three, four, and five, and the functional agility metrics covered in chapter six. Conclusions, recommendations, and an extensive reference list and bibliography are also included. Five appendices contain a comprehensive list of the definitions of all the candidate metrics; a description of the aircraft models and flight simulation programs used for testing the metrics; several relations and concepts which are fundamental to the study of lateral agility; an in-depth analysis of the axial agility metrics; and a derivation of the relations for the instantaneous agility and their approximations.

  4. Koeppen Bioclimatic Metrics for Evaluating CMIP5 Simulations of Historical Climate

    NASA Astrophysics Data System (ADS)

    Phillips, T. J.; Bonfils, C.

    2012-12-01

    The classic Koeppen bioclimatic classification scheme associates generic vegetation types (e.g. grassland, tundra, broadleaf or evergreen forests, etc.) with regional climate zones defined by the observed amplitude and phase of the annual cycles of continental temperature (T) and precipitation (P). Koeppen classification thus can provide concise, multivariate metrics for evaluating climate model performance in simulating the regional magnitudes and seasonalities of climate variables that are of critical importance for living organisms. In this study, 14 Koeppen vegetation types are derived from annual-cycle climatologies of T and P in some three dozen CMIP5 simulations of 1980-1999 climate, a period for which observational data provide a reliable global validation standard. Metrics for evaluating the ability of the CMIP5 models to simulate the correct locations and areas of the vegetation types, as well as measures of overall model performance, are also developed. It is found that the CMIP5 models are most deficient in simulating 1) the climates of the drier zones (e.g. desert, savanna, grassland, steppe vegetation types) that are located in the Southwestern U.S. and Mexico, Eastern Europe, Southern Africa, and Central Australia, as well as 2) the climate of regions such as Central Asia and Western South America where topography plays a central role. (Detailed analyses of regional biases in the annual cycles of T and P of selected simulations exemplifying general model performance problems will also be presented.) The more encouraging results include evidence for a general improvement in CMIP5 performance relative to that of older CMIP3 models. Within CMIP5, the more complex Earth System Models (ESMs) with prognostic biogeochemistry also perform comparably to the corresponding global models that simulate only the "physical" climate. Acknowledgments: This work was funded by the U.S. Department of Energy Office of Science and was performed at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
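
    A deliberately simplified sketch of the classification idea: assigning a major Koeppen class to a single grid cell from its 12-month T and P climatologies. The thresholds are rough textbook approximations, not the paper's rules, and the full scheme distinguishes many more sub-types.

        import numpy as np

        def koppen_major_class(T, P):
            """T: 12 monthly mean temperatures (deg C); P: 12 monthly precip (mm)."""
            T, P = np.asarray(T, float), np.asarray(P, float)
            if T.max() < 10.0:                       # E: polar
                return "E"
            if P.sum() < 20.0 * T.mean() + 140.0:    # crude aridity threshold (mm/yr)
                return "B"                           # B: arid (desert/steppe)
            if T.min() > 18.0:
                return "A"                           # A: tropical
            if T.min() > -3.0:
                return "C"                           # C: temperate
            return "D"                               # D: continental/boreal

        # A warm, dry example cell classifies as arid ("B"):
        print(koppen_major_class([5, 7, 11, 15, 20, 25, 28, 27, 22, 16, 10, 6],
                                 [20, 18, 15, 10, 8, 4, 2, 3, 6, 10, 15, 20]))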

  5. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview: Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics: We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms: Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
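
    A small sketch of the comparison the paper performs, using one stand-alone metric (modularity) and two information-recovery metrics (NMI, adjusted Rand) on a synthetic planted-partition graph; the library calls are standard networkx/scikit-learn routines, not the authors' code.

        import networkx as nx
        from sklearn.metrics import (adjusted_rand_score,
                                     normalized_mutual_info_score)

        # 4 planted communities of 50 nodes each; ground truth is known.
        G = nx.planted_partition_graph(4, 50, p_in=0.2, p_out=0.02, seed=1)
        truth = [n // 50 for n in G.nodes]

        parts = nx.community.louvain_communities(G, seed=1)
        modularity = nx.community.modularity(G, parts)

        labels = [0] * G.number_of_nodes()
        for k, part in enumerate(parts):
            for n in part:
                labels[n] = k

        # A high modularity does not guarantee high information recovery.
        print(modularity,
              normalized_mutual_info_score(truth, labels),
              adjusted_rand_score(truth, labels))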

  6. Prioritizing Urban Habitats for Connectivity Conservation: Integrating Centrality and Ecological Metrics.

    PubMed

    Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin

    2015-09-01

    Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying habitat patches that are important to ecological connectivity is a priority for many conservation strategies, and the application of 'graph theory' has been shown to provide useful information on connectivity. Despite the large number of connectivity metrics derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compares these with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological" metrics, "ecological" metrics, and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes. These links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarity and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Due to the differences in the way the metrics operate, a suitable metric should be chosen that best meets the objectives established by the decision maker.
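
    An illustrative sketch (hypothetical patches and distances) of ranking habitat nodes with one topological metric, betweenness centrality, on a network whose links are weighted by effective inter-patch distance.

        import networkx as nx

        G = nx.Graph()
        # (patch_a, patch_b, effective_distance): hypothetical values
        edges = [("p1", "p2", 1.2), ("p2", "p3", 0.7),
                 ("p3", "p4", 2.5), ("p2", "p4", 1.9), ("p4", "p5", 0.9)]
        G.add_weighted_edges_from(edges)

        # Treating effective distance as a traversal cost makes central
        # "stepping stone" patches score highly.
        bc = nx.betweenness_centrality(G, weight="weight")
        for patch, score in sorted(bc.items(), key=lambda kv: -kv[1]):
            print(patch, round(score, 3))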

  7. Performance metrics for the assessment of satellite data products: an ocean color case study

    EPA Science Inventory

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...

  8. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic m...

  9. Performance Metrics for Soil Moisture Retrievals and Applications Requirements

    USDA-ARS?s Scientific Manuscript database

    Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the accuracy of geophysical retrievals against true fields. These metrics are generally related; nevertheless each has advantages and disadvantages. In this study we explore the relat...
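
    The two metrics in a few lines, with made-up retrieval values for illustration:

        import numpy as np

        def rmse(retrieved, truth):
            retrieved, truth = np.asarray(retrieved), np.asarray(truth)
            return float(np.sqrt(np.mean((retrieved - truth) ** 2)))

        def correlation(retrieved, truth):
            return float(np.corrcoef(retrieved, truth)[0, 1])

        truth = np.array([0.10, 0.22, 0.31, 0.18, 0.05])   # "true" soil moisture
        retr  = np.array([0.12, 0.20, 0.35, 0.15, 0.08])   # retrieval
        print(rmse(retr, truth), correlation(retr, truth))
        # For unbiased anomalies the two are linked through
        # RMSE^2 = var_r + var_t - 2*r*std_r*std_t, which is one sense in
        # which the abstract calls them "generally related".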

  10. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. In contrast, experience with evaluating spatial performance has not kept pace with the wide availability of spatial observations and the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial pattern oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics (connectivity analysis and the fractions skill score) are tested in a spatial pattern oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparison of related variables that may differ in units, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential of SPAEF across environmental disciplines dealing with spatially distributed modelling.
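
    A compact sketch following the published SPAEF formulation as we read it: correlation (alpha), ratio of coefficients of variation (beta), and histogram overlap of z-scored fields (gamma), combined as a distance from the ideal point (1, 1, 1). It assumes strictly positive fields such as evapotranspiration; the data here are synthetic.

        import numpy as np

        def spaef(sim, obs, bins=100):
            sim, obs = np.ravel(sim), np.ravel(obs)
            alpha = np.corrcoef(sim, obs)[0, 1]            # pattern correlation
            beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
            # histogram overlap of z-scored fields (bias-insensitive)
            zs = (sim - sim.mean()) / sim.std()
            zo = (obs - obs.mean()) / obs.std()
            lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
            hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
            ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
            gamma = np.minimum(hs, ho).sum() / ho.sum()
            return 1 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2
                               + (gamma - 1) ** 2)

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 1.0, (50, 50))                 # synthetic ET pattern
        sim = obs * 1.1 + rng.normal(0, 0.2, (50, 50))
        print(spaef(sim, obs))                              # 1 = perfect match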

  11. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    PubMed Central

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005

  12. Immersive training and mentoring for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Nistor, Vasile; Allen, Brian; Dutson, E.; Faloutsos, P.; Carman, G. P.

    2007-04-01

    We describe in this paper a training system for minimally invasive surgery (MIS) that creates an immersive training simulation by recording the pathways of the instruments from an expert surgeon while performing an actual training task. Instrument spatial pathway data are stored and later accessed at the training station in order to visualize the ergonomic experience of the expert surgeon and trainees. Our system is based on tracking the spatial position and orientation of the instruments on the console for both the expert surgeon and the trainee. The technology is the result of recent developments in miniaturized position sensors that can be integrated seamlessly into the MIS instruments without compromising functionality. In order to continuously monitor the positions of laparoscopic tool tips, DC magnetic tracking sensors are used. A hardware-software interface transforms the coordinate data points into instrument pathways, while an intuitive graphical user interface displays the instruments' spatial position and orientation for the mentor/trainee, along with endoscopic video information. These data are recorded and saved in a database for subsequent immersive training and training performance analysis. We use two 6-DOF DC magnetic trackers with a sensor diameter of just 1.3 mm (small enough for insertion into 4 French catheters), embedded in the shaft of an endoscopic grasper and a needle driver. One sensor is located at the distal end of the shaft while the second sensor is located at the proximal end of the shaft. The placement of these sensors does not impede the functionality of the instrument. Since the sensors are located inside the shaft, there are no sealing issues between the valve of the trocar and the instrument. We devised a peg transfer training task in accordance with validated training procedures, and tested our system on its ability to differentiate between the expert surgeon and the novices, based on a set of performance metrics. These performance metrics (motion smoothness, total path length, and time to completion) are derived from the kinematics of the instrument. An affine combination of the above-mentioned metrics is provided to give a general score for the training performance. Clear differentiation between the expert surgeons and the novice trainees is visible in the test results. Strictly kinematics-based performance metrics can be used to evaluate the training progress of MIS trainees in the context of UCLA - LTS.
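
    An illustrative sketch of the three kinematic metrics named above, computed from tracked tool-tip positions; the affine weights in the scoring function are hypothetical, not the system's calibrated values.

        import numpy as np

        def kinematic_metrics(xyz, dt):
            """xyz: (N, 3) tool-tip positions sampled every dt seconds."""
            steps = np.diff(xyz, axis=0)
            path_length = np.linalg.norm(steps, axis=1).sum()
            time_to_completion = (len(xyz) - 1) * dt
            # smoothness via negated integrated squared jerk
            # (third derivative of position; closer to zero = smoother)
            jerk = np.diff(xyz, n=3, axis=0) / dt ** 3
            smoothness = -np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt
            return path_length, time_to_completion, smoothness

        def score(pl, t, sm, w=(-0.3, -0.3, 0.4), b=100.0):
            return b + w[0] * pl + w[1] * t + w[2] * sm   # hypothetical weights

        rng = np.random.default_rng(0)
        track = np.cumsum(rng.normal(0, 1e-3, (500, 3)), axis=0)  # fake 10 s track
        print(score(*kinematic_metrics(track, dt=0.02)))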

  13. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  14. Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.

    PubMed

    Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K

    2018-06-15

    The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
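
    A sketch of the statistical step described: a Mann-Whitney U test per metric with a Bonferroni-corrected alpha, run on synthetic expert/novice samples.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)
        # hypothetical per-procedure scores: 4 experts and 7 novices, 10 trials each
        metrics = {
            "time_to_completion": (rng.normal(40, 5, 40), rng.normal(60, 10, 70)),
            "surgical_errors":    (rng.poisson(1.0, 40),  rng.poisson(2.8, 70)),
        }

        alpha = 0.05 / 5          # Bonferroni over the five primary metrics
        for name, (expert, novice) in metrics.items():
            u, p = mannwhitneyu(expert, novice, alternative="two-sided")
            print(name, u, p, p < alpha)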

  15. Transfer of uncertainty of space-borne high resolution rainfall products at ungauged regions

    NASA Astrophysics Data System (ADS)

    Tang, Ling

    Hydrologically relevant characteristics of high-resolution (~0.25 degree, 3-hourly) satellite rainfall uncertainty were derived as a function of season and location using a six-year (2002-2007) archive of National Aeronautics and Space Administration (NASA)'s Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) precipitation data. The Next Generation Radar (NEXRAD) Stage IV rainfall data over the continental United States was used as ground validation (GV) data. A geostatistical mapping scheme was developed and tested for transfer (i.e., spatial interpolation) of uncertainty information from GV regions to the vast non-GV regions by leveraging the error characterization work carried out in the earlier step. The open question explored here was, "If 'error' is defined on the basis of independent ground validation (GV) data, how are error metrics estimated for a satellite rainfall data product without the need for extensive GV data?" After a quantitative analysis of the spatial and temporal structure of the satellite rainfall uncertainty, a proof-of-concept geostatistical mapping scheme (based on the kriging method) was evaluated. The idea was to understand how realistic the idea of 'transfer' is for the GPM era. It was found that it was indeed technically possible to transfer error metrics from a gauged to an ungauged location for certain error metrics and that a regionalized error metric scheme for GPM may be possible. The uncertainty transfer scheme based on a commonly used kriging method (ordinary kriging) was then assessed further at various timescales (climatologic, seasonal, monthly and weekly), and as a function of the density of GV coverage. The results indicated that when the transfer scheme estimated uncertainty metrics at timescales finer than seasonal (ranging from 3-6 hourly to weekly-monthly), the effectiveness of the uncertainty transfer worsened significantly. Next, a comprehensive assessment of different kriging methods for spatial transfer (interpolation) of error metrics was performed. Three kriging methods were compared: ordinary kriging (OK), indicator kriging (IK), and disjunctive kriging (DK). An additional comparison with the simple inverse distance weighting (IDW) method was also performed to quantify the added benefit (if any) of using geostatistical methods. The overall performance ranking of the kriging methods was found to be as follows: OK=DK > IDW > IK. Lastly, various metrics of satellite rainfall uncertainty were identified for two large continental landmasses that share many similar Köppen climate zones, the United States and Australia. The dependence of uncertainty on gauge density was then investigated. The investigation revealed that the first- and second-order moments of error are the most amenable to a Köppen-type climate classification across continental landmasses.
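
    For reference, the simplest interpolator compared above, inverse distance weighting, in a few lines; the site coordinates and error-metric values are hypothetical.

        import numpy as np

        def idw(xy_gauged, values, xy_target, power=2.0):
            """Inverse distance weighting of a point metric to one target."""
            d = np.linalg.norm(xy_gauged - xy_target, axis=1)
            if np.any(d == 0):
                return float(values[d == 0][0])   # target coincides with a gauge
            w = 1.0 / d ** power
            return float(np.sum(w * values) / np.sum(w))

        xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.5]])
        bias = np.array([0.8, 1.1, 0.5, 1.4])        # error metric at GV sites
        print(idw(xy, bias, np.array([0.7, 0.6])))   # estimate at ungauged site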

  16. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals.

    PubMed

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-21

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  17. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals

    NASA Astrophysics Data System (ADS)

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-01

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  18. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  19. Should regional ventilation function be considered during radiation treatment planning to prevent radiation-induced complications?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, Fujun; Jeudy, Jean; D'Souza, Warren

    Purpose: To investigate the incorporation of pretherapy regional ventilation function in predicting radiation fibrosis (RF) in stage III nonsmall cell lung cancer (NSCLC) patients treated with concurrent thoracic chemoradiotherapy. Methods: Thirty-seven patients with stage III NSCLC were retrospectively studied. Patients received one cycle of cisplatin–gemcitabine, followed by two to three cycles of cisplatin–etoposide concurrently with involved-field thoracic radiotherapy (46–66 Gy; 2 Gy/fraction). Pretherapy regional ventilation images of the lung were derived from 4D computed tomography via a density change–based algorithm with mass correction. In addition to the conventional dose–volume metrics (V20, V30, V40, and mean lung dose), dose–function metrics (fV20, fV30, fV40, and functional mean lung dose) were generated by combining regional ventilation and radiation dose. A new class of metrics was derived and referred to as dose–subvolume metrics (sV20, sV30, sV40, and subvolume mean lung dose); these were defined as the conventional dose–volume metrics computed on the functional lung. Area under the receiver operating characteristic curve (AUC) values and logistic regression analyses were used to evaluate these metrics in predicting hallmark characteristics of RF (lung consolidation, volume loss, and airway dilation). Results: AUC values for the dose–volume metrics in predicting lung consolidation, volume loss, and airway dilation were 0.65–0.69, 0.57–0.70, and 0.69–0.76, respectively. The respective ranges for dose–function metrics were 0.63–0.66, 0.61–0.71, and 0.72–0.80, and for dose–subvolume metrics were 0.50–0.65, 0.65–0.75, and 0.73–0.85. Using an AUC value of 0.70 as the cutoff suggested that at least one of each type of metric (dose–volume, dose–function, dose–subvolume) was predictive for volume loss and airway dilation, whereas lung consolidation could not be accurately predicted by any of the metrics. Logistic regression analyses showed that dose–function and dose–subvolume metrics were significant (P values ≤ 0.02) in predicting volume loss and airway dilation. The likelihood ratio test showed that when combining dose–function and/or dose–subvolume metrics with dose–volume metrics, the achieved improvements of prediction accuracy on volume loss and airway dilation were significant (P values ≤ 0.04). Conclusions: The authors' results demonstrated that the inclusion of regional ventilation function improved accuracy in predicting RF. In particular, dose–subvolume metrics provided a promising method for preventing radiation-induced pulmonary complications.
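
    A sketch of the two metric families from voxelwise arrays (hypothetical inputs): a conventional dose–volume metric (V20) and its ventilation-weighted dose–function analogue (fV20).

        import numpy as np

        def v_dose(dose, threshold_gy=20.0):
            """Percent of lung volume receiving >= threshold dose (e.g. V20)."""
            return 100.0 * np.mean(dose >= threshold_gy)

        def fv_dose(dose, ventilation, threshold_gy=20.0):
            """Percent of total ventilation in voxels >= threshold (e.g. fV20)."""
            mask = dose >= threshold_gy
            return 100.0 * ventilation[mask].sum() / ventilation.sum()

        rng = np.random.default_rng(1)
        dose = rng.uniform(0, 60, 10000)          # Gy, per lung voxel
        vent = rng.gamma(2.0, 1.0, 10000)         # relative regional ventilation
        print(v_dose(dose), fv_dose(dose, vent))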

  20. Overcoming the effects of false positives and threshold bias in graph theoretical analyses of neuroimaging data.

    PubMed

    Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K

    2015-09-01

    Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness, and small-worldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
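
    A sketch of the threshold sweep underlying MTPC: computing one network metric over a range of proportional thresholds rather than a single a priori cutoff. The connectivity matrix here is random stand-in data, and global efficiency is computed on the binarized graph.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        w = rng.random((30, 30))
        w = (w + w.T) / 2                 # symmetric "connectivity" matrix
        np.fill_diagonal(w, 0)

        for keep in (0.1, 0.2, 0.3, 0.4):              # proportional thresholds
            cut = np.quantile(w[w > 0], 1 - keep)      # keep strongest edges
            G = nx.from_numpy_array(np.where(w >= cut, w, 0.0))
            print(keep, nx.global_efficiency(G))
        # MTPC then tests group effects at each threshold and looks for
        # clusters of thresholds with sustained significance.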

  1. Effects of urbanization on benthic macroinvertebrate assemblages in contrasting environmental settings: Boston, Massachusetts; Birmingham, Alabama; and Salt Lake City, Utah

    USGS Publications Warehouse

    Cuffney, T.F.; Zappia, H.; Giddings, E.M.P.; Coles, J.F.

    2005-01-01

    Responses of invertebrate assemblages along gradients of urban intensity were examined in three metropolitan areas with contrasting climates and topography (Boston, Massachusetts; Birmingham, Alabama; Salt Lake City, Utah). Urban gradients were defined using an urban intensity index (UII) derived from basin-scale population, infrastructure, land-use, land-cover, and socioeconomic characteristics. Responses based on assemblage metrics, indices of biotic integrity (B-IBI), and ordinations were readily detected in all three urban areas and many responses could be accurately predicted simply using regional UIIs. Responses to UII were linear and did not indicate any initial resistance to urbanization. Richness metrics were better indicators of urbanization than were density metrics. Metrics that were good indicators were specific to each study except for a richness-based tolerance metric (TOLr) and one B-IBI. Tolerances to urbanization were derived for 205 taxa. These tolerances differed among studies and with published tolerance values, but provided similar characterizations of site conditions. Basin-scale land-use changes were the most important variables for explaining invertebrate responses to urbanization. Some chemical and instream physical habitat variables were important in individual studies, but not among studies. Optimizing the study design to detect basin-scale effects may have reduced the ability to detect local-scale effects. © 2005 by the American Fisheries Society.

  2. Students' Understanding of the Function-Derivative Relationship When Learning Economic Concepts

    ERIC Educational Resources Information Center

    Ariza, Angel; Llinares, Salvador; Valls, Julia

    2015-01-01

    The aim of this study is to characterise students' understanding of the function-derivative relationship when learning economic concepts. To this end, we use a fuzzy metric (Chang 1968) to identify the development of economic concept understanding that is defined by the function-derivative relationship. The results indicate that the understanding…

  3. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
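
    An illustrative sketch, not the NASA implementation: numerically integrating the volume under a surface defined over false positive rate and diagnostic latency, using a synthetic true-positive-rate surface.

        import numpy as np
        from scipy.integrate import trapezoid

        fpr = np.linspace(0, 1, 101)                 # false positive rate axis
        latency = np.linspace(0, 10, 51)             # diagnostic latency axis (s)
        F, L = np.meshgrid(fpr, latency, indexing="ij")
        tpr_surface = F ** 0.25 * np.exp(-0.05 * L)  # synthetic TPR surface

        inner = trapezoid(tpr_surface, fpr, axis=0)  # area under curve per latency
        volume = trapezoid(inner, latency)           # volume under the surface
        print(volume / (latency[-1] - latency[0]))   # normalized to [0, 1]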

  4. Cosmology of non-minimal derivative coupling to gravity in Palatini formalism and its chaotic inflation

    NASA Astrophysics Data System (ADS)

    Kaewkhao, Narakorn; Gumjudpai, Burin

    2018-06-01

    We consider, in the Palatini formalism, a modified gravity in which the scalar field derivative couples to the Einstein tensor. In this scenario, the Ricci scalar, Ricci tensor, and Einstein tensor are functions of the connection field. As a result, the connection field gives rise to the relation hμν = f gμν between the effective metric hμν and the usual metric gμν, where f = 1 − (κ/2) ∂αϕ ∂^αϕ. In the FLRW universe, the NMDC coupling constant is limited to the range −2/ϕ̇² < κ ≤ ∞, preserving the Lorentz signature of the effective metric. In the slow-roll regime, κ < 0 forbids the graviton from traveling at superluminal speed. The effective gravitational coupling and the entropy of the black hole's apparent horizon are derived. In the case of negative coupling, acceleration can occur even with w_eff > −1/3. Power-law potentials of chaotic inflation are considered. For V ∝ ϕ² and V ∝ ϕ⁴, it is possible to obtain a tensor-to-scalar ratio lower than that of GR, satisfying r < 0.12 as constrained by Planck 2015 (Ade et al., 2016). The V ∝ ϕ² case yields an acceptable range of spectral index and r values. The quartic potential's spectral index is disfavored by the Planck results. The viable range of κ for the V ∝ ϕ² case lies in the positive region, resulting in lower black hole entropy, a superluminal effective metric, a greater amount of inflation, avoidance of a super-Planckian initial field value, and a stronger gravitational constant.

  5. Development of a multimetric index for fish assemblages in a cold tailwater in Tennessee

    USGS Publications Warehouse

    Ivasauskas, Tomas J.; Bettoli, Phillip William

    2014-01-01

    Tailwaters downstream of hypolimnetic-release hydropeaking dams exhibit a unique combination of stressors that affects the structure and function of resident fish assemblages. We developed a statistically and biologically defensible multimetric index of fish assemblages for the Caney Fork River below Center Hill Dam, Tennessee. Fish assemblages were sampled at five sites using boat-mounted and backpack electrofishing gear from fall 2009 through summer 2011. A multivariate statistical approach was used to select metrics that best reflected the downstream gradients in abiotic variables. Five metrics derived from boat electrofishing samples and four metrics derived from backpack electrofishing samples were selected for incorporation into the index based on their high correlation with environmental data. The nine metrics demonstrated predictable patterns of increase or decrease with increasing distance downstream of the dam. The multimetric index generally exhibited a pattern of increasing scores with increasing distance from the dam, indicating a downstream recovery gradient in fish assemblage composition. The index can be used to monitor anticipated changes in the fish communities of the Caney Fork River when repairs to Center Hill Dam are completed later this decade, resulting in altered dam operations.

  6. Cognitive Performance Scores for the Pediatric Automated Neuropsychological Assessment Metrics in Childhood-Onset Systemic Lupus Erythematosus.

    PubMed

    Vega-Fernandez, Patricia; Vanderburgh White, Shana; Zelko, Frank; Ruth, Natasha M; Levy, Deborah M; Muscal, Eyal; Klein-Gitelman, Marisa S; Huber, Adam M; Tucker, Lori B; Roebuck-Spencer, Tresa; Ying, Jun; Brunner, Hermine I

    2015-08-01

    To develop and initially validate a global cognitive performance score (CPS) for the Pediatric Automated Neuropsychological Assessment Metrics (PedANAM) to serve as a screening tool of cognition in childhood lupus. Patients (n = 166) completed the 9 subtests of the PedANAM battery, each of which provides 3 principal performance parameters (accuracy, mean reaction time for correct responses, and throughput). Cognitive ability was measured by formal neurocognitive testing or estimated by the Pediatric Perceived Cognitive Function Questionnaire-43 to determine the presence or absence of neurocognitive dysfunction (NCD). A subset of the data was used to develop 4 candidate PedANAM-CPS indices with supervised or unsupervised statistical approaches: PedANAM-CPS(UWA), i.e., unweighted averages of the accuracy scores of all PedANAM subtests; PedANAM-CPS(PCA), i.e., accuracy scores of all PedANAM subtests weighted through principal components analysis; PedANAM-CPS(logit), i.e., algorithm derived from logistic models to estimate NCD status based on the accuracy scores of all of the PedANAM subtests; and PedANAM-CPS(multiscore), i.e., algorithm derived from logistic models to estimate NCD status based on select PedANAM performance parameters. PedANAM-CPS candidates were validated using the remaining data. PedANAM-CPS indices were moderately correlated with each other (|r| > 0.65). All of the PedANAM-CPS indices discriminated children by NCD status across data sets (P < 0.036). The PedANAM-CPS(multiscore) had the highest area under the receiver operating characteristic curve (AUC) across all data sets for identifying NCD status (AUC > 0.74), followed by the PedANAM-CPS(logit), the PedANAM-CPS(PCA), and the PedANAM-CPS(UWA), respectively. Based on preliminary validation and considering ease of use, the PedANAM-CPS(multiscore) and the PedANAM-CPS(PCA) appear to be best suited as global measures of PedANAM performance. © 2015, American College of Rheumatology.
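
    A sketch of two of the candidate composites on a hypothetical accuracy matrix: an unweighted average (as in CPS-UWA) and a first-principal-component weighting (as in CPS-PCA).

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        acc = rng.uniform(0.5, 1.0, size=(166, 9))   # patients x 9 subtest accuracies

        cps_uwa = acc.mean(axis=1)                   # unweighted average
        cps_pca = PCA(n_components=1).fit_transform(acc - acc.mean(0)).ravel()
        # Note the sign of PC1 is arbitrary and may need flipping to align
        # with "higher = better"; the abstract reports |r| > 0.65 between
        # candidate indices on the real data.
        print(np.corrcoef(cps_uwa, cps_pca)[0, 1])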

  7. Vector autoregressive models: A Gini approach

    NASA Astrophysics Data System (ADS)

    Mussard, Stéphane; Ndiaye, Oumar Hamady

    2018-02-01

    In this paper, it is proven that the usual VAR models may be estimated in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers. As a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is based on the ℓ1 norm. Also, impulse response functions and Gini decompositions for forecast errors are introduced. Finally, Granger causality tests are properly derived based on U-statistics.

  8. New Worlds Observer Telescope and Instrument Optical Design Concepts

    NASA Technical Reports Server (NTRS)

    Howard, Joseph; Kilston, Steve; Kendrick, Steve

    2008-01-01

    Optical design concepts for the telescope and instrumentation for NASA's New Worlds Observer program are presented. First order parameters are derived from the science requirements, and estimated performance metrics are shown using optical models. A four meter multiple channel telescope is discussed, as well as a suite of science instrument concepts. Wide field instrumentation (imager and spectrograph) would be accommodated by a three-mirror anastigmat telescope design. Planet finding and characterization would use a separate channel which is picked off after the first two mirrors (primary and secondary). Guiding concepts are also discussed.

  9. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (e.g., quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities, the probability a facility is in a top quantile, and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (e.g., a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (e.g., being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
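
    A sketch of the probability metric on synthetic posterior draws: the share of MCMC replications in which each facility's composite score falls in the top quintile.

        import numpy as np

        rng = np.random.default_rng(4)
        n_fac, n_draws = 112, 4000
        # synthetic posterior draws of each facility's composite quality score
        draws = rng.normal(rng.normal(0, 1, n_fac), 0.5, size=(n_draws, n_fac))

        cut = np.quantile(draws, 0.8, axis=1, keepdims=True)  # top-quintile cut per draw
        p_top = (draws >= cut).mean(axis=0)                   # P(top quintile) per facility
        print(np.round(np.sort(p_top)[-5:], 3))               # five strongest facilities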

  10. A Metric on Phylogenetic Tree Shapes

    PubMed Central

    Plazzotta, G.

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees’ branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. PMID:28472435

  11. On the new metrics for IMRT QA verification.

    PubMed

    Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo

    2016-11-01

    The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria in the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans covering several anatomical sites were analyzed. All of them were verified prior to the treatment, using conventional metrics, and retrospectively after the treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a Monte Carlo redundant calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index is defined from a radiobiological point of view to classify plans by their robustness against dose changes. Dose difference metrics can be condensed into a single parameter: the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristic (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is due to the fact that the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore biomathematical treatment outcome models are insensitive to large dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models which provide a better correlation is proposed. In any case, it is better to choose robust plans from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and performs well in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers with respect to dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate whether a plan is likely to be rejected. Conventional verification should be replaced by the new metrics, which are clinically more relevant.
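
    A sketch of the ROC step mentioned above: choosing an acceptance/rejection cutoff for a scalar metric by maximizing the Youden index, on synthetic plan scores (1 = plan should be rejected).

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(5)
        rejected = np.r_[np.zeros(40), np.ones(18)]               # ground truth
        score = np.r_[rng.normal(0.30, 0.10, 40),                 # accepted plans
                      rng.normal(0.55, 0.15, 18)]                 # rejected plans

        fpr, tpr, thr = roc_curve(rejected, score)
        best = np.argmax(tpr - fpr)                               # Youden's J
        print("optimal cutoff:", thr[best])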

  12. Trend analysis of time-series phenology of North America derived from satellite data

    USGS Publications Warehouse

    Reed, B.C.

    2006-01-01

    Remote sensing information has been used in studies of the seasonal dynamics (phenology) of the land surface since the 1980s. While our understanding of remote sensing phenology is still in development, it is regarded as a key to understanding land-surface processes over large areas. Phenologic metrics, including start of season, end of season, duration of season, and seasonally integrated greenness, were derived from 8 km advanced very high resolution radiometer (AVHRR) data over North America spanning the years 1982-2003. Trend analysis was performed on annual summaries of the metrics to determine areas with increasing or decreasing growing season trends for the time period under study. Results show a trend toward earlier starts of season in limited areas of the mixed boreal forest, and a trend toward later end of season in well-defined areas of New England and southeastern Canada. Results in Saskatchewan, Canada, include a trend toward longer duration of season over a well-defined area, principally as a result of regional changes in land use practices. Changing seasonality appears to be an integrated response to a complex of factors, including climate change, but also, in many places, changes in land use practices. Copyright © 2006 by V. H. Winston & Son, Inc. All rights reserved.
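
    A sketch of a per-pixel trend test of the kind described, here a robust Theil-Sen slope on a synthetic annual start-of-season series in day-of-year units.

        import numpy as np
        from scipy.stats import theilslopes

        years = np.arange(1982, 2004)
        rng = np.random.default_rng(6)
        sos = 120 - 0.4 * (years - 1982) + rng.normal(0, 3, years.size)

        slope, intercept, lo, hi = theilslopes(sos, years)
        print(f"trend = {slope:.2f} days/yr (95% CI {lo:.2f} to {hi:.2f})")
        # a negative slope with a CI excluding zero indicates earlier onset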

  13. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  14. An Evaluation of the IntelliMetric[SM] Essay Scoring System

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine

    2006-01-01

    This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…

  15. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.

  16. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
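
    The DCeP ratio itself is simple to compute once "useful work" is defined; in this sketch the task counts and usefulness weights are hypothetical operator choices.

        def dcep(tasks, energy_kwh):
            """DCeP = useful work produced / energy consumed producing it.

            tasks: list of (count_completed, usefulness_weight) pairs, where
            the weights encode how much each task type is valued."""
            useful_work = sum(n * w for n, w in tasks)
            return useful_work / energy_kwh

        print(dcep([(1200, 1.0), (300, 2.5)], energy_kwh=850.0))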

  17. Graph Metrics of Structural Brain Networks in Individuals with Schizophrenia and Healthy Controls: Group Differences, Relationships with Intelligence, and Genetics.

    PubMed

    Yeo, Ronald A; Ryman, Sephira G; van den Heuvel, Martijn P; de Reus, Marcel A; Jung, Rex E; Pommy, Jessica; Mayer, Andrew R; Ehrlich, Stefan; Schulz, S Charles; Morrow, Eric M; Manoach, Dara; Ho, Beng-Choon; Sponheim, Scott R; Calhoun, Vince D

    2016-02-01

    One of the most prominent features of schizophrenia is relatively lower general cognitive ability (GCA). An emerging approach to understanding the roots of variation in GCA relies on network properties of the brain. In this multi-center study, we determined global characteristics of brain networks using graph theory and related these to GCA in healthy controls and individuals with schizophrenia. Participants (N=116 controls, 80 patients with schizophrenia) were recruited from four sites. GCA was represented by the first principal component of a large battery of neurocognitive tests. Graph metrics were derived from diffusion-weighted imaging. The global metrics of longer characteristic path length and reduced overall connectivity predicted lower GCA across groups, and group differences were noted for both variables. Measures of clustering, efficiency, and modularity did not differ across groups or predict GCA. Follow-up analyses investigated three topological types of connectivity: connections among high-degree "rich club" nodes, "feeder" connections to these rich club nodes, and "local" connections not involving the rich club. Rich club and local connectivity predicted performance across groups. In a subsample (N=101 controls, 56 patients), a genetic measure reflecting mutation load, based on rare copy number deletions, was associated with longer characteristic path length. Results highlight the importance of characteristic path lengths and rich club connectivity for GCA and provide no evidence for group differences in the relationships between graph metrics and GCA.
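
    A sketch of two of the global graph metrics used, characteristic path length and the rich-club coefficient, computed on a random stand-in network rather than a diffusion-derived one.

        import networkx as nx

        G = nx.erdos_renyi_graph(90, 0.12, seed=7)   # stand-in brain network

        cpl = nx.average_shortest_path_length(G)     # characteristic path length
        rc = nx.rich_club_coefficient(G, normalized=False)
        print(cpl, rc[max(rc)])                      # coefficient at highest degree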

  18. Automated grading of lumbar disc degeneration via supervised distance metric learning

    NASA Astrophysics Data System (ADS)

    He, Xiaoxu; Landis, Mark; Leung, Stephanie; Warrington, James; Shmuilovich, Olga; Li, Shuo

    2017-03-01

    Lumbar disc degeneration (LDD) is a common age-associated condition related to low back pain, and its consequences are responsible for over 90% of spine surgical procedures. In clinical practice, grading LDD from MRI is a necessary step in making a suitable treatment plan. This step relies purely on physicians' manual inspection, which makes it tedious and inefficient. An automated method for grading LDD is highly desirable. However, the technical implementation faces a major challenge from class ambiguity, typical of medical image classification problems with a large number of classes. This challenge derives from the complexity and diversity of medical images, which lead to serious class overlap and make discriminating among classes difficult. To solve this problem, we proposed an automated grading approach based on supervised distance metric learning to classify the input discs into four class labels (0: normal, 1: slight, 2: marked, 3: severe). By learning distance metrics from labeled instances, an optimal distance metric is modeled with two attractive properties: (1) it keeps images from the same class close, and (2) it keeps images from different classes far apart. The experiments, performed on 93 subjects, demonstrated the superiority of our method, with accuracy 0.9226, sensitivity 0.9655, specificity 0.9083, and F-score 0.8615. With our approach, physicians are freed from tedious manual grading and patients can be provided an effective treatment plan.
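
    A hedged sketch of a supervised metric-learning pipeline: scikit-learn's NeighborhoodComponentsAnalysis stands in for the paper's specific algorithm, and the disc features and grades are synthetic.

        import numpy as np
        from sklearn.neighbors import (KNeighborsClassifier,
                                       NeighborhoodComponentsAnalysis)
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(8)
        X = rng.normal(size=(200, 16))     # hypothetical disc shape/intensity features
        y = rng.integers(0, 4, 200)        # grades 0-3

        # Learn a linear transform that pulls same-grade discs together and
        # pushes different grades apart, then classify with k-NN in that space.
        model = Pipeline([("nca", NeighborhoodComponentsAnalysis(random_state=0)),
                          ("knn", KNeighborsClassifier(n_neighbors=5))])
        print(cross_val_score(model, X, y, cv=5).mean())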

  19. Kinematics effectively delineate accomplished users of endovascular robotics with a physical training model.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2015-02-01

    Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <2 hours to moderate-extensive experience with >20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. This work will help identify the learning curve and specific movements that translate to expert robotic navigation. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
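
    One of the smoothness measures named above, spectral arc length, can be sketched as follows (simplified from the SPARC metric of Balasubramanian et al.; the sampling rate and cutoff are illustrative, not the study's settings):

    ```python
    # Hedged sketch: spectral arc length of a catheter-tip speed profile.
    # More negative values indicate less smooth, less proficient motion.
    import numpy as np

    def spectral_arc_length(speed, fs, f_max=10.0, n_fft=4096):
        spectrum = np.abs(np.fft.rfft(speed, n=n_fft))
        freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
        sel = freqs <= f_max                       # keep the movement band only
        mag = spectrum[sel] / spectrum[sel].max()  # normalized magnitude spectrum
        f = freqs[sel] / f_max                     # normalized frequency axis
        return -np.sum(np.sqrt(np.diff(f) ** 2 + np.diff(mag) ** 2))

    fs = 100.0                                     # assumed 100 Hz tracker rate
    t = np.arange(0, 2, 1 / fs)
    speed = np.abs(np.sin(np.pi * t)) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(spectral_arc_length(speed, fs))
    ```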

  20. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computational burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, from which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but reduced computation time to roughly one third (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
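
    The two-stage logic reads cleanly as pseudocode; the sketch below is schematic (scores, sizes, and names are illustrative, and the rigorous choice of the augmented subset size is the paper's contribution):

    ```python
    # Schematic of two-stage atlas selection: a cheap relevance metric trims the
    # collection; only the survivors incur full-fledged registration.
    def two_stage_select(atlases, target, cheap_score, full_score, m_augmented, k_fusion):
        ranked = sorted(atlases, key=lambda a: cheap_score(a, target), reverse=True)
        augmented = ranked[:m_augmented]   # sized so desired atlases survive w.h.p.
        refined = sorted(augmented, key=lambda a: full_score(a, target), reverse=True)
        return refined[:k_fusion]          # final fusion set

    atlases = list(range(30))              # toy stand-ins for 30 atlases
    score = lambda a, t: -abs(a - t)       # toy relevance: closeness to target id
    print(two_stage_select(atlases, 12, score, score, m_augmented=10, k_fusion=5))
    ```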

  1. Comparison of Image Restoration Methods for Lunar Epithermal Neutron Emission Mapping

    NASA Technical Reports Server (NTRS)

    McClanahan, T. P.; Ivatury, V.; Milikh, G.; Nandikotkur, G.; Puetter, R. C.; Sagdeev, R. Z.; Usikov, D.; Mitrofanov, I. G.

    2009-01-01

    Orbital measurements of neutrons by the Lunar Exploring Neutron Detector (LEND) onboard the Lunar Reconnaissance Orbiter are being used to quantify the spatial distribution of near-surface hydrogen (H). Inferred H concentration maps have low signal-to-noise (SN), and image restoration (IR) techniques are being studied to enhance results. A single-blind, two-phase study is described in which four teams of researchers independently developed image restoration techniques optimized for LEND data. Synthetic lunar epithermal neutron emission maps were derived from LEND simulations. These data were used as ground truth to determine the relative quantitative performance of the IR methods versus a default denoising (smoothing) technique. We review the factors influencing orbital remote sensing of neutrons emitted from the lunar surface and used them to develop a database of synthetic "true" maps for performance evaluation. An independent prior training phase was implemented for each technique to ensure methods were optimized before the blind trial. Method performance was determined using several regional root-mean-square error metrics specific to epithermal signals of interest. Results indicate unbiased IR methods realize only small signal gains in most of the tested metrics. This suggests other physically based modeling assumptions are required to produce appreciable signal gains in similar low-SN IR applications.
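
    A toy sketch of the kind of evaluation described, assuming synthetic arrays in place of LEND maps: regional root-mean-square error of a restored map against the ground-truth map over a region-of-interest mask:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.random((90, 180))                  # synthetic "true" emission map
    restored = truth + 0.05 * rng.normal(size=truth.shape)
    mask = truth < 0.2                             # e.g., suppressed-emission regions

    rmse = np.sqrt(np.mean((restored[mask] - truth[mask]) ** 2))
    print(rmse)
    ```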

  2. Comparison of Aerial and Terrestrial Remote Sensing Techniques for Quantifying Forest Canopy Structural Complexity and Estimating Net Primary Productivity

    NASA Astrophysics Data System (ADS)

    Fahey, R. T.; Tallant, J.; Gough, C. M.; Hardiman, B. S.; Atkins, J.; Scheuermann, C. M.

    2016-12-01

    Canopy structure can be an important driver of forest ecosystem functioning, affecting factors such as radiative transfer and light use efficiency, and consequently net primary production (NPP). Both above- (aerial) and below-canopy (terrestrial) remote sensing techniques are used to assess canopy structure, and each has advantages and disadvantages. Aerial techniques can cover large geographical areas and provide detailed information on canopy surface and canopy height, but are generally unable to quantitatively assess interior canopy structure. Terrestrial methods provide high resolution information on interior canopy structure and can be cost-effectively repeated, but are limited to very small footprints. Although these methods are often utilized to derive similar metrics (e.g., rugosity, LAI) and to address equivalent ecological questions and relationships (e.g., link between LAI and productivity), rarely are inter-comparisons made between techniques. Our objective is to compare methods for deriving canopy structural complexity (CSC) metrics and to assess the capacity of commonly available aerial remote sensing products (and combinations) to match terrestrially-sensed data. We also assess the potential to combine CSC metrics with image-based analysis to predict plot-based NPP measurements in forests of different ages and different levels of complexity. We use combinations of data from drone-based imagery (RGB, NIR, Red Edge), aerial LiDAR (commonly available medium-density leaf-off), terrestrial scanning LiDAR, portable canopy LiDAR, and a permanent plot network - all collected at the University of Michigan Biological Station. Our results will highlight the potential for deriving functionally meaningful CSC metrics from aerial imagery, LiDAR, and combinations of data sources. We will also present results of modeling focused on predicting plot-level NPP from combinations of image-based vegetation indices (e.g., NDVI, EVI) with LiDAR- or image-derived metrics of CSC (e.g., rugosity, porosity), canopy density (e.g., LAI), and forest structure (e.g., canopy height). This work builds toward future efforts that will use other data combinations, such as those available at NEON sites, and could be used to inform and test popular ecosystem models (e.g., ED2) incorporating structure.

  3. Pan Sharpening Quality Investigation of Turkish In-Operation Remote Sensing Satellites: Applications with Rasat and GÖKTÜRK-2 Images

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar

    2016-10-01

    Recently, two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were successfully launched by the Republic of Turkey. RASAT has a 7.5 m panchromatic band and 15 m visible bands, whereas GÖKTÜRK-2 has a 2.5 m panchromatic band and 5 m VNIR (Visible and Near Infrared) bands. Bands at these different resolutions can be fused by pan-sharpening methods, an important application area of optical remote sensing imagery, so that the high geometric resolution of the panchromatic band and the high spectral resolution of the VNIR bands are merged. Many pan-sharpening methods exist in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images were first generated using the most popular pan-sharpening methods: IHS, Brovey, and PCA. This was followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM), and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) metrics. The SharpQ tool, developed in MATLAB, was used for generating the pan-sharpened images and computing the metrics. According to the metrics, the PCA-derived pan-sharpened image is the most similar to the multispectral image for RASAT, and the Brovey-derived pan-sharpened image is the most similar for GÖKTÜRK-2. Finally, the pan-sharpened images were evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest, and flat areas) by a group of operators experienced in remote sensing imagery.
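
    Two of the metrics named above are compact enough to sketch directly; the arrays below are toy stand-ins for reference and pan-sharpened bands, and the resolution ratio is illustrative:

    ```python
    # CC (mean per-band Pearson correlation) and ERGAS for a fused image vs. the
    # reference multispectral image.
    import numpy as np

    def cc(ref, fused):
        return np.mean([np.corrcoef(r.ravel(), f.ravel())[0, 1]
                        for r, f in zip(ref, fused)])

    def ergas(ref, fused, ratio):
        """ratio = panchromatic / multispectral pixel size (e.g., 2.5/5)."""
        terms = [(np.sqrt(np.mean((r - f) ** 2)) / np.mean(r)) ** 2
                 for r, f in zip(ref, fused)]
        return 100.0 * ratio * np.sqrt(np.mean(terms))

    rng = np.random.default_rng(0)
    ref = rng.random((4, 64, 64)) + 0.5            # 4 toy VNIR bands
    fused = ref + 0.01 * rng.normal(size=ref.shape)
    print(cc(ref, fused), ergas(ref, fused, ratio=2.5 / 5.0))
    ```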

  4. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
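
    For orientation, the classical pooling step that the paper's alternatives improve upon for small studies is the DerSimonian-Laird random-effects estimator; a sketch with toy inputs:

    ```python
    # DerSimonian-Laird random-effects pooling of per-study estimates y with
    # within-study variances v. Shown for illustration; the paper discusses
    # approaches better suited to very small studies.
    import numpy as np

    def dersimonian_laird(y, v):
        w = 1.0 / v
        y_fe = np.sum(w * y) / np.sum(w)           # fixed-effect pooled estimate
        q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
        w_re = 1.0 / (v + tau2)
        est = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return est, se, tau2

    y = np.array([0.12, 0.08, 0.15, 0.10])         # e.g., log repeatability coefficients
    v = np.array([0.003, 0.002, 0.006, 0.004])
    print(dersimonian_laird(y, v))
    ```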

  5. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  6. Research on cardiovascular disease prediction based on distance metric learning

    NASA Astrophysics Data System (ADS)

    Ni, Zhuang; Liu, Kui; Kang, Guixia

    2018-04-01

    Distance metric learning algorithms have been widely applied to medical diagnosis and have shown their strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method that treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but it does not consider the locality of the data distribution. In this paper, we propose a new distance metric algorithm, COS-SUBLMNN, which adopts a cosine metric within LMNN and pays more attention to local features of the data, overcoming this shortcoming of LMNN and improving classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, demonstrating the effectiveness of the CVD risk prediction model based on COS-SUBLMNN.
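
    COS-SUBLMNN itself is not reproduced here, but its cosine-metric ingredient is easy to illustrate: nearest-neighbour classification under cosine distance with scikit-learn, on synthetic stand-ins for the patient vectors:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))        # synthetic patient feature vectors
    y = rng.integers(0, 2, size=200)      # CVD / no-CVD labels

    knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
    print(cross_val_score(knn, X, y, cv=5).mean())
    ```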

  7. Vector-based navigation using grid-like representations in artificial agents.

    PubMed

    Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan

    2018-05-01

    Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
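
    Path integration, the computation at the heart of this account, is simply dead reckoning; a toy numpy sketch (unrelated to the paper's recurrent network) integrating self-motion into a position estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.1
    heading = np.cumsum(rng.normal(0, 0.2, size=200))   # random-walk heading (rad)
    speed = np.abs(rng.normal(0.3, 0.1, size=200))      # forward speed (m/s)

    velocity = np.stack([speed * np.cos(heading), speed * np.sin(heading)], axis=1)
    position = np.cumsum(velocity * dt, axis=0)         # integrated position estimate
    print(position[-1])                                 # where the agent believes it is
    ```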

  8. Anastomotic leak after colorectal resection: A population-based study of risk factors and hospital variation.

    PubMed

    Nikolian, Vahagn C; Kamdar, Neil S; Regenbogen, Scott E; Morris, Arden M; Byrn, John C; Suwanabol, Pasithorn A; Campbell, Darrell A; Hendren, Samantha

    2017-06-01

    Anastomotic leak is a major source of morbidity in colorectal operations and has become an area of interest in performance metrics. It is unclear whether anastomotic leak is associated primarily with surgeons' technical performance or explained better by patient characteristics and institutional factors. We sought to establish if anastomotic leak could serve as a valid quality metric in colorectal operations by evaluating provider variation after adjusting for patient factors. We performed a retrospective cohort study of colorectal resection patients in the Michigan Surgical Quality Collaborative. Clinically relevant patient and operative factors were tested for association with anastomotic leak. Hierarchical logistic regression was used to derive risk-adjusted rates of anastomotic leak. Of 9,192 colorectal resections, 244 (2.7%) had a documented anastomotic leak. The incidence of anastomotic leak was 3.0% for patients with pelvic anastomoses and 2.5% for those with intra-abdominal anastomoses. Multivariable analysis showed that a greater operative duration, male sex, body mass index >30 kg/m², tobacco use, chronic immunosuppressive medications, thrombocytosis (platelet count >400 × 10⁹/L), and urgent/emergency operations were independently associated with anastomotic leak (C-statistic = 0.75). After accounting for patient and procedural risk factors, 5 hospitals had a significantly greater incidence of postoperative anastomotic leak. This population-based study shows that risk factors for anastomotic leak include male sex, obesity, tobacco use, immunosuppression, thrombocytosis, greater operative duration, and urgent/emergency operation; models including these factors predict most of the variation in anastomotic leak rates. This study suggests that anastomotic leak can serve as a valid metric that can identify opportunities for quality improvement. Copyright © 2017 Elsevier Inc. All rights reserved.
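
    The study used hierarchical logistic regression; a simplified, non-hierarchical stand-in shows the risk-adjustment idea, comparing each hospital's observed leak count with the count expected from patient-level factors (all data synthetic):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 5))                    # patient risk factors
    hospital = rng.integers(0, 10, size=n)
    y = (rng.random(n) < 0.027 * np.exp(0.3 * X[:, 0])).astype(int)  # ~2.7% base rate

    model = LogisticRegression().fit(X, y)
    expected = model.predict_proba(X)[:, 1]
    for h in range(10):
        m = hospital == h
        print(h, round(y[m].sum() / expected[m].sum(), 2))  # O/E > 1: excess leaks
    ```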

  9. A machine learning approach to multi-level ECG signal quality classification.

    PubMed

    Li, Qiao; Rajagopalan, Cadathur; Clifford, Gari D

    2014-12-01

    Current electrocardiogram (ECG) signal quality assessment studies have aimed to provide a two-level classification: clean or noisy. However, clinical usage demands more specific noise-level classification for varying applications. This work outlines a five-level ECG signal quality classification algorithm. A total of 13 signal quality metrics were derived from segments of ECG waveforms, which were labeled by experts. A support vector machine (SVM) was trained to perform the classification, tested on a simulated dataset, and validated using data from the MIT-BIH arrhythmia database (MITDB). The simulated training and test datasets were created by selecting clean segments of the ECG in the 2011 PhysioNet/Computing in Cardiology Challenge database and adding three types of real ECG noise at different signal-to-noise ratio (SNR) levels from the MIT-BIH Noise Stress Test Database (NSTDB). The MITDB was re-annotated for five levels of signal quality. Different combinations of the 13 metrics were trained and tested on the simulated datasets, and the combination producing the highest classification accuracy was selected and validated on the MITDB. Performance was assessed using classification accuracy (Ac) and a single-class overlap accuracy (OAc), which treats classification into an adjacent class as acceptable. An Ac of 80.26% and an OAc of 98.60% on the test set were obtained by selecting 10 metrics, while 57.26% (Ac) and 94.23% (OAc) were obtained on the unseen MITDB validation data without retraining. With fivefold cross-validation, an Ac of 88.07±0.32% and an OAc of 99.34±0.07% were obtained on the validation folds of the MITDB. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
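
    The classification step itself is standard; a minimal sketch with scikit-learn, using synthetic stand-ins for the 13 quality metrics and the five expert labels:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 13))        # the 13 per-segment quality metrics
    y = rng.integers(0, 5, size=500)      # five quality levels

    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    print(cross_val_score(svm, X, y, cv=5).mean())
    ```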

  10. Validation of a virtual reality-based robotic surgical skills curriculum.

    PubMed

    Connolly, Michael; Seligman, Johnathan; Kastenmeier, Andrew; Goldblatt, Matthew; Gould, Jon C

    2014-05-01

    The clinical application of robotic-assisted surgery (RAS) is rapidly increasing. The da Vinci Surgical System™ is currently the only commercially available RAS system. The skills necessary to perform robotic surgery are unique from those required for open and laparoscopic surgery. A validated laparoscopic surgical skills curriculum (fundamentals of laparoscopic surgery or FLS™) has transformed the way surgeons acquire laparoscopic skills. There is a need for a similar skills training and assessment tool specific for robotic surgery. Based on previously published data and expert opinion, we developed a robotic skills curriculum. We sought to evaluate this curriculum for evidence of construct validity (ability to discriminate between users of different skill levels). Four experienced surgeons (>20 RAS) and 20 novice surgeons (first-year medical students with no surgical or RAS experience) were evaluated. The curriculum comprised five tasks utilizing the da Vinci™ Skills Simulator (Pick and Place, Camera Targeting 2, Peg Board 2, Matchboard 2, and Suture Sponge 3). After an orientation to the robot and a period of acclimation in the simulator, all subjects completed three consecutive repetitions of each task. Computer-derived performance metrics included time, economy of motion, master work space, instrument collisions, excessive force, distance of instruments out of view, drops, missed targets, and overall scores (a composite of all metrics). Experienced surgeons significantly outperformed novice surgeons in most metrics. Statistically significant differences were detected for each task in regards to mean overall scores and mean time (seconds) to completion. The curriculum we propose is a valid method of assessing and distinguishing robotic surgical skill levels on the da Vinci Si™ Surgical System. Further study is needed to establish proficiency levels and to demonstrate that training on the simulator with the proposed curriculum leads to improved robotic surgical performance in the operating room.

  11. Feasibility of and Rationale for the Collection of Orthopaedic Trauma Surgery Quality of Care Metrics.

    PubMed

    Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip

    2017-06-01

    Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate for variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and cutoff values for each metric.

  12. Monitoring the Effects of Forest Restoration Treatments on Post-Fire Vegetation Recovery with MODIS Multitemporal Data

    PubMed Central

    van Leeuwen, Willem J. D.

    2008-01-01

    This study examines how satellite based time-series vegetation greenness data and phenological measurements can be used to monitor and quantify vegetation recovery after wildfire disturbances and examine how pre-fire fuel reduction restoration treatments impact fire severity and impact vegetation recovery trajectories. Pairs of wildfire affected sites and a nearby unburned reference site were chosen to measure the post-disturbance recovery in relation to climate variation. All site pairs were chosen in forested uplands in Arizona and were restricted to the area of the Rodeo-Chediski fire that occurred in 2002. Fuel reduction treatments were performed in 1999 and 2001. The inter-annual and seasonal vegetation dynamics before, during, and after wildfire events can be monitored using a time series of biweekly composited MODIS NDVI (Moderate Resolution Imaging Spectroradiometer - Normalized Difference Vegetation Index) data. Time series analysis methods included difference metrics, smoothing filters, and fitting functions that were applied to extract seasonal and inter-annual change and phenological metrics from the NDVI time series data from 2000 to 2007. Pre- and post-fire Landsat data were used to compute the Normalized Burn Ratio (NBR) and examine burn severity at the selected sites. The phenological metrics (pheno-metrics) included the timing and greenness (i.e. NDVI) for the start, peak and end of the growing season as well as proxy measures for the rate of green-up and senescence and the annual vegetation productivity. Pre-fire fuel reduction treatments resulted in lower fire severity, which reduced annual productivity much less than untreated areas within the Rodeo-Chediski fire perimeter. The seasonal metrics were shown to be useful for estimating the rate of post-fire disturbance recovery and the timing of phenological greenness phases. The use of satellite time series NDVI data and derived pheno-metrics show potential for tracking vegetation cover dynamics and successional changes in response to drought, wildfire disturbances, and forest restoration treatments in fire-suppressed forests. PMID:27879809
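
    The pheno-metrics described can be sketched from a single year of biweekly NDVI composites; thresholds and values below are illustrative, not the study's fitted-function approach:

    ```python
    # Simple pheno-metrics: season start/end (half-amplitude crossings), peak
    # timing, and a productivity proxy (summed NDVI above the baseline).
    import numpy as np

    ndvi = np.array([0.25, 0.26, 0.28, 0.33, 0.42, 0.55, 0.66, 0.72, 0.74, 0.71,
                     0.66, 0.58, 0.50, 0.44, 0.38, 0.33, 0.30, 0.28, 0.27, 0.26,
                     0.25, 0.25, 0.25, 0.25, 0.25, 0.25])   # 26 biweekly composites

    base, peak = ndvi.min(), ndvi.max()
    half = base + 0.5 * (peak - base)              # half-amplitude threshold
    above = np.where(ndvi >= half)[0]
    print("start:", above[0], "peak:", int(np.argmax(ndvi)), "end:", above[-1],
          "productivity proxy:", round(float(np.sum(ndvi - base)), 2))
    ```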

  13. Enhancing the Simplified Surface Energy Balance (SSEB) Approach for Estimating Landscape ET: Validation with the METRIC model

    USGS Publications Warehouse

    Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and with an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used on a wide range of elevation (more successfully up to 2000 m) to detect anomalies in space and time for water resources management and monitoring such as for drought early warning systems in data scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET.
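
    The core SSEB calculation is compact: each pixel's ET fraction comes from its surface temperature relative to hot (dry) and cold (well-watered) reference pixels, scaled by reference ET. A sketch with illustrative values:

    ```python
    import numpy as np

    ts = np.array([[296.0, 301.5, 308.2],
                   [299.3, 305.0, 310.7]])    # land surface temperature (K)
    t_cold, t_hot = 295.0, 312.0              # cold / hot reference pixels
    eto = 7.0                                 # reference ET (mm/day)

    etf = np.clip((t_hot - ts) / (t_hot - t_cold), 0.0, 1.05)  # ET fraction
    eta = etf * eto                                            # actual ET (mm/day)
    print(np.round(eta, 2))
    ```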

  14. Enhancing the Simplified Surface Energy Balance (SSEB) approach for estimating landscape ET: Validation with the METRIC model

    USGS Publications Warehouse

    Senay, G.B.; Budde, M.E.; Verdin, J.P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000m) and with an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used on a wide range of elevation (more successfully up to 2000m) to detect anomalies in space and time for water resources management and monitoring such as for drought early warning systems in data scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET. © 2010.

  15. MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.

    2007-01-08

    The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implementation on an accelerated co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problems of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
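
    The kernel being accelerated is itself short; a numpy sketch of the classic hyperspectral matched filter (background-whitened projection onto a target signature), with synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    bands, pixels = 64, 10000
    cube = rng.normal(size=(pixels, bands))       # background spectra
    target = rng.normal(size=bands)               # known target signature

    mu = cube.mean(axis=0)
    r = np.cov(cube, rowvar=False)                # background covariance
    r_inv = np.linalg.inv(r + 1e-6 * np.eye(bands))  # regularized inverse
    d = target - mu
    scores = (cube - mu) @ r_inv @ d / (d @ r_inv @ d)  # per-pixel filter output
    print(scores.shape, scores.mean())
    ```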

  16. Performance specifications and six sigma theory: Clinical chemistry and industry compared.

    PubMed

    Oosterhuis, W P; Severens, M J M J

    2018-04-11

    Analytical performance specifications are crucial in test development and quality control. Although consensus has been reached on the use of biological variation to derive these specifications, no consensus has been reached on which model should be preferred. The Six Sigma concept is widely applied in industry for the quality specification of products and can readily be compared with Six Sigma models in clinical chemistry. However, the models for measurement specifications differ considerably between the two fields: where clinical chemistry uses the sigma metric, industry uses the Number of Distinct Categories instead. In this study, the models in both fields are compared and discussed. Copyright © 2018. Published by Elsevier Inc.
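
    The sigma metric used in clinical chemistry is a one-line calculation (allowable total error, bias, and imprecision, all in percent); the numbers below are illustrative:

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma = (TEa - |bias|) / CV, the standard clinical chemistry form."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # e.g., an assay with TEa 10%, bias 2%, CV 2% runs at 4 sigma
    print(sigma_metric(10.0, 2.0, 2.0))
    ```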

  17. Exact Harmonic Metric for a Uniformly Moving Schwarzschild Black Hole

    NASA Astrophysics Data System (ADS)

    He, Guan-Sheng; Lin, Wen-Bin

    2014-02-01

    The harmonic metric for a Schwarzschild black hole moving with uniform velocity is presented. In the limit of weak field and low velocity, this metric reduces to the post-Newtonian approximation for a single moving point mass. As an application, we derive the dynamics of particles and photons in the weak-field limit for a moving Schwarzschild black hole with arbitrary velocity. It is found that the relativistic motion of the gravitational source can induce an additional centripetal force on the test particle, which may be comparable to or even larger than the conventional Newtonian gravitational force.

  18. Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.

    2003-01-01

    Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.
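
    The abstract does not spell out which interaction metrics were used; the classic process-control example is the relative gain array (RGA), shown here for a toy square gain matrix:

    ```python
    # RGA = G .* inv(G).T (elementwise). Rows and columns each sum to 1;
    # off-diagonal weight signals cross-coupling between control loops.
    import numpy as np

    G = np.array([[1.0, 0.4],
                  [0.3, 0.8]])                    # toy 2x2 plant gain matrix
    rga = G * np.linalg.inv(G).T
    print(rga)
    ```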

  19. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  20. Structural texture similarity metrics for image analysis and retrieval.

    PubMed

    Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L

    2013-07-01

    We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
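
    The point-by-point limitation of SSIM (one of the baselines above) is easy to demonstrate: shifting a texture leaves it perceptually identical yet collapses the SSIM score. A sketch assuming scikit-image and synthetic textures:

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(0)
    tex_a = rng.random((128, 128))
    tex_b = np.roll(tex_a, 3, axis=1)     # same texture, shifted by 3 pixels

    # Low despite "identical" texture: the failure the proposed metrics address.
    print(structural_similarity(tex_a, tex_b, data_range=1.0))
    ```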

  1. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  2. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well over a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics for decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan is based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014), which we solve using several alternative robustness metrics, including both satisficing and regret-based metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insight into the trade-off between average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around it. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
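
    The two metric families contrasted above can be sketched over a plan-by-future performance matrix (values illustrative, higher is better):

    ```python
    import numpy as np

    perf = np.array([[0.9, 0.4, 0.7],     # rows: candidate plans
                     [0.6, 0.6, 0.6],     # columns: plausible futures
                     [0.8, 0.5, 0.5]])

    # Satisficing (domain criterion): fraction of futures meeting a threshold
    satisficing = (perf >= 0.55).mean(axis=1)

    # Regret-based: worst-case shortfall relative to the best plan in each future
    minimax_regret = (perf.max(axis=0) - perf).max(axis=1)   # lower is better

    print(satisficing, minimax_regret)
    ```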

  3. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that characterize the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single metric function that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect set of filters, and zero, for the worst possible set. Results showed that MMOG exhibits a trend more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
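
    The exact MMOG definition is not reproduced here, but the flavour of MOG-style quantities can be sketched as a subspace overlap between filter sensitivities and a reference set, via orthogonal projectors (1 = perfect span, 0 = none):

    ```python
    import numpy as np

    def projector(A):
        q, _ = np.linalg.qr(A)            # orthonormal basis of col(A)
        return q @ q.T

    rng = np.random.default_rng(0)
    ref = rng.normal(size=(31, 3))        # reference sensitivities (e.g., 31-band CMFs)
    filters = rng.normal(size=(31, 3))    # candidate filter set

    overlap = np.trace(projector(ref) @ projector(filters)) / ref.shape[1]
    print(overlap)                        # illustrative MOG-style score in [0, 1]
    ```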

  4. Google walkability: a new tool for local planning and public health research?

    PubMed

    Vargo, Jason; Stone, Brian; Glanz, Karen

    2012-07-01

    We investigate the association of different composite walkability measures with individual walking behaviors to determine whether multicomponent metrics of walkability are more useful for assessing the health impacts of the built environment than single-component measures. We use a previously published composite walkability measure as well as a new measure designed for easier combination of components, which includes 2 metrics obtained from Google data sources. Logistic regression was used to assess the relationship between walking behavior and walkability metrics. Our results suggest that composite measures of walkability are more consistent predictors of walking behavior than single-component measures. Furthermore, a walkability measure developed using free, publicly available data from Google was found to be nearly as effective in predicting walking outcomes as a walkability measure derived without such publicly and nationally available measures. Our findings demonstrate the effectiveness of free and locally relevant data for assessing walkable environments. This facilitates the use of locally derived and adaptive tools for evaluating the health impacts of the built environment.

  5. Equations for Scoring Rules When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document presents equations for scoring rules in a diagnostic and/or prognostic artificial-intelligence software system of the rule-based inference-engine type. The equations define a set of metrics that characterize the evaluation of a rule when data required for the antecedence clause(s) of the rule are missing. The metrics include a primary measure denoted the rule completeness metric (RCM) plus a number of subsidiary measures that contribute to the RCM. The RCM is derived from an analysis of a rule with respect to its truth and a measure of the completeness of its input data. The derivation is such that the truth value of an antecedent is independent of the measure of its completeness. The RCM can be used to compare the degree of completeness of two or more rules with respect to a given set of data. Hence, the RCM can be used as a guide to choosing among rules during the rule-selection phase of operation of the artificial-intelligence system.
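
    The paper's actual RCM equations are not given here; a purely hypothetical sketch illustrates the stated design point that a rule's truth value is kept independent of its input-data completeness:

    ```python
    # Hypothetical illustration only: three-valued rule evaluation with a
    # separate completeness measure over the antecedent clauses.
    def evaluate_rule(clauses, data):
        present = [k in data for k, _ in clauses]
        completeness = sum(present) / len(clauses)        # input-data completeness
        known = [p(data[k]) for (k, p), ok in zip(clauses, present) if ok]
        truth = all(known) if known else None             # None = indeterminate
        return truth, completeness

    rule = [("temp", lambda v: v > 80), ("pressure", lambda v: v < 30)]
    print(evaluate_rule(rule, {"temp": 85}))              # -> (True, 0.5)
    ```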

  6. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems

    PubMed Central

    Liao, Yangzhe; Cai, Qing; Ai, Qingsong; Liu, Quan

    2018-01-01

    Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented in which several on-body relay nodes and one coordinator are attached to the clothes of a patient. First, a comprehensive system model is analyzed in terms of channel path loss, energy consumption, and outage probability from the network perspective. Second, data transmission is allowed only when the MI value falls below a predetermined threshold. The communication path can run either from the implanted sensor to an on-body relay and onward to the coordinator, or directly from the implanted sensor to the coordinator, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related objective functions. The results show that the MI-based incremental relaying technique achieves better performance than our previously proposed protocols on several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design. PMID:29419784
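
    The selection rule described can be sketched schematically (every parameter below is invented for illustration; the paper's channel models are more detailed): estimate each link's achievable MI from a log-distance path-loss model and relay only when the direct link falls below the threshold:

    ```python
    import numpy as np

    def link_mi(d, tx_dbm=-10, pl0_db=50, n=4.5, noise_dbm=-90):
        """Shannon MI (bits/s/Hz) for link distance d (m), log-distance path loss."""
        pl_db = pl0_db + 10 * n * np.log10(max(d, 0.1))
        snr = 10 ** ((tx_dbm - pl_db - noise_dbm) / 10)
        return np.log2(1 + snr)

    d_direct, d_relay = 0.8, 0.4          # implant->coordinator, implant->relay (m)
    threshold = 2.0                       # bits/s/Hz
    path = "direct" if link_mi(d_direct) >= threshold else "via relay"
    print(path, round(link_mi(d_direct), 2), round(link_mi(d_relay), 2))
    ```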

  7. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems.

    PubMed

    Liao, Yangzhe; Leeson, Mark S; Cai, Qing; Ai, Qingsong; Liu, Quan

    2018-02-08

    Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented in which several on-body relay nodes and one coordinator are attached to the clothes of a patient. First, a comprehensive system model is analyzed in terms of channel path loss, energy consumption, and outage probability from the network perspective. Second, data transmission is allowed only when the MI value falls below a predetermined threshold. The communication path can run either from the implanted sensor to an on-body relay and onward to the coordinator, or directly from the implanted sensor to the coordinator, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related objective functions. The results show that the MI-based incremental relaying technique achieves better performance than our previously proposed protocols on several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design.

  8. Tactile agnosia. Underlying impairment and implications for normal tactile object recognition.

    PubMed

    Reed, C L; Caselli, R J; Farah, M J

    1996-06-01

    In a series of experimental investigations of a subject with a unilateral impairment of tactile object recognition without impaired tactile sensation, several issues were addressed. First, is tactile agnosia secondary to a general impairment of spatial cognition? On tests of spatial ability, including those directed at the same spatial integration process assumed to be taxed by tactile object recognition, the subject performed well, implying a more specific impairment of high-level, modality-specific tactile perception. Second, within the realm of high-level tactile perception, is there a distinction between the ability to derive shape ('what') and spatial ('where') information? Our testing showed an impairment confined to shape perception. Third, what aspects of shape perception are impaired in tactile agnosia? Our results indicate that despite accurate encoding of metric length and normal manual exploration strategies, the ability to perceive objects tactually with the impaired hand deteriorated as shape complexity increased. In addition, asymmetrical performance was not found for other body surfaces (e.g., her feet). Our results suggest that tactile shape perception can be disrupted independently of general spatial ability, tactile spatial ability, manual shape exploration, or even the precise perception of metric length in the tactile modality.

  9. Preliminary evaluation of a micro-based repeated measures testing system

    NASA Technical Reports Server (NTRS)

    Kennedy, Robert S.; Wilkes, Robert L.; Lane, Norman E.

    1985-01-01

    A need exists for an automated performance test system to study the effects of various treatments of interest to the aerospace medical community, i.e., the effects of drugs and environmental stress. The ethics and pragmatics of such assessment demand that repeated measures in small groups of subjects be the customary research paradigm. Test stability, reliability-efficiency, and factor structure therefore take on extreme significance; in a program of study by the U.S. Navy, 80 percent of 150 tests failed to meet minimum metric requirements. The best tests are being programmed on a portable microprocessor and administered alongside tests in their original formats in order to examine their metric properties in the computerized mode. Twenty subjects have been tested over four replications on a 6.0-minute computerized battery (six tests), which was compared with five paper-and-pencil marker tests. All tests achieved stability within the four test sessions, reliability-efficiencies were high (r greater than .707 for three minutes of testing), and the computerized tests were largely comparable to the paper-and-pencil versions from which they were derived. This computerized performance test system is portable, inexpensive, and rugged.

  10. Asset sustainability index : quick guide : proposed metrics for the long-term financial sustainability of highway networks.

    DOT National Transportation Integrated Search

    2013-04-01

    "This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. : It examines how such metrics are used in Australia, Britain...

  11. Diagnosis of pulmonary hypertension from magnetic resonance imaging-based computational models and decision tree analysis.

    PubMed

    Lungu, Angela; Swift, Andrew J; Capener, David; Kiely, David; Hose, Rod; Wild, Jim M

    2016-06-01

    Accurately identifying patients with pulmonary hypertension (PH) using noninvasive methods is challenging, and right heart catheterization (RHC) is the gold standard. Magnetic resonance imaging (MRI) has been proposed as an alternative to echocardiography and RHC in the assessment of cardiac function and pulmonary hemodynamics in patients with suspected PH. The aim of this study was to assess whether machine learning using computational modeling techniques and image-based metrics of PH can improve the diagnostic accuracy of MRI in PH. Seventy-two patients with suspected PH attending a referral center underwent RHC and MRI within 48 hours. Fifty-seven patients were diagnosed with PH, and 15 had no PH. A number of functional and structural cardiac and cardiovascular markers derived from 2 mathematical models and also solely from MRI of the main pulmonary artery and heart were integrated into a classification algorithm to investigate the diagnostic utility of the combination of the individual markers. A physiological marker based on the quantification of wave reflection in the pulmonary artery was shown to perform best individually, but optimal diagnostic performance was found by the combination of several image-based markers. Classifier results, validated using leave-one-out cross validation, demonstrated that combining computation-derived metrics reflecting hemodynamic changes in the pulmonary vasculature with measurement of right ventricular morphology and function, in a decision support algorithm, provides a method to noninvasively diagnose PH with high accuracy (92%). The high diagnostic accuracy of these MRI-based model parameters may reduce the need for RHC in patients with suspected PH.
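
    The decision-support step is a standard supervised classifier; a minimal sketch with a decision tree and leave-one-out cross-validation as in the study, on synthetic stand-ins for the markers:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score, LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(72, 6))          # e.g., wave reflection + RV morphology
    y = (rng.random(72) < 57 / 72).astype(int)   # roughly the cohort's class balance

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    print(round(cross_val_score(tree, X, y, cv=LeaveOneOut()).mean(), 2))
    ```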

  12. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
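
    To make the aggregation approach concrete, here is a hedged sketch of a weighted sum of two conflicting Nash-Sutcliffe scores (high-flow and low-flow); the weights and flow threshold are illustrative assumptions, and the result is a single point on the error-metric tradeoff the abstract describes.

    ```python
    # Sketch: weighted-sum (single-objective) aggregation of two NS metrics.
    import numpy as np

    def nash_sutcliffe(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def aggregated_objective(obs, sim, w_high=0.5, threshold=None):
        threshold = np.median(obs) if threshold is None else threshold
        high, low = obs >= threshold, obs < threshold
        ns_high = nash_sutcliffe(obs[high], sim[high])
        ns_low = nash_sutcliffe(obs[low], sim[low])
        return w_high * ns_high + (1 - w_high) * ns_low  # one point on the tradeoff

    obs = np.array([1.0, 2.0, 8.0, 3.0, 1.5, 9.0, 2.5])
    sim = np.array([1.2, 1.8, 7.1, 3.3, 1.4, 8.2, 2.9])
    print(aggregated_objective(obs, sim))
    ```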

  13. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.
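
    Since coefficient determination is stated as forward work, the following is only a hedged sketch of one plausible single-score construction: z-score each component metric across suits, then combine with weights. All values and weights are invented for illustration.

    ```python
    # Sketch: composite suit-mobility score from normalized component metrics.
    import numpy as np

    def composite_score(metrics, weights):
        """metrics: dict name -> per-suit values (lower = better).
        Each component is z-scored across suits before weighting."""
        score = np.zeros(len(next(iter(metrics.values()))))
        for name, w in weights.items():
            v = np.asarray(metrics[name], dtype=float)
            score += w * (v - v.mean()) / v.std()
        return score  # lower composite = better suited mobility

    suits = ["A", "B", "C"]
    metrics = {"metabolic_cost": [12.1, 10.4, 11.8],   # illustrative values
               "completion_time": [95.0, 110.0, 90.0],
               "cost_per_kg": [0.09, 0.07, 0.10]}
    weights = {"metabolic_cost": 0.5, "completion_time": 0.3, "cost_per_kg": 0.2}
    print(dict(zip(suits, composite_score(metrics, weights))))
    ```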

  14. The use of player physical and technical skill match activity profiles to predict position in the Australian Football League draft.

    PubMed

    Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam

    2017-02-01

    This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated for all participants in the 2014 national under 18 (U18) championships (18 games) who were drafted into the AFL (n = 65; age 17.8 ± 0.5 y), yielding 232 observations. Players were subdivided by draft position (ranked 1-65) and then by draft round (1-4). Here, earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection. Physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces. Juniors with contested skill appear to be prioritised.
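
    A hedged sketch of the modelling step: a linear mixed model relating a game metric to draft position, with a random intercept per player to handle the repeated game observations. Column names and data are hypothetical, not the study's.

    ```python
    # Sketch: linear mixed model with a per-player random intercept.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "player": np.repeat(np.arange(65), 4),                # ~4 games per player
        "contested_possessions": rng.poisson(8, 260),
        "draft_position": np.repeat(rng.integers(1, 66, 65), 4),
    })

    model = smf.mixedlm("contested_possessions ~ draft_position",
                        df, groups=df["player"]).fit()
    print(model.summary())
    ```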

  15. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
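
    A minimal sketch (not the authors' implementation) of two of the named motion metrics computed from 3-D instrument position samples: path length, and a simple jerk-based smoothness score where lower values mean smoother motion.

    ```python
    # Sketch: path length and jerk-based motion smoothness from position data.
    import numpy as np

    def path_length(p):
        """Total distance travelled; p is an (n, 3) array of positions."""
        return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

    def motion_smoothness(p, dt):
        """Root-mean-square jerk (third derivative); lower is smoother."""
        jerk = np.diff(p, n=3, axis=0) / dt ** 3
        return np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))

    t = np.linspace(0, 5, 500)
    p = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])  # synthetic trajectory
    print(path_length(p), motion_smoothness(p, dt=t[1] - t[0]))
    ```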

  16. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified), are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
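
    A sketch of the comparison pipeline: extract a few high-level metrics from each output time series, then project all recordings into a shared low-dimensional behavior space with PCA. The metric choices here are illustrative assumptions, not the study's.

    ```python
    # Sketch: high-level metrics per time series -> PCA behavior space.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def high_level_metrics(ts):
        return [ts.mean(), ts.std(), np.abs(np.diff(ts)).mean()]

    rng = np.random.default_rng(2)
    series = [rng.normal(size=1000) for _ in range(30)]    # model + mouse outputs
    X = StandardScaler().fit_transform([high_level_metrics(s) for s in series])
    behavior_space = PCA(n_components=2).fit_transform(X)
    print(behavior_space.shape)  # each row: one recording in behavior space
    ```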

  17. Mapping gullies, dunes, lava fields, and landslides via surface roughness

    NASA Astrophysics Data System (ADS)

    Korzeniowska, Karolina; Pfeifer, Norbert; Landtwing, Stephan

    2018-01-01

    Gully erosion is a widespread and significant process involved in soil and land degradation. Mapping gullies helps to quantify past, and anticipate future, soil losses. Digital terrain models offer promising data for automatically detecting and mapping gullies, especially in vegetated areas, although methods vary widely; measures of local terrain roughness are the most varied and debated among these methods. Rarely do studies test the performance of roughness metrics for mapping gullies, limiting their applicability to small training areas. To this end, we systematically explored how local terrain roughness derived from high-resolution Light Detection And Ranging (LiDAR) data can aid in the unsupervised detection of gullies over a large area. We also tested expanding this method to other landforms diagnostic of similarly abrupt land-surface changes, including lava fields, dunes, and landslides, as well as investigating the influence of different roughness thresholds, kernel resolutions, and input data resolution, and comparing our method with previously published roughness algorithms. Our results show that total curvature is a suitable metric for recognising the analysed gullies and lava fields from LiDAR data, with comparable success to that of more sophisticated roughness metrics. The tested dunes and landslides remain difficult to distinguish from the surrounding landscape, partly because they are not easily defined in terms of their topographic signature.
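
    A hedged sketch of one common way to derive a curvature-based roughness layer from a gridded DEM (a Hessian-magnitude measure from second derivatives of elevation), then threshold it to flag rough cells. The paper's exact curvature formula and threshold may differ.

    ```python
    # Sketch: curvature-magnitude roughness from a DEM, then thresholding.
    import numpy as np

    def total_curvature(dem, cell_size):
        zy, zx = np.gradient(dem, cell_size)     # first derivatives (rows = y)
        zyy, zyx = np.gradient(zy, cell_size)    # second derivatives
        zxy, zxx = np.gradient(zx, cell_size)
        return np.sqrt(zxx ** 2 + 2 * zxy ** 2 + zyy ** 2)

    dem = np.random.default_rng(3).normal(0, 0.5, (100, 100)).cumsum(axis=0)
    rough = total_curvature(dem, cell_size=1.0) > 1.0   # threshold is an assumption
    print(rough.mean())  # fraction of cells flagged as rough
    ```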

  18. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  19. National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?

    PubMed

    Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N

    2017-12-01

    To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.

  20. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earth's Surface through the Terrestrial Service Volume and the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV), when a multi-GNSS solution space approach is utilized. The ICG Working Group on Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative of increasing complexity and fidelity as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that initiative was recently expanded to compare nadir-facing and zenith-facing user hemispherical antenna coverage with omnidirectional antenna coverage at altitudes of 8,000 km and 36,000 km. This report summarizes the performance of these antenna coverage techniques at altitudes ranging from 100 km to 36,000 km to be all-encompassing, as well as the volumetrically-derived system availability metrics.

  1. Exact example of backreaction of small scale inhomogeneities in cosmology

    NASA Astrophysics Data System (ADS)

    Green, Stephen; Wald, Robert

    2013-04-01

    We construct a one-parameter family of polarized vacuum Gowdy spacetimes on a torus. In the limit as the parameter N goes to infinity, the metric uniformly approaches a smooth "background metric." However, spacetime derivatives of the metric do not approach a limit. As a result, we find that the background metric itself is not a solution of the vacuum Einstein equation. Rather, it is a solution of the Einstein equation with an "effective stress-energy tensor," which is traceless and satisfies the weak energy condition. This is an explicit example of backreaction due to small scale inhomogeneities. We comment on the non-vacuum case, where we have proven in previous work that, provided the matter stress-energy tensor satisfies the weak energy condition, no additional backreaction is possible.
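
    A schematic restatement of the limit described above, in assumed notation (not the authors' own): the limit metric solves Einstein's equation with an effective stress-energy tensor that is traceless and satisfies the weak energy condition.

    ```latex
    % g^(N): family metric; g^(0): background metric;
    % t^eff: effective stress-energy (traceless, weak energy condition).
    \[
      g^{(N)}_{ab} \xrightarrow[N \to \infty]{} g^{(0)}_{ab}, \qquad
      G_{ab}\big[g^{(0)}\big] = 8\pi\, t^{\mathrm{eff}}_{ab}, \qquad
      g^{(0)\,ab}\, t^{\mathrm{eff}}_{ab} = 0.
    \]
    ```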

  2. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  3. Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.

    PubMed

    Soto, Marcelo A; Thévenaz, Luc

    2013-12-16

    A thorough analysis of the key factors impacting the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error on the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and describes how this frequency uncertainty depends on measurement parameters, such as Brillouin gain linewidth, frequency scanning step and signal-to-noise ratio. Based on the model leading to this expression, and considering the limitations imposed by nonlinear effects and pump depletion, a figure-of-merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure-of-merit offers the research community and potential users an objective metric for evaluating the real performance gain resulting from any proposed configuration.

  4. Optical space-to-ground link availability assessment and diversity requirements

    NASA Technical Reports Server (NTRS)

    Chapman, William; Fitzmaurice, Michael

    1991-01-01

    The application of optical space-to-ground links (SGLs) for high speed data distribution from geosynchronous and low earth orbiting satellites (e.g., sensor data from the planned Earth Observing System), for lunar and Mars links, and for links from interplanetary probes has been a topic of considerable recent interest. These optical SGLs could conceivably represent the system's operational baseline, or could represent backup links in the event of a GEO relay terminal failure. In this paper the availability of optical SGLs for various system/orbit configurations is considered. Single CONUS sites are assessed for their probability of cloud free line of sight (PCFLOS), and cloud free field of view (PCFFOV). PCFLOS represents an availability metric for geosynchronous platforms, while PCFFOV is a relevant performance metric for non-geostationary platforms (e.g., low earth orbiting satellites). Additionally, the availability of multiple ground terminals utilized in a diversity configuration is considered. Availability statistics vs. the number of diversity sites are derived from climatological data bases for CONUS sites.

  5. Task-Driven Comparison of Topic Models.

    PubMed

    Alexander, Eric; Gleicher, Michael

    2016-01-01

    Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.

  6. Performance of biometric quality measures.

    PubMed

    Grother, Patrick; Tabassi, Elham

    2007-04-01

    We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error-versus-reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
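
    A minimal sketch of an error-versus-reject characteristic: discard increasing fractions of the lowest-quality samples and recompute the false non-match rate (FNMR) on the remainder; a predictive quality measure drives the error down quickly. The data and threshold are synthetic assumptions.

    ```python
    # Sketch: error-versus-reject curve for a scalar quality measure.
    import numpy as np

    def error_versus_reject(quality, genuine_scores, threshold, fractions):
        order = np.argsort(quality)          # worst quality first
        n = len(quality)
        out = []
        for f in fractions:
            keep = order[int(f * n):]        # reject fraction f of lowest quality
            fnmr = np.mean(genuine_scores[keep] < threshold)
            out.append((f, fnmr))
        return out

    rng = np.random.default_rng(4)
    q = rng.uniform(size=1000)
    scores = q + rng.normal(0, 0.3, 1000)    # quality correlates with match score
    print(error_versus_reject(q, scores, threshold=0.5, fractions=[0.0, 0.1, 0.2]))
    ```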

  7. Questionable validity of the catheter-associated urinary tract infection metric used for value-based purchasing.

    PubMed

    Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K

    2015-10-01

    Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  8. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
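
    The standard sigma-metric calculation behind this approach is sigma = (TEa - |bias|) / CV, with allowable total error (TEa), bias, and imprecision (CV) all in percent. The sketch below uses illustrative values only.

    ```python
    # Sketch: the standard sigma-metric formula for an assay.
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        return (tea_pct - abs(bias_pct)) / cv_pct

    # e.g. an assay with TEa = 10%, bias = 1.5%, CV = 1.4% (illustrative)
    print(round(sigma_metric(10.0, 1.5, 1.4), 1))  # ~6.1: minimal QC rules needed
    ```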

  9. Quality Measures in Stroke

    PubMed Central

    Poisson, Sharon N.; Josephson, S. Andrew

    2011-01-01

    Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840

  10. A comparative study of the hovering efficiency of flapping and revolving wings.

    PubMed

    Zheng, L; Hedrick, T; Mittal, R

    2013-09-01

    Direct numerical simulations are used to explore the hovering performance and efficiency for hawkmoth-inspired flapping and revolving wings at Reynolds (Re) numbers varying from 50 to 4800. This range covers the gamut from small (fruit fly size) to large (hawkmoth size) flying insects and is also relevant to the design of micro- and nano-aerial vehicles. The flapping wing configuration chosen here corresponds to a hovering hawkmoth and the model is derived from high-speed videogrammetry of this insect. The revolving wing configuration also employs the wings of the hawkmoth but these are arranged in a dual-blade configuration typical of helicopters. Flow for both of these configurations is simulated over the range of Reynolds numbers of interest and the aerodynamic performance of the two compared. The comparison of these two seemingly different configurations raises issues regarding the appropriateness of various performance metrics and even characteristic scales; these are also addressed in the current study. Finally, the difference in the performance between the two is correlated with the flow physics of the two configurations. The study indicates that viscous forces dominate the aerodynamic power expenditure of the revolving wing to a degree not observed for the flapping wing. Consequently, the lift-to-power metric of the revolving wing declines rapidly with decreasing Reynolds numbers resulting in a hovering performance that is at least a factor of 2 lower than the flapping wing at Reynolds numbers less than about 100.

  11. Linking hydrodynamic complexity to delta smelt (Hypomesus transpacificus) distribution in the San Francisco Estuary, USA

    USGS Publications Warehouse

    Bever, Aaron J.; MacWilliams, Michael L.; Herbold, Bruce; Brown, Larry R.; Feyrer, Frederick V.

    2016-01-01

    Long-term fish sampling data from the San Francisco Estuary were combined with detailed three-dimensional hydrodynamic modeling to investigate the relationship between historical fish catch and hydrodynamic complexity. Delta Smelt catch data at 45 stations from the Fall Midwater Trawl (FMWT) survey in the vicinity of Suisun Bay were used to develop a quantitative catch-based station index. This index was used to rank stations based on historical Delta Smelt catch. The correlations between historical Delta Smelt catch and 35 quantitative metrics of environmental complexity were evaluated at each station. Eight metrics of environmental conditions were derived from FMWT data and 27 metrics were derived from model predictions at each FMWT station. To relate the station index to conceptual models of Delta Smelt habitat, the metrics were used to predict the station ranking based on the quantified environmental conditions. Salinity, current speed, and turbidity metrics were used to predict the relative ranking of each station for Delta Smelt catch. Including a measure of the current speed at each station improved predictions of the historical ranking for Delta Smelt catch relative to similar predictions made using only salinity and turbidity. Current speed was also found to be a better predictor of historical Delta Smelt catch than water depth. The quantitative approach developed using the FMWT data was validated using the Delta Smelt catch data from the San Francisco Bay Study. Complexity metrics in Suisun Bay were evaluated during 2010 and 2011. This analysis indicated that a key to historical Delta Smelt catch is the overlap of regions of low salinity, low maximum velocity, and low Secchi depth. This overlap occurred in Suisun Bay during 2011, and may have contributed to higher Delta Smelt abundance in 2011 than in 2010, when the favorable ranges of the metrics did not overlap in Suisun Bay.

  12. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions

    NASA Astrophysics Data System (ADS)

    Gide, Milind S.; Karam, Lina J.

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.

  13. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  14. Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?

    PubMed

    Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J

    2014-02-01

    Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.

  15. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication, 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim, performed well overall. Only βBaselga R turn, βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
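
    For reference, a sketch of the one presence-absence metric singled out above, the Simpson dissimilarity βsim = min(b, c) / (a + min(b, c)), where a is the number of shared species and b, c the numbers unique to each assemblage; it captures turnover independently of nestedness.

    ```python
    # Sketch: Simpson dissimilarity (beta_sim) from presence-absence sets.
    def beta_sim(site1, site2):
        a = len(site1 & site2)                     # shared species
        b, c = len(site1 - site2), len(site2 - site1)
        return min(b, c) / (a + min(b, c)) if (a + min(b, c)) else 0.0

    print(beta_sim({"sp1", "sp2", "sp3"}, {"sp2", "sp3", "sp4", "sp5"}))  # -> 0.33...
    ```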

  16. An Exploratory Study of OEE Implementation in Indian Manufacturing Companies

    NASA Astrophysics Data System (ADS)

    Kumar, J.; Soni, V. K.

    2015-04-01

    Globally, the implementation of Overall Equipment Effectiveness (OEE) has proven to be highly effective in improving availability, performance rate and quality rate while reducing unscheduled breakdowns and wastage that stem from the equipment. This paper investigates the present status and future scope of OEE metrics in Indian manufacturing companies through an extensive survey. In this survey, the opinions of production and maintenance managers have been analyzed statistically to explore the relationship between factors, perspectives of OEE and potential use of OEE metrics. Although the sample has been diverse in terms of product, process type, size, and geographic location of the companies, the companies are compelled to implement improvement techniques such as OEE metrics to improve performance. The findings reveal that OEE metrics have huge potential and scope to improve performance. Responses indicate that Indian companies are aware of OEE but are not utilizing the full potential of OEE metrics.
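
    The standard OEE decomposition referenced above is the product of availability, performance rate, and quality rate; the shift figures in this sketch are illustrative.

    ```python
    # Sketch: OEE = availability x performance rate x quality rate.
    def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
        availability = run_time / planned_time
        performance = (ideal_cycle_time * total_count) / run_time
        quality = good_count / total_count
        return availability * performance * quality

    # 480 min shift, 400 min running, 0.5 min/part ideal, 700 parts, 680 good
    print(f"OEE = {oee(480, 400, 0.5, 700, 680):.1%}")  # ~70.8%
    ```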

  17. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
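
    A minimal sketch of the classification-table metrics such an extension automates: Cohen's kappa and overall accuracy from a confusion matrix. The matrix values are illustrative.

    ```python
    # Sketch: Cohen's kappa and accuracy from a confusion matrix.
    import numpy as np

    def kappa(cm):
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                 # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    cm = [[40, 10],   # rows: reference presence/absence
          [5, 45]]    # cols: model presence/absence
    print(f"accuracy = {np.trace(cm) / np.sum(cm):.2f}, kappa = {kappa(cm):.2f}")
    ```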

  18. Preliminary comparison of landscape pattern-normalized difference vegetation index (NDVI) relationships to central plains stream conditions

    USGS Publications Warehouse

    Griffith, J.A.; Martinko, E.A.; Whistler, J.L.; Price, K.P.

    2002-01-01

    We explored relationships of water quality parameters with landscape pattern metrics (LPMs), land use-land cover (LULC) proportions, and the advanced very high resolution radiometer (AVHRR) normalized difference vegetation index (NDVI) or NDVI-derived metrics. Stream sites (271) in Nebraska, Kansas, and Missouri were sampled for water quality parameters, the index of biotic integrity, and a habitat index in either 1994 or 1995. Although a combination of LPMs (interspersion and juxtaposition index, patch density, and percent forest) within Ozark Highlands watersheds explained >60% of the variation in levels of nitrite-nitrate nitrogen and conductivity, in most cases the LPMs were not significantly correlated with the stream data. Several problems using landscape pattern metrics were noted: small watersheds having only one or two patches, collinearity with LULC data, and counterintuitive or inconsistent results that resulted from basic differences in land use-land cover patterns among ecoregions or from other factors determining water quality. The amount of variation explained in water quality parameters using multiple regression models that combined LULC and LPMs was generally lower than that from NDVI or vegetation phenology metrics derived from time-series NDVI data. A comparison of LPMs and NDVI indicated that NDVI had greater promise for monitoring landscapes for stream conditions within the study area.

  19. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
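
    A hedged sketch of the idea: train a small neural network to reproduce a metric from code features rather than hand-specifying an equation. The features and the McCabe-like target here are synthetic illustrations, not the paper's experiments.

    ```python
    # Sketch: learn a metric mapping with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(9)
    X = rng.integers(1, 50, size=(200, 3)).astype(float)  # e.g. operators, operands, branches
    y = 1.0 + X[:, 2]                                     # McCabe-like target: branches + 1

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0).fit(X, y)
    print(net.predict([[10.0, 20.0, 4.0]]))  # should be near 5 (branches + 1)
    ```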

  1. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1991-01-01

    NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.

  2. Liver fibrosis: in vivo evaluation using intravoxel incoherent motion-derived histogram metrics with histopathologic findings at 3.0 T.

    PubMed

    Hu, Fubi; Yang, Ru; Huang, Zixing; Wang, Min; Zhang, Hanmei; Yan, Xu; Song, Bin

    2017-12-01

    To retrospectively determine the feasibility of intravoxel incoherent motion (IVIM) imaging based on histogram analysis for the staging of liver fibrosis (LF), using histopathologic findings as the reference standard. 56 consecutive patients (14 men, 42 women; age range, 15-76 years) with chronic liver diseases (CLDs) were studied using IVIM-DWI with 9 b-values (0, 25, 50, 75, 100, 150, 200, 500, 800 s/mm²) at 3.0 T. Fibrosis stage was evaluated using the METAVIR scoring system. Histogram metrics including mean, standard deviation (Std), skewness, kurtosis, minimum (Min), maximum (Max), range, interquartile (Iq) range, and percentiles (10, 25, 50, 75, 90th) were extracted from apparent diffusion coefficient (ADC), true diffusion coefficient (D), pseudo-diffusion coefficient (D*), and perfusion fraction (f) maps. All histogram metrics among different fibrosis groups were compared using one-way analysis of variance or the nonparametric Kruskal-Wallis test. For significant parameters, receiver operating characteristic (ROC) curve analyses were further performed for the staging of LF. Based on their METAVIR stage, the 56 patients were reclassified into three groups as follows: F0-1 group (n = 25), F2-3 group (n = 21), and F4 group (n = 10). The mean, Iq range, and percentiles (50, 75, and 90th) of the D* maps showed significant differences between the groups (all P < 0.05). The area under the ROC curve (AUC) of the mean, Iq range, and 50th, 75th, and 90th percentiles of the D* maps for identifying significant LF (≥F2 stage) was 0.901, 0.859, 0.876, 0.943, and 0.886 (all P < 0.0001), respectively; for diagnosing severe fibrosis or cirrhosis (F4), AUC was 0.917, 0.922, 0.943, 0.985, and 0.939 (all P < 0.0001), respectively. The histogram metrics of the ADC, D, and f maps demonstrated no significant differences among the groups (all P > 0.05). Histogram analysis of the D* map derived from IVIM can be used to stage liver fibrosis in patients with CLDs and provides more quantitative information beyond the mean value.
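
    A hedged sketch of the histogram-metric extraction applied to a fitted D* map (any parameter map works the same way); the metric list mirrors the ones named above, and the synthetic map stands in for real fitted data.

    ```python
    # Sketch: histogram metrics from a fitted IVIM parameter map.
    import numpy as np
    from scipy import stats

    def histogram_metrics(param_map):
        v = np.asarray(param_map).ravel()
        p = np.percentile(v, [10, 25, 50, 75, 90])
        return {
            "mean": v.mean(), "std": v.std(),
            "skewness": stats.skew(v), "kurtosis": stats.kurtosis(v),
            "min": v.min(), "max": v.max(), "range": v.max() - v.min(),
            "iq_range": p[3] - p[1],
            "p10": p[0], "p25": p[1], "p50": p[2], "p75": p[3], "p90": p[4],
        }

    d_star_map = np.random.default_rng(5).gamma(2.0, 5.0, (64, 64))  # synthetic
    print(histogram_metrics(d_star_map)["p75"])
    ```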

  3. Matter field Kähler metric in heterotic string theory from localisation

    NASA Astrophysics Data System (ADS)

    Blesneag, Ştefan; Buchbinder, Evgeny I.; Constantin, Andrei; Lukas, Andre; Palti, Eran

    2018-04-01

    We propose an analytic method to calculate the matter field Kähler metric in heterotic compactifications on smooth Calabi-Yau three-folds with Abelian internal gauge fields. The matter field Kähler metric determines the normalisations of the N = 1 chiral superfields, which enter the computation of the physical Yukawa couplings. We first derive the general formula for this Kähler metric by a dimensional reduction of the relevant supergravity theory and find that its T-moduli dependence can be determined in general. It turns out that, due to large internal gauge flux, the remaining integrals localise around certain points on the compactification manifold and can, hence, be calculated approximately without precise knowledge of the Ricci-flat Calabi-Yau metric. In a final step, we show how this local result can be expressed in terms of the global moduli of the Calabi-Yau manifold. The method is illustrated for the family of Calabi-Yau hypersurfaces embedded in P^1× P^3 and we obtain an explicit result for the matter field Kähler metric in this case.

  4. A Metric on Phylogenetic Tree Shapes.

    PubMed

    Colijn, C; Plazzotta, G

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  5. Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.

    2014-06-01

    We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible by the science community at the NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
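
    A hedged sketch of the trapezoidal fit described above: four parameters (epoch, depth, duration, ingress time), minimized with Levenberg-Marquardt via scipy on synthetic data. The real pipeline is far more involved; this only illustrates the model form.

    ```python
    # Sketch: trapezoidal transit model fit with Levenberg-Marquardt.
    import numpy as np
    from scipy.optimize import least_squares

    def trapezoid(t, epoch, depth, duration, t_ingress):
        dt = np.abs(t - epoch)
        flat = duration / 2 - t_ingress      # half-width of the flat bottom
        flux = np.ones_like(t)
        in_floor = dt <= flat
        on_ramp = (dt > flat) & (dt <= duration / 2)
        flux[in_floor] -= depth
        flux[on_ramp] -= depth * (duration / 2 - dt[on_ramp]) / t_ingress
        return flux

    t = np.linspace(-0.5, 0.5, 1000)
    rng = np.random.default_rng(6)
    y = trapezoid(t, 0.0, 0.01, 0.2, 0.03) + rng.normal(0, 0.001, t.size)

    fit = least_squares(lambda p: trapezoid(t, *p) - y,
                        x0=[0.01, 0.005, 0.25, 0.05], method="lm")
    print(fit.x)  # recovered epoch, depth, duration, ingress time
    ```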

  6. Big Data Tools as Applied to ATLAS Event Data

    NASA Astrophysics Data System (ADS)

    Vukotic, I.; Gardner, R. W.; Bryant, L. A.

    2017-10-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and the associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and tools like Spark, Jupyter, R, SciPy, Caffe, TensorFlow, etc. Machine learning challenges such as the Higgs Boson Machine Learning Challenge, the Tracking challenge, event viewers (VP1, ATLANTIS, ATLASrift), and still-to-be-developed educational and outreach tools would be able to access the data through a simple REST API. In this preliminary investigation we focus on derived xAOD data sets. These are much smaller than the primary xAODs, having only the containers, variables, and events of interest to a particular analysis. Encouraged by the performance of Elasticsearch for the ADC analytics platform, we developed an algorithm for indexing derived xAOD event data. We have made an appropriate document mapping and have imported a full set of standard model W/Z datasets. We compare the disk space efficiency of this approach to that of standard ROOT files, the performance in simple cut-flow types of data analysis, and will present preliminary results on its scaling characteristics with different numbers of clients, query complexity, and size of the data retrieved.
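
    A hedged sketch of the kind of event-level access described: querying an Elasticsearch index of indexed events over its standard REST _search endpoint. The index name, field names, and host are hypothetical; only the Elasticsearch query DSL and endpoint are standard.

    ```python
    # Sketch: event selection via Elasticsearch's REST _search endpoint.
    import json
    import urllib.request

    query = {
        "query": {"range": {"n_jets": {"gte": 2}}},  # events with >= 2 jets (hypothetical field)
        "size": 10,
    }
    req = urllib.request.Request(
        "http://localhost:9200/xaod_events/_search",   # hypothetical index
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)["hits"]["hits"]
    print(len(hits))
    ```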

  7. Value of Frequency Domain Resting-State Functional Magnetic Resonance Imaging Metrics Amplitude of Low-Frequency Fluctuation and Fractional Amplitude of Low-Frequency Fluctuation in the Assessment of Brain Tumor-Induced Neurovascular Uncoupling.

    PubMed

    Agarwal, Shruti; Lu, Hanzhang; Pillai, Jay J

    2017-08-01

    The aim of this study was to explore whether the phenomenon of brain tumor-related neurovascular uncoupling (NVU) in resting-state blood oxygen level-dependent functional magnetic resonance imaging (rsfMRI) may also affect the rsfMRI frequency domain metrics amplitude of low-frequency fluctuation (ALFF) and fractional ALFF (fALFF). Twelve de novo brain tumor patients, who underwent clinical fMRI examinations, including task-based fMRI (tbfMRI) and rsfMRI, were included in this Institutional Review Board-approved study. Each patient displayed decreased/absent tbfMRI activation in the primary ipsilesional (IL) sensorimotor cortex in the absence of a corresponding motor deficit or suboptimal task performance, consistent with NVU. Z-score maps for the motor tasks were obtained from general linear model analysis (reflecting motor activation vs. rest). Seed-based correlation analysis (SCA) maps of the sensorimotor network, ALFF, and fALFF were calculated from the rsfMRI data. Precentral and postcentral gyri in the contralesional (CL) and IL hemispheres were parcellated using an automated anatomical labeling template for each patient. Region of interest (ROI) analysis was performed on four maps: tbfMRI, SCA, ALFF, and fALFF. Voxel values in the CL and IL ROIs of each map were divided by the corresponding global mean of ALFF and fALFF in the cortical brain tissue. Group analysis revealed significantly decreased IL ALFF (p = 0.02) and fALFF (p = 0.03) metrics compared with CL ROIs, consistent with similar findings of significantly decreased IL BOLD signal for the tbfMRI (p = 0.0005) and SCA maps (p = 0.0004). The frequency domain metrics ALFF and fALFF may thus be markers of lesion-induced NVU in rsfMRI, similar to previously reported alterations in tbfMRI activation and SCA-derived resting-state functional connectivity maps.
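
    For orientation, a minimal sketch of one common formulation of these metrics (not the authors' exact pipeline): ALFF is the summed amplitude of a voxel's BOLD spectrum in the low-frequency band, typically 0.01-0.08 Hz; fALFF is that sum divided by the amplitude summed over the whole measurable range.

    ```python
    # Sketch: ALFF and fALFF from a single voxel's BOLD time series.
    import numpy as np

    def alff_falff(ts, tr, band=(0.01, 0.08)):
        freqs = np.fft.rfftfreq(ts.size, d=tr)
        amp = np.abs(np.fft.rfft(ts - ts.mean()))
        low = (freqs >= band[0]) & (freqs <= band[1])
        alff = amp[low].sum()
        return alff, alff / amp.sum()   # (ALFF, fALFF)

    rng = np.random.default_rng(7)
    ts = rng.normal(size=200)           # 200 volumes, TR = 2 s (synthetic)
    print(alff_falff(ts, tr=2.0))
    ```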

  8. Development of a perceptually calibrated objective metric of noise

    NASA Astrophysics Data System (ADS)

    Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey

    2011-01-01

    A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
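
    A hedged sketch of the final metric form reported above: a weighted combination of L* and a* noise variances and their covariance, with the stated relative weights of 100, 5, and 12. The inputs are assumed to be noise residuals already filtered by the visual frequency weighting; the synthetic data only illustrate the arithmetic.

    ```python
    # Sketch: weighted L*/a* variance-covariance noise metric.
    import numpy as np

    def noise_metric(l_noise, a_noise, w_l=100.0, w_a=5.0, w_la=12.0):
        cov = np.cov(l_noise.ravel(), a_noise.ravel())
        return w_l * cov[0, 0] + w_a * cov[1, 1] + w_la * cov[0, 1]

    rng = np.random.default_rng(8)
    l = rng.normal(0, 0.8, (256, 256))
    a = -0.4 * l + rng.normal(0, 0.5, (256, 256))  # anti-correlated chroma noise
    print(noise_metric(l, a))
    ```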

  9. Developing a confidence metric for the Landsat land surface temperature product

    NASA Astrophysics Data System (ADS)

    Laraby, Kelly G.; Schott, John R.; Raqueno, Nina

    2016-05-01

    Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), an LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
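
    A minimal sketch of the proposed distance-to-cloud analysis, assuming a boolean cloud mask and a co-registered map of LST validation errors; the array names and distance bins are illustrative, not from the paper:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def error_vs_cloud_distance(cloud_mask, lst_error, bin_edges):
        """Mean absolute LST error binned by distance to the nearest cloud.

        cloud_mask: boolean array, True where a cloud was detected.
        lst_error:  per-pixel LST minus reference temperature (K).
        bin_edges:  distance bin edges in pixels.
        """
        # Distance from every clear pixel to the nearest cloudy pixel
        dist = distance_transform_edt(~cloud_mask)
        clear = ~cloud_mask
        idx = np.digitize(dist[clear], bin_edges)
        err = np.abs(lst_error[clear])
        return [err[idx == i].mean() if np.any(idx == i) else np.nan
                for i in range(1, len(bin_edges))]

    # Toy scene: one square cloud, random validation errors
    mask = np.zeros((100, 100), bool)
    mask[40:60, 40:60] = True
    err = np.random.default_rng(0).normal(0, 1, (100, 100))
    print(error_vs_cloud_distance(mask, err, bin_edges=[0, 5, 10, 20, 50]))
    ```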

  10. Understanding Chemistry-Specific Fuel Differences at a Constant RON in a Boosted SI Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szybist, James P.; Splitter, Derek A.

    The goal of the US Department of Energy Co-Optimization of Fuels and Engines (Co-Optima) initiative is to accelerate the development of advanced fuels and engines for higher efficiency and lower emissions. A guiding principle of this initiative is the central fuel properties hypothesis (CFPH), which states that fuel properties provide an indication of a fuel’s performance, regardless of its chemical composition. This is an important consideration for Co-Optima because many of the fuels under consideration are from bio-derived sources with chemical compositions that are unconventional relative to petroleum-derived gasoline or ethanol. In this study, we investigated a total of seven fuels in a spark ignition engine under boosted operating conditions to determine whether knock propensity is predicted by fuel antiknock metrics: antiknock index (AKI), research octane number (RON), and octane index (OI). Six of these fuels have a constant RON value but otherwise represent a wide range of fuel properties and chemistry. Consistent with previous studies, we found that OI was a much better predictor of knock propensity than either AKI or RON. However, we also found that there were significant fuel-specific deviations from the OI predictions. Combustion analysis provided insight that fuel kinetic complexities, including the presence of pre-spark heat release, likely limit the ability of standardized tests and metrics to accurately predict knocking tendency at all operating conditions. While limitations of OI were revealed in this study, we found that fuels with unconventional chemistry, in particular esters and ethers, behaved in accordance with CFPH as well as petroleum-derived fuels.
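
    The record names the three antiknock metrics without defining them. The standard definitions from the octane-rating literature are AKI = (RON + MON)/2 and, for the octane index, Kalghatgi's OI = RON - K*S with sensitivity S = RON - MON; these definitions are an assumption here, not taken from the record. A worked example:

    ```python
    def antiknock_metrics(ron, mon, k):
        """AKI and octane index OI = RON - K*S (S = RON - MON), where K
        characterises how far the operating condition lies from the
        standard RON/MON tests (negative K at boosted conditions)."""
        s = ron - mon
        return 0.5 * (ron + mon), ron - k * s

    # Two fuels with the same RON but different sensitivity diverge in
    # OI at a boosted condition where K is negative.
    print(antiknock_metrics(ron=98, mon=90, k=-0.5))  # AKI 94.0, OI 102.0
    print(antiknock_metrics(ron=98, mon=96, k=-0.5))  # AKI 97.0, OI 99.0
    ```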

  11. Understanding Chemistry-Specific Fuel Differences at a Constant RON in a Boosted SI Engine

    DOE PAGES

    Szybist, James P.; Splitter, Derek A.

    2018-01-02

    The goal of the US Department of Energy Co-Optimization of Fuels and Engines (Co-Optima) initiative is to accelerate the development of advanced fuels and engines for higher efficiency and lower emissions. A guiding principle of this initiative is the central fuel properties hypothesis (CFPH), which states that fuel properties provide an indication of a fuel’s performance, regardless of its chemical composition. This is an important consideration for Co-Optima because many of the fuels under consideration are from bio-derived sources with chemical compositions that are unconventional relative to petroleum-derived gasoline or ethanol. In this study, we investigated a total of seven fuels in a spark ignition engine under boosted operating conditions to determine whether knock propensity is predicted by fuel antiknock metrics: antiknock index (AKI), research octane number (RON), and octane index (OI). Six of these fuels have a constant RON value but otherwise represent a wide range of fuel properties and chemistry. Consistent with previous studies, we found that OI was a much better predictor of knock propensity than either AKI or RON. However, we also found that there were significant fuel-specific deviations from the OI predictions. Combustion analysis provided insight that fuel kinetic complexities, including the presence of pre-spark heat release, likely limit the ability of standardized tests and metrics to accurately predict knocking tendency at all operating conditions. While limitations of OI were revealed in this study, we found that fuels with unconventional chemistry, in particular esters and ethers, behaved in accordance with CFPH as well as petroleum-derived fuels.

  12. Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.

    PubMed

    Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony

    2017-12-01

    Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. The Sigma-metric was then estimated for each assay and plotted on a method decision chart, using the equation Sigma-metric = (%TEa - |%bias|) / %CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had Sigma values >5, and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
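
    The record gives the Sigma-metric equation explicitly, so it can be computed directly; the example numbers below are illustrative only:

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma-metric = (%TEa - |%bias|) / %CV, per the record."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Illustrative numbers: allowable total error 10%, bias 1.5%, CV 1.4%
    print(sigma_metric(10.0, 1.5, 1.4))  # ~6.1, i.e. six Sigma performance
    ```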

  13. Riemannian geometric approach to human arm dynamics, movement optimization, and invariance

    NASA Astrophysics Data System (ADS)

    Biess, Armin; Flash, Tamar; Liebermann, Dario G.

    2011-03-01

    We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.
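
    For reference, the Euclidean member of the MSD family has a familiar closed form, and the record's Riemannian version replaces the time derivative with the covariant derivative and the Euclidean norm with the kinetic-energy metric g. The notation below is a sketch assembled from standard usage, not copied from the paper:

    ```latex
    % Euclidean MSD cost of order n (n = 3 recovers minimum jerk):
    \[
      C_n[x] = \int_0^T \left\lVert \frac{d^{n}x}{dt^{n}} \right\rVert^{2} dt
    \]
    % Riemannian version on configuration space (Q, g), with D/dt the
    % covariant derivative along the trajectory q(t):
    \[
      C_n[q] = \int_0^T g\!\left( \frac{D^{n-1}\dot{q}}{dt^{n-1}},
                                  \frac{D^{n-1}\dot{q}}{dt^{n-1}} \right) dt
    \]
    ```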

  14. Temporal evolution of crack propagation propensity in snow in relation to slab and weak layer properties

    NASA Astrophysics Data System (ADS)

    Schweizer, Jürg; Reuter, Benjamin; van Herwijnen, Alec; Richter, Bettina; Gaume, Johan

    2016-11-01

    If a weak snow layer below a cohesive slab is present in the snow cover, unstable snow conditions can prevail for days or even weeks. We monitored the temporal evolution of a weak layer of faceted crystals as well as the overlying slab layers at the location of an automatic weather station in the Steintälli field site above Davos (Eastern Swiss Alps). We focussed on the crack propagation propensity and performed propagation saw tests (PSTs) on 7 sampling days during a 2-month period from early January to early March 2015. Based on video images taken during the tests, we determined the mechanical properties of the slab and the weak layer and compared them to the results derived from concurrently performed measurements of penetration resistance using the snow micro-penetrometer (SMP). The critical cut length, observed in PSTs, increased overall during the measurement period. The increase was not steady, and the lowest values of critical cut length were observed around the middle of the measurement period. The relevant mechanical properties, the slab effective elastic modulus and the weak layer specific fracture energy, increased overall as well. However, the changes with time differed, suggesting that the critical cut length cannot be assessed by simply monitoring a single mechanical property such as slab load, slab modulus or weak layer specific fracture energy. Instead, crack propagation propensity is the result of a complex interplay between the mechanical properties of the slab and the weak layer. We then compared our field observations to newly developed metrics of snow instability related to either failure initiation or crack propagation propensity. The metrics were either derived from the SMP signal or calculated from simulated snow stratigraphy (SNOWPACK). They partially reproduced the observed temporal evolution of critical cut length and instability test scores. Whereas our unique dataset of quantitative measures of snow instability provides new insights into the complex slab-weak layer interaction, it also revealed some deficiencies of the modelled metrics of instability, calling for an improved representation of the mechanical properties.

  15. Design parameters for toroidal and bobbin magnetics. [conversion from English to metric units

    NASA Technical Reports Server (NTRS)

    Mclyman, W. T.

    1974-01-01

    The adoption by NASA of the metric system for dimensioning to replace long-used English units imposes a requirement on the U.S. transformer designer to convert from the familiar units to the less familiar metric equivalents. Material is presented to assist in that transition in the field of transformer design and fabrication. The conversion data makes it possible for the designer to obtain a fast and close approximation of significant parameters such as size, weight, and temperature rise. Nomographs are included to provide a close approximation for breadboarding purposes. For greater convenience, derivations of some of the parameters are also presented.

  16. Restaurant Energy Use Benchmarking Guideline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
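
    The report's method is built from the operator's own utility data; the exact normalisation is not given in this record, so the sketch below uses a hypothetical driver (transactions, covers, or square footage) to form a comparable energy-use index across stores:

    ```python
    import numpy as np

    def energy_benchmark(kwh, drivers):
        """Energy use per unit of business driver, expressed relative to
        the portfolio median; values above 1 flag stores needing extra
        attention. The choice of driver is an illustrative assumption."""
        eui = np.asarray(kwh, float) / np.asarray(drivers, float)
        return eui / np.median(eui)

    # Three stores: annual kWh and annual transactions (made-up numbers)
    print(energy_benchmark([520000, 610000, 480000],
                           [120000, 115000, 130000]))
    ```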

  17. Improving Department of Defense Global Distribution Performance Through Network Analysis

    DTIC Science & Technology

    2016-06-01

    network performance increase. Subject terms: supply chain metrics, distribution networks, requisition shipping time, strategic distribution database... peace and war” (p. 4). USTRANSCOM Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes-based supply chain metrics to... (2014a, p. 8). The Joint Staff defines a TDD standard as the maximum number of days the supply chain can take to deliver requisitioned materiel

  18. Tide or Tsunami? The Impact of Metrics on Scholarly Research

    ERIC Educational Resources Information Center

    Bonnell, Andrew G.

    2016-01-01

    Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…

  19. On Railroad Tank Car Puncture Performance: Part I - Considering Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the first in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perform...

  20. Tracking occupational hearing loss across global industries: A comparative analysis of metrics

    PubMed Central

    Rabinowitz, Peter M.; Galusha, Deron; McTague, Michael F.; Slade, Martin D.; Wesdock, James C.; Dixon-Ernst, Christine

    2013-01-01

    Occupational hearing loss is one of the most prevalent occupational conditions; yet there is no acknowledged international metric to allow comparisons of risk between different industries and regions. In order to make recommendations for an international standard of occupational hearing loss, members of an international industry group (the International Aluminium Association) submitted details of different hearing loss metrics currently in use by members. We compared the performance of these metrics using an audiometric data set for over 6000 individuals working in 10 locations of one member company. We calculated rates for each metric at each location from 2002 to 2006. For comparison, we calculated the difference of observed–expected (for age) binaural high-frequency hearing loss (in dB/year) for each location over the same time period. We performed linear regression to determine the correlation between each metric and the observed–expected rate of hearing loss. The different metrics produced discrepant results, with annual rates ranging from 0.0% for a less-sensitive metric to more than 10% for a highly sensitive metric. At least two metrics, a 10 dB age-corrected threshold shift from baseline and a 15 dB non-age-corrected shift metric, correlated well with the difference of observed–expected high-frequency hearing loss. This study suggests that it is feasible to develop an international standard for tracking occupational hearing loss in industrial working populations. PMID:22387709
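
    The two best-performing metrics are stated only by their decibel criteria. A sketch of the first one, assuming the common practice of averaging the shift from baseline at 2, 3, and 4 kHz; the study's exact frequency set and age-correction tables are not given in the record:

    ```python
    import numpy as np

    def threshold_shift_flag(baseline, current, age_correction=0.0,
                             criterion_db=10.0):
        """Flag a 10 dB age-corrected threshold shift from baseline.

        baseline, current: dicts of hearing threshold (dB HL) keyed by Hz.
        age_correction: allowance (dB) from an age-correction table.
        """
        freqs = (2000, 3000, 4000)
        shift = np.mean([current[f] - baseline[f] for f in freqs])
        return (shift - age_correction) >= criterion_db

    print(threshold_shift_flag({2000: 10, 3000: 15, 4000: 20},
                               {2000: 25, 3000: 25, 4000: 30}))  # True
    ```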

  1. Do Your Students Measure Up Metrically?

    ERIC Educational Resources Information Center

    Taylor, P. Mark; Simms, Ken; Kim, Ok-Kyeong; Reys, Robert E.

    2001-01-01

    Examines released metric items from the Third International Mathematics and Science Study (TIMSS) and the 3rd and 4th grade results. Recommends refocusing instruction on the metric system to improve student performance in measurement. (KHR)

  2. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  3. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is derived from one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and the effect of the observation system's aperture size on metric performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency below the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, it can estimate the spot size under some conditions. Therefore, far-field spot monitoring and feedback can be implemented in laser focusing system applications to help the system optimize its focusing performance.

  4. Validating LiDAR Derived Estimates of Canopy Height, Structure and Fractional Cover in Riparian Areas: A Comparison of Leaf-on and Leaf-off LiDAR Data

    NASA Astrophysics Data System (ADS)

    Wasser, L. A.; Chasmer, L. E.; Taylor, A.; Day, R.

    2010-12-01

    Characterization of riparian buffers is integral to understanding the landscape scale impacts of disturbance on wildlife and aquatic ecosystems. Riparian buffers may be characterized using in situ plot sampling or via high resolution remote sensing. Field measurements are time-consuming and may not cover a broad range of ecosystem types. Further, spectral remote sensing methods introduce a compromise between spatial resolution (grain) and area extent. Airborne LiDAR can be used to continuously map and characterize riparian vegetation structure and composition due to the three-dimensional reflectance of laser pulses within and below the canopy, understory and at the ground surface. The distance between reflections (or ‘returns’) allows for detection of narrow buffer corridors at the landscape scale. There is a need to compare leaf-off and leaf-on surveyed LiDAR data with in situ measurements to assess accuracy in landscape scale analysis. These comparisons are particularly important considering increased availability of leaf-off surveyed LiDAR datasets. Given this increased availability, differences between leaf-on and leaf-off derived LiDAR metrics are largely unknown for riparian vegetation of varying composition and structure. This study compares the effectiveness of leaf-on and leaf-off LiDAR in characterizing riparian buffers of varying structure and composition against field measurements. Field measurements were used to validate LiDAR derived metrics. Vegetation height, canopy cover, density and overstory and understory species composition were recorded in 80 random plots of varying vegetation type, density and structure within a Pennsylvania watershed (-77.841, 40.818). Plot data were compared with LiDAR data collected during leaf-on and leaf-off conditions to determine 1) accuracy of LiDAR derived metrics compared to field measures and 2) differences between leaf-on and leaf-off LiDAR metrics. Results illustrate that differences exist between metrics derived from leaf-on and leaf-off surveyed LiDAR. There is greater variability between the two datasets within taller deciduous and mixed (conifer and deciduous) vegetation compared to shorter deciduous and mixed vegetation. Differences decrease as stand density increases for both mixed and deciduous forests. LiDAR derived canopy height is more sensitive to understory vegetation as stand density decreases, making measurement of understory vegetation in the field important in the validation process. Finally, while leaf-on LiDAR is often preferred for vegetation analysis, results suggest that leaf-off LiDAR may be sufficient to categorize vegetation into height classes to be used for landscape scale habitat models.

  5. Context and meter enhance long-range planning in music performance

    PubMed Central

    Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline

    2015-01-01

    Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550

  6. Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.; hide

    2011-01-01

    Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice for the best model depends on the characteristics of the signal one is interested in. Model performances vary also from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
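
    Of the five metrics, only the root-mean-square difference is named in the record; it has an unambiguous form, sketched here for one station's modelled and observed time series (the array inputs are assumed):

    ```python
    import numpy as np

    def rmse(model, obs):
        """Root-mean-square difference between modelled and observed
        ground magnetic field perturbations at one station."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        return np.sqrt(np.mean((model - obs) ** 2))

    print(rmse([10.0, -5.0, 3.0], [12.0, -4.0, 0.0]))
    ```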

  7. Molecular cooperativity and compatibility via full atomistic simulation

    NASA Astrophysics Data System (ADS)

    Kwan Yang, Kenny

    Civil engineering has customarily focused on problems from a large-scale perspective, encompassing structures such as bridges, dams, and infrastructure. However, present day challenges in conjunction with advances in nanotechnology have forced a re-focusing of expertise. The use of atomistic and molecular approaches to study material systems opens the door to significantly improved material properties. The understanding that material systems themselves are structures, where their assemblies can dictate design capacities and failure modes, makes this problem well suited for those who possess expertise in structural engineering. At the same time, a focus has been given to the performance metrics of materials at the nanoscale, including strength, toughness, and transport properties (e.g., electrical, thermal). Little effort has been made in the systematic characterization of system compatibility -- e.g., how to make disparate material building blocks behave in unison. This research attempts to develop bottom-up molecular-scale understanding of material behavior, with the global objective being the application of this understanding to material design/characterization at an ultimate functional scale. In particular, it addresses the subject of cooperativity at the nano-scale. This research aims to define the conditions that dictate when discrete molecules may behave as a single, functional unit, thereby facilitating homogenization and up-scaling approaches, setting bounds for assembly, and providing a transferable assessment tool across molecular systems. Following a macro-scale pattern, where the compatibility of deformation plays a vital role in structural design, novel geometrical cooperativity metrics based on the gyration tensor are derived with the intention of defining nano-cooperativity in a generalized way. The metrics objectively describe the general size, shape, and orientation of the structure. To validate the derived measures, a pair of ideal macromolecules, where the density of cross-linking dictates cooperativity, is used to gauge the effectiveness of the triumvirate of gyration metrics. The metrics are shown to identify the critical number of cross-links that allowed the pair to deform together. The next step involves looking at the cooperativity features of a real system. We investigate a representative collagen molecule (i.e., tropocollagen), where single point mutations are known to produce kinks that create local unfolding. The results indicate that the metrics are effective, serving as a validation of the cooperativity metrics in a palpable material system. Finally, a preliminary study on a carbon nanotube and collagen composite is proposed, with the long-term objective of understanding the interactions between them as a means to corroborate experimental efforts in reproducing a d-banded collagen fiber. The emerging need for more robust, resilient, and sustainable structures is serving as motivation to think beyond traditional design methods. The characterization of cooperativity is thus key in materiomics, an emerging field that focuses on developing a "nano-to-macro" synergistic platform, which provides the necessary tools and procedures to validate future structural models and other critical behavior in a holistic manner, from atoms to application.
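
    The dissertation's exact formulas are not reproduced in this abstract, but the gyration tensor yields the standard size/shape/orientation triumvirate it describes: radius of gyration, asphericity, and the principal axis. A sketch under those standard definitions:

    ```python
    import numpy as np

    def gyration_metrics(coords):
        """Size, shape and orientation descriptors from the gyration tensor.

        coords: (N, 3) atomic positions of one molecule (unit masses).
        """
        r = coords - coords.mean(axis=0)      # positions about the centroid
        S = r.T @ r / len(r)                  # gyration tensor (3 x 3)
        evals, evecs = np.linalg.eigh(S)      # eigenvalues, ascending order
        rg = np.sqrt(evals.sum())             # radius of gyration: size
        asphericity = evals[2] - 0.5 * (evals[0] + evals[1])  # shape
        principal_axis = evecs[:, 2]          # orientation
        return rg, asphericity, principal_axis

    rng = np.random.default_rng(0)
    print(gyration_metrics(rng.standard_normal((100, 3))))
    ```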

  8. A direct-gradient multivariate index of biotic condition

    USGS Publications Warehouse

    Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.

    2012-01-01

    Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.

  9. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.

  10. Greenroads : a sustainability performance metric for roadway design and construction.

    DOT National Transportation Integrated Search

    2009-11-01

    Greenroads is a performance metric for quantifying sustainable practices associated with roadway design and construction. Sustainability is defined as having seven key components: ecology, equity, economy, extent, expectations, experience and exposur...

  11. Performance metrics used by freight transport providers.

    DOT National Transportation Integrated Search

    2008-09-30

    The newly-established National Cooperative Freight Research Program (NCFRP) has allocated $300,000 in funding to a project entitled Performance Metrics for Freight Transportation (NCFRP 03). The project is scheduled for completion in September ...

  12. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper will investigate energy-efficiency for various real-world industrial computed-tomography reconstruction algorithms, both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.

  13. Comparison of Ordinal and Nominal Classification Trees to Predict Ordinal Expert-Based Occupational Exposure Estimates in a Case–Control Study

    PubMed Central

    Wheeler, David C.; Archer, Kellie J.; Burstyn, Igor; Yu, Kai; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla; Silverman, Debra T.; Friesen, Melissa C.

    2015-01-01

    Objectives: To evaluate occupational exposures in case–control studies, exposure assessors typically review each job individually to assign exposure estimates. This process lacks transparency and does not provide a mechanism for recreating the decision rules in other studies. In our previous work, nominal (unordered categorical) classification trees (CTs) generally successfully predicted expert-assessed ordinal exposure estimates (i.e. none, low, medium, high) derived from occupational questionnaire responses, but room for improvement remained. Our objective was to determine if using recently developed ordinal CTs would improve the performance of nominal trees in predicting ordinal occupational diesel exhaust exposure estimates in a case–control study. Methods: We used one nominal and four ordinal CT methods to predict expert-assessed probability, intensity, and frequency estimates of occupational diesel exhaust exposure (each categorized as none, low, medium, or high) derived from questionnaire responses for the 14983 jobs in the New England Bladder Cancer Study. To replicate the common use of a single tree, we applied each method to a single sample of 70% of the jobs, using 15% to test and 15% to validate each method. To characterize variability in performance, we conducted a resampling analysis that repeated the sample draws 100 times. We evaluated agreement between the tree predictions and expert estimates using Somers’ d, which measures differences in terms of ordinal association between predicted and observed scores and can be interpreted similarly to a correlation coefficient. Results: From the resampling analysis, compared with the nominal tree, an ordinal CT method that used a quadratic misclassification function and controlled tree size based on total misclassification cost had a slightly better predictive performance that was statistically significant for the frequency metric (Somers’ d: nominal tree = 0.61; ordinal tree = 0.63) and similar performance for the probability (nominal = 0.65; ordinal = 0.66) and intensity (nominal = 0.65; ordinal = 0.65) metrics. The best ordinal CT predicted fewer cases of large disagreement with the expert assessments (i.e. no exposure predicted for a job with high exposure and vice versa) compared with the nominal tree across all of the exposure metrics. For example, the percent of jobs with expert-assigned high intensity of exposure that the model predicted as no exposure was 29% for the nominal tree and 22% for the best ordinal tree. Conclusions: The overall agreements were similar across CT models; however, the use of ordinal models reduced the magnitude of the discrepancy when disagreements occurred. As the best performing model can vary by situation, researchers should consider evaluating multiple CT methods to maximize the predictive performance within their data. PMID:25433003
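
    Somers' d, the agreement statistic used above, is available directly in SciPy (version 1.7 or later); a toy example with hypothetical expert and predicted ordinal scores:

    ```python
    from scipy.stats import somersd

    # Ordinal exposure codes: 0=none, 1=low, 2=medium, 3=high (made up)
    expert    = [0, 0, 1, 2, 3, 1, 2, 3, 0, 2]
    predicted = [0, 1, 1, 2, 2, 1, 3, 3, 0, 2]

    # Interpreted like a correlation coefficient: 1 is perfect ordinal
    # agreement, 0 is no ordinal association.
    print(somersd(expert, predicted).statistic)
    ```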

  14. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
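
    The Shannon's diversity half of the study has a compact computational core: sample random points from the land-cover map, form class proportions, and apply H' = -sum(p_i ln p_i). A sketch of that estimator (the grid size and class count below are illustrative):

    ```python
    import numpy as np

    def shannon_from_points(landscape, n_points, seed=0):
        """Estimate Shannon's diversity of a categorical land-cover
        raster from randomly placed sample points."""
        rng = np.random.default_rng(seed)
        rows = rng.integers(0, landscape.shape[0], n_points)
        cols = rng.integers(0, landscape.shape[1], n_points)
        _, counts = np.unique(landscape[rows, cols], return_counts=True)
        p = counts / n_points
        return -(p * np.log(p)).sum()

    # 500 x 500 map with 7 land cover classes, 200 sample points
    landscape = np.random.default_rng(1).integers(0, 7, (500, 500))
    print(shannon_from_points(landscape, n_points=200))
    ```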

  15. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  16. On Rosen's theory of gravity and cosmology

    NASA Technical Reports Server (NTRS)

    Barnes, R. C.

    1980-01-01

    Formal similarities between general relativity and Rosen's bimetric theory of gravity were used to analyze various bimetric cosmologies. The following results were found: (1) Physically plausible model universes which have a flat static background metric, have a Robertson-Walker fundamental metric, and which allow co-moving coordinates do not exist in bimetric cosmology. (2) It is difficult to use the Robertson-Walker metric for both the background metric ($\gamma_{\mu\nu}$) and the fundamental metric tensor of Riemannian geometry ($g_{\mu\nu}$) and require that $g_{\mu\nu}$ and $\gamma_{\mu\nu}$ have different time dependences. (3) A consistency relation for using co-moving coordinates in bimetric cosmology was derived. (4) Certain spatially flat bimetric cosmologies of Babala were tested for the presence of particle horizons. (5) An analytic solution for Rosen's k = +1 model was found. (6) Rosen's singularity-free k = +1 model arises from what appears to be an arbitrary choice for the time-dependent part of $\gamma_{\mu\nu}$.

  17. Measuring strategic success.

    PubMed

    Gish, Ryan

    2002-08-01

    Strategic triggers and metrics help healthcare providers achieve financial success. Metrics help assess progress toward long-term goals. Triggers signal market changes requiring a change in strategy. All metrics may not move in concert. Organizations need to identify indicators, monitor performance.

  18. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
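
    The basic quantity behind the cell-based metrics described above is a count of gaze samples per screen cell; how to divide the screen is itself one of the paper's questions, so the 4x4 grid below is purely illustrative:

    ```python
    import numpy as np

    def gaze_cell_counts(gaze_xy, screen_wh, grid=(4, 4)):
        """Count gaze samples falling in each cell of a screen grid.

        gaze_xy:   (n, 2) gaze coordinates in pixels.
        screen_wh: (width, height) of the display in pixels.
        """
        w, h = screen_wh
        cols = np.clip((gaze_xy[:, 0] / w * grid[0]).astype(int),
                       0, grid[0] - 1)
        rows = np.clip((gaze_xy[:, 1] / h * grid[1]).astype(int),
                       0, grid[1] - 1)
        counts = np.zeros((grid[1], grid[0]), int)
        np.add.at(counts, (rows, cols), 1)
        return counts

    rng = np.random.default_rng(0)
    gaze = rng.uniform([0, 0], [1920, 1080], size=(1000, 2))
    print(gaze_cell_counts(gaze, (1920, 1080)))
    ```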

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozza, V.; Postiglione, A., E-mail: valboz@sa.infn.it, E-mail: postiglione@fis.uniroma3.it

    The metric outside an isolated object made up of ordinary matter is bound to be the classical Schwarzschild vacuum solution of General Relativity. Nevertheless, some solutions are known (e.g. Morris-Thorne wormholes) that do not match Schwarzschild asymptotically. On a phenomenological point of view, gravitational lensing in metrics falling as 1/r{sup q} has recently attracted great interest. In this work, we explore the conditions on the source matter for constructing static spherically symmetric metrics exhibiting an arbitrary power-law as Newtonian limit. For such space-times we also derive the expressions of gravitational redshift and force on probe masses, which, together with lightmore » deflection, can be used in astrophysical searches of non-Schwarzschild objects made up of exotic matter. Interestingly, we prove that even a minimally coupled scalar field with a power-law potential can support non-Schwarzschild metrics with arbitrary asymptotic behaviour.« less

  20. Linking Hydrologic Alteration to Biological Impairment in Urbanizing Streams of the Puget Lowland, Washington, USA

    PubMed Central

    DeGasperi, Curtis L; Berge, Hans B; Whiting, Kelly R; Burkey, Jeff J; Cassin, Jan L; Fuerstenberg, Robert R

    2009-01-01

    We used a retrospective approach to identify hydrologic metrics with the greatest potential for ecological relevance for use as resource management tools (i.e., hydrologic indicators) in rapidly urbanizing basins of the Puget Lowland. We proposed four criteria for identifying useful hydrologic indicators: (1) sensitive to urbanization consistent with expected hydrologic response, (2) demonstrate statistically significant trends in urbanizing basins (and not in undeveloped basins), (3) be correlated with measures of biological response to urbanization, and (4) be relatively insensitive to potentially confounding variables like basin area. Data utilized in the analysis included gauged flow and benthic macroinvertebrate data collected at 16 locations in 11 King County stream basins. Fifteen hydrologic metrics were calculated from daily average flow data and the Pacific Northwest Benthic Index of Biological Integrity (B-IBI) was used to represent the gradient of response of stream macroinvertebrates to urbanization. Urbanization was represented by percent Total Impervious Area (%TIA) and percent urban land cover (%Urban). We found eight hydrologic metrics that were significantly correlated with B-IBI scores (Low Pulse Count and Duration; High Pulse Count, Duration, and Range; Flow Reversals, TQmean, and R-B Index). Although there appeared to be a great deal of redundancy among these metrics with respect to their response to urbanization, only two of the metrics tested – High Pulse Count and High Pulse Range – best met all four criteria we established for selecting hydrologic indicators. The increase in these high pulse metrics with respect to urbanization is the result of an increase in winter high pulses and the occurrence of high pulse events during summer (increasing the frequency and range of high pulses), when practically none would have occurred prior to development. We performed an initial evaluation of the usefulness of our hydrologic indicators by calculating and comparing hydrologic metrics derived from continuous hydrologic simulations of selected basin management alternatives for Miller Creek, one of the most highly urbanized basins used in our study. We found that the preferred basin management alternative appeared to be effective in restoring some flow metrics close to simulated fully forested conditions (e.g., TQmean), but less effective in restoring other metrics such as High Pulse Count and Range. If future research continues to support our hypothesis that the flow regime, particularly High Pulse Count and Range, is an important control of biotic integrity in Puget Lowland streams, it would have significant implications for stormwater management. PMID:22457566
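
    Three of the metrics named above are simple functions of a daily discharge series. In the sketch below, the "high pulse" threshold of twice the median follows common indicators-of-hydrologic-alteration practice and is an assumption, not a value from the paper:

    ```python
    import numpy as np

    def hydrologic_metrics(q):
        """High Pulse Count, TQmean and the Richards-Baker (R-B) index
        from a daily discharge series q."""
        q = np.asarray(q, float)
        high = q > 2.0 * np.median(q)
        # High Pulse Count: distinct excursions above the threshold
        pulse_count = int(np.sum(np.diff(high.astype(int)) == 1) + high[0])
        # TQmean: fraction of days on which flow exceeds the mean
        tqmean = float(np.mean(q > q.mean()))
        # R-B flashiness index: sum of day-to-day changes over total flow
        rb_index = float(np.sum(np.abs(np.diff(q))) / np.sum(q))
        return pulse_count, tqmean, rb_index

    rng = np.random.default_rng(0)
    print(hydrologic_metrics(np.exp(rng.normal(2.0, 0.8, 365))))
    ```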

  1. Foul tip impact attenuation of baseball catcher masks using head impact metrics

    PubMed Central

    White, Terrance R.; Cutcliffe, Hattie C.; Shridharani, Jay K.; Wood, Garrett W.; Bass, Cameron R.

    2018-01-01

    Currently, no scientific consensus exists on the relative safety of catcher mask styles and materials. Due to differences in mass and material properties, the style and material of a catcher mask influence the impact metrics observed during simulated foul ball impacts. The catcher surrogate was a Hybrid III head and neck equipped with a six-degree-of-freedom sensor package to obtain linear accelerations and angular rates. Four mask styles were impacted using an air cannon for six 30 m/s and six 35 m/s impacts to the nasion. To quantify impact severity, the metrics peak linear acceleration, peak angular acceleration, Head Injury Criterion, Head Impact Power, and Gadd Severity Index were used. An analysis of covariance and Tukey's HSD test were conducted to compare the least squares means between masks for each head injury metric. For each injury metric, a P-value less than 0.05 was found, indicating a significant difference in mask performance. Tukey's HSD test found that, for each metric, the traditional-style titanium mask fell in the lowest performance category while the hockey style mask was in the highest performance category. Limitations of this study prevented mask testing performance from being directly correlated with mild traumatic brain injury. PMID:29856814
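
    Of the severity measures listed, the Head Injury Criterion has a simple closed form: the maximum over all windows [t1, t2] of (t2 - t1) times the mean acceleration (in g) over the window raised to the 2.5 power, usually with the window capped at 15 ms. A brute-force sketch; the sampling details are assumed, not from the paper:

    ```python
    import numpy as np

    def head_injury_criterion(accel_g, dt, max_window=0.015):
        """HIC from a resultant (non-negative) linear acceleration trace
        in g: max over [t1, t2] of (t2 - t1) * (mean acceleration)**2.5,
        window capped at 15 ms (HIC15). Brute force; fine for the short
        impact traces involved here."""
        n = len(accel_g)
        csum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))
        max_samples = int(max_window / dt)
        hic = 0.0
        for i in range(n):
            for j in range(i + 1, min(i + max_samples, n) + 1):
                t = (j - i) * dt
                hic = max(hic, t * ((csum[j] - csum[i]) / t) ** 2.5)
        return hic

    # Example: a 5 ms half-sine pulse peaking at 120 g, sampled at 10 kHz
    t = np.arange(0, 0.005, 1e-4)
    print(head_injury_criterion(120 * np.sin(np.pi * t / 0.005), dt=1e-4))
    ```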

  2. Proposed Performance-Based Metrics for the Future Funding of Graduate Medical Education: Starting the Conversation.

    PubMed

    Caverzagie, Kelly J; Lane, Susan W; Sharma, Niraj; Donnelly, John; Jaeger, Jeffrey R; Laird-Fick, Heather; Moriarty, John P; Moyer, Darilyn V; Wallach, Sara L; Wardrop, Richard M; Steinmann, Alwin F

    2017-12-12

    Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.

  3. The importance of metrics for evaluating scientific performance

    NASA Astrophysics Data System (ADS)

    Miyakawa, Tsuyoshi

    Evaluation of scientific performance is a major factor that determines the behavior of both individual researchers and the academic institutes to which they belong. Because the number of researchers heavily outweighs the number of available research posts, and competitive funding accounts for an ever-increasing proportion of research budgets, some objective indicators of research performance have gained recognition for increasing transparency and openness. It is common practice to use metrics and indices to evaluate a researcher's performance or the quality of their grant applications. Such measures include the number of publications, the number of times these papers are cited and, more recently, the h-index, which measures the number of highly cited papers the researcher has written. However, academic institutions and funding agencies in Japan have been rather slow to adopt such metrics. In this article, I will outline some of the currently available metrics, and discuss why we need to use such objective indicators of research performance more often in Japan. I will also discuss how to promote the use of metrics and what we should keep in mind when using them, as well as their potential impact on the research community in Japan.
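
    Of the metrics mentioned, the h-index is the only one with a self-contained definition: the largest h such that h of the researcher's papers each have at least h citations. A two-line computation:

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    print(h_index([12, 9, 7, 5, 3, 1]))  # 4
    ```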

  4. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, and domain dynamics. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.

  5. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    PubMed

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first generation entropy metrics, featured by the well known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and that noise and muscular artifacts are the most confounding factors. By contrast, there is wide variability with respect to initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
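
    SampEn, one of the strongest performers above, has a compact standard definition: -ln(A/B), where B counts pairs of length-m templates within tolerance r and A counts the same for length m+1, excluding self-matches. A minimal sketch with the common defaults m = 2 and r = 0.2 times the signal SD; the paper optimises these parameters, so the values here are assumptions:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample Entropy of a 1-D signal: -ln(A/B) with Chebyshev
        distance, tolerance r * std(x), and self-matches excluded."""
        x = np.asarray(x, float)
        tol = r * x.std()

        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += int(np.sum(d <= tol))
            return count

        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b)

    rng = np.random.default_rng(0)
    print(sample_entropy(rng.standard_normal(500)))
    ```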

  6. Incorporating Canopy Cover for Airborne-Derived Assessments of Forest Biomass in the Tropical Forests of Cambodia

    PubMed Central

    Singh, Minerva; Evans, Damian; Coomes, David A.; Friess, Daniel A.; Suy Tan, Boun; Samean Nin, Chan

    2016-01-01

    This research examines the role of canopy cover in influencing above ground biomass (AGB) dynamics of an open canopied forest and evaluates the efficacy of individual-based and plot-scale height metrics in predicting AGB variation in the tropical forests of Angkor Thom, Cambodia. The AGB was modeled by including canopy cover from aerial imagery alongside two different canopy vertical height metrics derived from LiDAR: the plot average of the maximum tree height (Max_CH) of individual trees, and the top-of-canopy height (TCH). Two different statistical approaches, log-log ordinary least squares (OLS) and support vector regression (SVR), were used to model AGB variation in the study area. Ten different AGB models were developed using different combinations of airborne predictor variables. It was discovered that the inclusion of canopy cover estimates considerably improved the performance of AGB models for our study area. The most robust model was the log-log OLS model comprising canopy cover only (r = 0.87; RMSE = 42.8 Mg/ha). Other models that approximated field AGB closely included both Max_CH and canopy cover (r = 0.86, RMSE = 44.2 Mg/ha for SVR; and r = 0.84, RMSE = 47.7 Mg/ha for log-log OLS). Hence, canopy cover should be included when modeling the AGB of open-canopied tropical forests. PMID:27176218
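
    The best model above is a log-log OLS fit of AGB on canopy cover; a sketch of that model form with made-up plot data (the published coefficients are not in the record, and the simple exp back-transform omits any bias-correction factor):

    ```python
    import numpy as np

    def fit_loglog_agb(cover, agb):
        """Fit ln(AGB) = b0 + b1 ln(cover) by OLS; return a predictor
        on the original Mg/ha scale."""
        X = np.column_stack([np.ones(len(cover)), np.log(cover)])
        b, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
        return lambda c: np.exp(b[0] + b[1] * np.log(c))

    cover = np.array([30.0, 45.0, 60.0, 75.0, 90.0])    # % canopy cover
    agb = np.array([60.0, 110.0, 170.0, 240.0, 330.0])  # Mg/ha (made up)
    predict = fit_loglog_agb(cover, agb)
    print(predict(np.array([50.0, 80.0])))
    ```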

  8. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them, and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and the error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates from flux towers provide information at the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET products on four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of the participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated to enhance the derived product.
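
    The "analytically optimal linear combination" can be illustrated with the classic constrained least-squares weighting, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), where Σ is the error covariance of the products against site observations; this sketch omits the bias correction and cross-validation machinery of the actual method, and all data are synthetic.

```python
# Optimal sum-to-one weights from the products' error covariance against obs.
import numpy as np

def optimal_weights(products, obs):
    """products: (k, n) array of k ET products at n site-months; obs: (n,)."""
    errors = products - obs               # per-product error series
    sigma = np.cov(errors)                # k x k error covariance (cross terms too)
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / (ones @ w)                 # normalize so the weights sum to 1

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 1.5, 200)                               # "tower" ET (mm/day)
prods = truth + rng.normal(0.0, [[0.3], [0.6], [0.9]], (3, 200))

w = optimal_weights(prods, truth)
rmse = np.sqrt(np.mean((w @ prods - truth) ** 2))
print("weights:", np.round(w, 3), "combined RMSE: %.3f" % rmse)
```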

  9. Guidelines for evaluating performance of oyster habitat restoration

    USGS Publications Warehouse

    Baggett, Lesley P.; Powers, Sean P.; Brumbaugh, Robert D.; Coen, Loren D.; DeAngelis, Bryan M.; Greene, Jennifer K.; Hancock, Boze T.; Morlock, Summer M.; Allen, Brian L.; Breitburg, Denise L.; Bushek, David; Grabowski, Jonathan H.; Grizzle, Raymond E.; Grosholz, Edwin D.; LaPeyre, Megan K.; Luckenbach, Mark W.; McGraw, Kay A.; Piehler, Michael F.; Westby, Stephanie R.; zu Ermgassen, Philine S. E.

    2015-01-01

    Restoration of degraded ecosystems is an important societal goal, yet inadequate monitoring and the absence of clear performance metrics are common criticisms of many habitat restoration projects. Funding limitations can prevent adequate monitoring, but we suggest that the lack of accepted metrics to address the diversity of restoration objectives also presents a serious challenge to the monitoring of restoration projects. A working group with experience in designing and monitoring oyster reef projects was used to develop standardized monitoring metrics, units, and performance criteria that would allow for comparison among restoration sites and projects of various construction types. A set of four universal metrics (reef areal dimensions, reef height, oyster density, and oyster size–frequency distribution) and a set of three universal environmental variables (water temperature, salinity, and dissolved oxygen) are recommended to be monitored for all oyster habitat restoration projects regardless of their goal(s). In addition, restoration goal-based metrics specific to four commonly cited ecosystem service-based restoration goals are recommended, along with an optional set of seven supplemental ancillary metrics that could provide information useful to the interpretation of prerestoration and postrestoration monitoring data. Widespread adoption of a common set of metrics with standardized techniques and units to assess well-defined goals not only allows practitioners to gauge the performance of their own projects but also allows for comparison among projects, which is both essential to the advancement of the field of oyster restoration and can provide new knowledge about the structure and ecological function of oyster reef ecosystems.

  10. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  11. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper and may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate this effort, a team consisting of customers and Engineering staff members was chartered to assist in the development of the metrics system and to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different from the development of any other type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  12. The Development of New Composite Metrics for the Comprehensive Analytic and Visual Assessment of Hypoglycemia Using the Hypo-Triad.

    PubMed

    Thomas, Andreas; Shin, John; Jiang, Boyi; McMahon, Chantal; Kolassa, Ralf; Vigersky, Robert A

    2018-01-01

    Quantifying hypoglycemia has traditionally been limited to using the frequency of hypoglycemic events during a given time interval using data from blood glucose (BG) testing. However, continuous glucose monitoring (CGM) captures three parameters (a "Hypo-Triad") unavailable with BG monitoring that can be used to better characterize hypoglycemia: area under the curve (AUC), time (duration of hypoglycemia), and frequency of daily episodes below a specified threshold. We developed two new analytic metrics that enhance the traditional Hypo-Triad of CGM-derived data: the intensity of hypoglycemia (IntHypo) and the overall hypoglycemic environment, called the "hypoglycemia risk volume" (HypoRV). We reanalyzed the CGM data from the ASPIRE In-Home study, a randomized, controlled trial of a sensor-integrated pump system with a low glucose threshold suspend feature (SIP+TS), using these new metrics and compared them to standard metrics of hypoglycemia. IntHypo and HypoRV provide additional insights into the benefit of a SIP+TS system on glycemic exposure when compared to the standard reporting methods. In addition, the visual display of these parameters provides a unique and intuitive way to understand the impact of a diabetes intervention on a cohort of subjects as well as on individual patients. IntHypo and HypoRV are new and enhanced ways of analyzing CGM-derived data in diabetes intervention studies that could lead to new insights in diabetes management. They require validation using existing, ongoing, or planned studies to determine whether they are superior to existing metrics.

  13. Averaged ratio between complementary profiles for evaluating shape distortions of map projections and spherical hierarchical tessellations

    NASA Astrophysics Data System (ADS)

    Yan, Jin; Song, Xiao; Gong, Guanghong

    2016-02-01

    We describe a metric named the averaged ratio between complementary profiles, which represents the distortion of map projections and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is numerically equivalent to both the scale component and the angular deformation component of the Tissot indicatrix, and remains valid where the Tissot indicatrix and its derived differential calculus cannot be applied, namely for non-map-projection-based tessellations for which no mathematical formulae exist (e.g., direct spherical subdivisions); (2) exhibits simplicity (neither differential nor integral calculus) and uniformity in the form of its calculations; (3) requires low computational cost while maintaining high correlation with the results of differential calculus; (4) is a quasi-invariant under rotations; and (5) reflects the distortions of map projections, the distortion of spherical cells, and the associated distortions of texels. As an indicator for quantitative evaluation, we investigated typical spherical tessellation methods, some variants of those methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions, involving commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity implied that (1) Uniform Spherical Grids and its variant show good qualities in both area uniformity and shape regularity, and (2) Crusta, the Unicube map, and a variant of the Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.

  14. Test-retest reliability of high angular resolution diffusion imaging acquisition within medial temporal lobe connections assessed via tract based spatial statistics, probabilistic tractography and a novel graph theory metric.

    PubMed

    Kuhn, T; Gullett, J M; Nguyen, P; Boutzoukas, A E; Ford, A; Colon-Perez, L M; Triplett, W; Carney, P R; Mareci, T H; Price, C C; Bauer, R M

    2016-06-01

    This study examined the reliability of high angular resolution diffusion imaging (HARDI) data collected on a single individual across several sessions using the same scanner. HARDI data were acquired for one healthy adult male at the same time of day on ten separate days across a one-month period. Environmental factors (e.g. temperature) were controlled across scanning sessions. Tract Based Spatial Statistics (TBSS) was used to assess session-to-session variability in measures of diffusion, fractional anisotropy (FA) and mean diffusivity (MD). To address reliability within specific structures of the medial temporal lobe (MTL; the focus of an ongoing investigation), probabilistic tractography segmented the Entorhinal cortex (ERc) based on connections with the Hippocampus (HC), Perirhinal (PRc) and Parahippocampal (PHc) cortices. Streamline tractography generated edge weight (EW) metrics for the aforementioned ERc connections and, as comparison regions, connections between left and right rostral and caudal anterior cingulate cortex (ACC). Coefficients of variation (CoV) were derived for the surface area and volumes of these ERc connectivity-defined regions (CDR) and for EW across all ten scans, expecting that scan-to-scan reliability would yield low CoVs. TBSS revealed no significant variation in FA or MD across scanning sessions. Probabilistic tractography successfully reproduced histologically-verified adjacent medial temporal lobe circuits. Tractography-derived metrics displayed larger ranges of scan-to-scan variability. Connections involving HC displayed greater variability than metrics of connection between other investigated regions. By confirming the test-retest reliability of HARDI data acquisition, support for the validity of significant results derived from diffusion data can be obtained.

  15. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the growing availability of spatial observations or with the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
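
    A compact sketch of the SPAEF formula as described above: correlation (α), ratio of coefficients of variation (β), and histogram overlap of z-scored fields (γ), combined as a distance from the ideal point (1, 1, 1). The fields below are synthetic.

```python
# SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2)
import numpy as np

def spaef(sim, obs, bins=100):
    sim, obs = np.ravel(sim), np.ravel(obs)
    alpha = np.corrcoef(sim, obs)[0, 1]
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    # Histogram intersection of z-scored patterns (insensitive to bias/unit).
    zs = (sim - sim.mean()) / sim.std()
    zo = (obs - obs.mean()) / obs.std()
    lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    gamma = np.minimum(hs, ho).sum() / ho.sum()
    return 1 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(3)
obs = rng.gamma(3.0, 1.0, (50, 50))                 # e.g. a soil-moisture map
sim = obs * 1.1 + rng.normal(0, 0.3, obs.shape)     # biased, noisy simulation
print(round(spaef(sim, obs), 3))                    # 1 means a perfect match
```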

  16. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    PubMed

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples, we applied a common-metrics approach based on item response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with a shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between the different estimation methods were statistically significant, they were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, via a Web application (http://www.common-metrics.org), and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams

    USGS Publications Warehouse

    Black, R.W.; Moran, P.W.; Frankforter, J.D.

    2011-01-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. © 2010 The Author(s).

  19. Channel MAC Protocol for Opportunistic Communication in Ad Hoc Wireless Networks

    NASA Astrophysics Data System (ADS)

    Ashraf, Manzur; Jayasuriya, Aruna; Perreau, Sylvie

    2008-12-01

    Despite significant research effort, the performance of distributed medium access control methods has failed to meet theoretical expectations. This paper proposes a protocol named "Channel MAC" performing a fully distributed medium access control based on opportunistic communication principles. In this protocol, nodes access the channel when the channel quality increases beyond a threshold, while neighbouring nodes are deemed to be silent. Once a node starts transmitting, it will keep transmitting until the channel becomes "bad." We derive an analytical throughput limit for Channel MAC in a shared multiple access environment. Furthermore, three performance metrics of Channel MAC—throughput, fairness, and delay—are analysed in single hop and multihop scenarios using NS2 simulations. The simulation results show throughput performance improvement of up to 130% with Channel MAC over IEEE 802.11. We also show that the severe resource starvation problem (unfairness) of IEEE 802.11 in some network scenarios is reduced by the Channel MAC mechanism.
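
    The access rule described above reduces to a threshold policy on channel quality. The toy simulation below illustrates it for two nodes; the Rayleigh-fading model, threshold value, and random tie-break (standing in for the protocol's contention handling) are illustrative assumptions, not details taken from the paper.

```python
# Toy Channel MAC: transmit once quality crosses a threshold, release when "bad".
import numpy as np

rng = np.random.default_rng(4)
T, rho, threshold = 10_000, 0.98, 1.2
n_nodes = 2

h = rng.standard_normal(n_nodes) + 1j * rng.standard_normal(n_nodes)
slots = np.zeros(n_nodes)
holder = None                          # node currently holding the channel

for t in range(T):
    # AR(1)-correlated complex Gaussian fading; |h|^2 is the channel quality.
    noise = rng.standard_normal(n_nodes) + 1j * rng.standard_normal(n_nodes)
    h = rho * h + np.sqrt(1 - rho**2) * noise
    quality = np.abs(h) ** 2

    if holder is not None and quality[holder] < threshold:
        holder = None                  # channel turned "bad": stop transmitting
    if holder is None:
        ready = np.flatnonzero(quality >= threshold)
        if ready.size:                 # contention resolved by a random pick here
            holder = ready[rng.integers(ready.size)]
    if holder is not None:
        slots[holder] += 1

print("fraction of slots used per node:", slots / T)
```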

  20. GPS Device Testing Based on User Performance Metrics

    DOT National Transportation Integrated Search

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  1. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE PAGES

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.; ...

    2017-09-20

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance. Therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is significantly important. We investigate an application's locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic behavior of parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction, and that, on average, a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization, as well as conventional performance metrics, to illustrate a holistic understanding of cache behavior. To facilitate this research, we built a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37-42, 2011). Finally, our results suggest that variable cache line sizes can result in better performance and can also conserve power.
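
    The line-utilization idea (fraction of a cache line's bytes touched before eviction) can be reproduced in miniature with a direct-mapped cache model; the geometry and access pattern below are illustrative, not the paper's simulator.

```python
# Miniature line-utilization measurement with a direct-mapped cache model.
LINE, SETS = 64, 256                   # 64-byte lines, 16 KiB direct-mapped

resident = {}                          # set index -> (tag, set of touched offsets)
utilizations = []                      # per-eviction fraction of bytes touched

def access(addr, size=4):
    for a in range(addr, addr + size):
        s = (a // LINE) % SETS
        tag, off = a // (LINE * SETS), a % LINE
        if s in resident and resident[s][0] == tag:
            resident[s][1].add(off)    # hit: mark this byte as used
        else:
            if s in resident:          # miss with eviction: record utilization
                utilizations.append(len(resident[s][1]) / LINE)
            resident[s] = (tag, {off})

for rep in range(4):                   # strided sweep: touches 4 bytes per line
    for addr in range(0, 1 << 20, 256):
        access(addr)

print("mean bytes touched per evicted line: %.1f%%"
      % (100 * sum(utilizations) / len(utilizations)))
```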

  3. Covariant electrodynamics in linear media: Optical metric

    NASA Astrophysics Data System (ADS)

    Thompson, Robert T.

    2018-03-01

    While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.

  4. Analytic variance estimates of Swank and Fano factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
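
    For orientation, both metrics can be estimated directly from samples of a detector's output distribution: the Swank factor as a ratio of moments (with normalized moments, I = mean² / E[x²]) and the Fano factor as variance / mean. A brute-force bootstrap stands in here for the paper's analytic variance estimates, and the Poisson pulse model is a toy.

```python
# Sample-based Swank and Fano factor estimates with a bootstrap spread check.
import numpy as np

def swank(x):
    return np.mean(x) ** 2 / np.mean(x ** 2)

def fano(x):
    return np.var(x) / np.mean(x)

rng = np.random.default_rng(5)
pulses = rng.poisson(400, 20_000).astype(float)       # toy detector outputs

boot = rng.choice(pulses, size=(200, pulses.size))    # bootstrap resamples
for name, metric in (("Swank", swank), ("Fano", fano)):
    est = np.array([metric(b) for b in boot])
    print(f"{name}: {metric(pulses):.4f} (bootstrap CoV {est.std() / est.mean():.2e})")
```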

  5. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.

  6. Theoretical peak performance and optical constraints for the deflection of an S-type asteroid with a continuous wave laser

    NASA Astrophysics Data System (ADS)

    Thiry, Nicolas; Vasile, Massimiliano

    2017-03-01

    This paper presents a theoretical model to evaluate the thrust generated by a continuous wave (CW) laser, operating at moderate intensity (<100 GW/m2), ablating an S-type asteroid made of Forsterite. The key metric to assess the performance of the laser system is the thrust coupling coefficient which is given by the ratio between thrust and associated optical power. Three different models are developed in the paper: a one dimensional steady state model, a full 3D steady state model and a one dimensional model accounting for transient effects resulting from the tumbling motion of the asteroid. The results obtained with these models are used to derive key requirements and constraints on the laser system that allow approaching the ideal performance in a realistic case.

  7. Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. Research Report. ETS RR-17-20

    ERIC Educational Resources Information Center

    Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David

    2017-01-01

    Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among other metrics, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…

  8. JPDO Portfolio Analysis of NextGen

    DTIC Science & Technology

    2009-09-01

    runways. C. Metrics: The JPDO Interagency Portfolio & Systems Analysis (IPSA) division continues to coordinate, develop, and refine the metrics and...targets associated with the NextGen initiatives with the partner agencies & stakeholder communities. IPSA has formulated a set of top-level metrics as...metrics are calculated from system performance measures that constitute outputs of the...

  9. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Boring; Roger Lew; Thomas Ulrich

    2014-03-01

    As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what constitutes satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  10. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and distinct features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
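
    Metric 1) can be sketched as a cos(latitude)-weighted mean of per-cell gap times over a latitude/longitude grid, the weight being proportional to cell surface area; the gap values below are random placeholders, not TOAST output.

```python
# Area-weighted average gap time on a 5-degree lat/lon grid.
import numpy as np

rng = np.random.default_rng(9)
lat = np.arange(-87.5, 90.0, 5.0)                  # cell-center latitudes (deg)
lon = np.arange(-177.5, 180.0, 5.0)
gap = rng.gamma(2.0, 1.5, (lat.size, lon.size))    # max gap time per cell (h)

weights = np.cos(np.deg2rad(lat))[:, None] * np.ones(lon.size)
awag = (gap * weights).sum() / weights.sum()

print(f"area-weighted average gap time: {awag:.2f} h")
print(f"global maximum of local maximum gap time: {gap.max():.2f} h")
```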

  11. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage, and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and are therefore often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
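
    A sketch of the recommended deviation-based metrics, computed in log10 space (a common convention for chlorophyll-a and an assumption here, not a detail taken from the paper) and reported as multiplicative quantities, alongside r² for contrast. The data are synthetic.

```python
# Multiplicative bias and MAE in log10 space for a satellite-vs-in-situ match-up.
import numpy as np

def log_metrics(sat, insitu):
    e = np.log10(sat) - np.log10(insitu)
    bias = 10 ** np.mean(e)                 # multiplicative bias; 1.0 = unbiased
    mae = 10 ** np.mean(np.abs(e))          # multiplicative mean absolute error
    r2 = np.corrcoef(np.log10(sat), np.log10(insitu))[0, 1] ** 2
    return bias, mae, r2

rng = np.random.default_rng(6)
truth = 10 ** rng.normal(-0.5, 0.6, 300)            # lognormal chl-a (mg m-3)
sat = truth * 10 ** rng.normal(0.05, 0.2, 300)      # satellite with slight bias
sat[::50] *= 8                                      # a few gross outliers

print("bias = %.2f, MAE = %.2f, r2 = %.2f" % log_metrics(sat, truth))
```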

  12. The role of spring and autumn phenological switches on spatiotemporal variation in temperate and boreal forest C balance: A FLUXNET synthesis

    NASA Astrophysics Data System (ADS)

    Richardson, A. D.; Reichstein, M.; Piao, S.; Ciais, P.; Luyssaert, S.; Stockli, R.; Friedl, M.; Gobron, N.; FLUXNET Site PIs

    2009-04-01

    In temperate and boreal ecosystems, phenological transitions (particularly the timing of spring onset and autumn senescence) are thought to represent a major control on spatial and temporal variation in forest carbon sequestration. To investigate these patterns, we analyzed 153 site-years of data from the FLUXNET ‘La Thuile' database. Eddy covariance measurements of surface-atmosphere exchanges of carbon and water from 21 research sites at latitudes from 36°N to 67°N were used in the synthesis. We defined a range of phenological indicators based on the first (spring) and last (autumn) dates of (1) C source/sink transitions (‘carbon uptake period'); (2) measurable photosynthetic uptake (‘physiologically active period'); (3) relative thresholds for latent heat (evapotranspiration) flux; (4) phenological thresholds derived from a range of remote sensing products (JRC fAPAR, MOD12Q2, and the PROGNOSTIC model with MODIS data assimilation); and (5) a climatological metric based on the date where soil temperature equals mean annual air temperature. We then tested whether site-level flux anomalies were significantly correlated with phenological anomalies across these metrics, and whether the slopes of these relationships (representing the sensitivity to phenological variation) differed between deciduous broadleaf (DBF) and evergreen needleleaf (ENF) forests. Within sites, interannual variation in most phenological metrics was about 5-10 d, compared to 10-30 d across sites. Both spatial and temporal phenological variation were consistently larger at ENF, compared to DBF, sites. Averaged across metrics, phenological variability was roughly comparable in spring and autumn, both across (17 d) and within (9 d) sites. However, patterns of interannual variation in fluxes were less well explained by the derived phenological metrics than were patterns of spatial variation in fluxes. Also, the observed pattern strongly depended on the metric used, with flux-derived metrics generally explaining more, and remote sensing-derived metrics generally explaining less, of the variation in flux anomalies. We found that GPP (gross primary productivity) was consistently more sensitive (both in terms of magnitude and statistical significance; ≈3 g C m-2 d-1 for DBF and ≈2 g C m-2 d-1 for ENF) to phenology than was Reco (ecosystem respiration), which meant that NEP (net ecosystem productivity) tended to be increased both by earlier springs and later autumns. Without exception, when the difference between DBF and ENF in the sensitivity to phenological anomalies was statistically significant, DBF sensitivity was always larger in absolute magnitude than ENF sensitivity. Phenology explained a much larger fraction of the variation in fluxes across sites compared to within sites. Across sites, the rate of increase in GPP with an "extra" day in spring (≈10 g C m-2 d-1) was much larger than in autumn (≈3 g C m-2 d-1). Furthermore, a one-day increase in growing season length across sites increased annual NEP by just ≈2 g C m-2 d-1; this resulted from an increase in GPP of ≈6 g C m-2 d-1 being offset by an increase in Reco of ≈4 g C m-2 d-1. In general, there was no statistically significant difference between DBF and ENF in the sensitivity to spatial variation in phenology for either NEP or the component fluxes GPP and Reco. In relation to both within- and across-site variation in phenology and fluxes, the results obtained tended to depend on the phenological metric used, i.e. on the definition of the "start" and "end" of the growing season, emphasizing the need for improved understanding of the relationships between these different metrics and ecosystem processes. Furthermore, the differences in flux-phenology relationships in the context of spatial and temporal variation in phenology raise questions about using results from either short-term or space-for-time studies to anticipate responses to future climate change.

  13. Clustered-dot halftoning with direct binary search.

    PubMed

    Goyal, Puneet; Gupta, Madhur; Staelin, Carl; Fischer, Mani; Shacham, Omri; Allebach, Jan P

    2013-02-01

    In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.

  14. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method adopting a landscape-metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics was employed, including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index. The model takes advantage of three decision rules: neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while the class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial patterns of the ground-truth and simulated layers, there was considerable inconsistency between the simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of the observed and simulated layers in a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen the modeler's insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.

  15. Feasibility of Intravoxel Incoherent Motion for Differentiating Benign and Malignant Thyroid Nodules.

    PubMed

    Tan, Hui; Chen, Jun; Zhao, Yi Ling; Liu, Jin Huan; Zhang, Liang; Liu, Chang Sheng; Huang, Dongjie

    2018-06-13

    This study aimed to preliminarily investigate the feasibility of intravoxel incoherent motion (IVIM) theory in the differential diagnosis of benign and malignant thyroid nodules. Forty-five patients with 56 confirmed thyroid nodules underwent preoperative routine magnetic resonance imaging and IVIM diffusion-weighted imaging. The histopathologic diagnosis was confirmed by surgery. The apparent diffusion coefficient (ADC), perfusion fraction f, diffusivity D, and pseudo-diffusivity D* were quantified. Independent-samples t tests of the IVIM-derived metrics were conducted between benign and malignant nodules. Receiver-operating characteristic analyses were performed to determine the optimal thresholds, as well as the sensitivity and specificity, for differentiation. Significant intergroup differences were observed in ADC, D, D*, and f (p < 0.001). Malignant tumors featured significantly lower ADC, D, and D* values and a higher f value than benign nodules. ADC, D, and D* could distinguish benign from malignant thyroid nodules, and the parameter f differentiated malignant tumors from benign nodules. The areas under the curve for ADC, D, and D* were 0.784 (p = 0.001), 0.795 (p = 0.001), and 0.850 (p < 0.001), respectively, of which the area under the curve for the f value was the maximum for identifying malignant from benign nodules, at 0.841 (p < 0.001). This study suggested that ADC and the IVIM-derived metrics D, D*, and f could potentially serve as noninvasive predictors for the preoperative differentiation of thyroid nodules, with the f value performing best in identifying malignant from benign nodules among these parameters. Copyright © 2018 Academic Radiology. Published by Elsevier Inc. All rights reserved.
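
    The IVIM signal model, S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), and a common two-step ("segmented") fit can be sketched as follows; the b-values, tissue parameters, and the b ≥ 200 s/mm² cutoff are illustrative choices, not the study's protocol.

```python
# IVIM bi-exponential model with a segmented fit: D first, then f and D*.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, Dstar, D):
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800.0])   # s/mm^2
rng = np.random.default_rng(7)
signal = ivim(b, f=0.12, Dstar=0.02, D=0.0012) + rng.normal(0, 0.005, b.size)

# Step 1: mono-exponential fit where the perfusion component has decayed.
hi = b >= 200
D_est = -np.polyfit(b[hi], np.log(signal[hi]), 1)[0]

# Step 2: with D fixed, fit perfusion fraction f and pseudo-diffusivity D*.
(f_est, Dstar_est), _ = curve_fit(
    lambda b, f, Ds: ivim(b, f, Ds, D_est), b, signal,
    p0=[0.1, 0.01], bounds=([0.0, 0.003], [0.5, 0.1]))

print(f"f = {f_est:.3f}, D* = {Dstar_est:.4f}, D = {D_est:.5f} mm^2/s")
```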

  16. Assessment of in vivo microstructure alterations in gray matter using DKI in Internet gaming addiction.

    PubMed

    Sun, Yawen; Sun, Jinhua; Zhou, Yan; Ding, Weina; Chen, Xue; Zhuang, Zhiguo; Xu, Jianrong; Du, Yasong

    2014-10-24

    The aim of the current study was to investigate the utility of diffusional kurtosis imaging (DKI) in the detection of gray matter (GM) alterations in people suffering from Internet Gaming Addiction (IGA). DKI was applied to 18 subjects with IGA and to 21 healthy controls (HC). Whole-brain voxel-based analyses were performed with the following derived parameters: mean kurtosis (MK), radial kurtosis (K⊥), and axial kurtosis (K//). A significance threshold was set at P < 0.05, AlphaSim corrected. Pearson's correlation was performed to investigate the correlations between the Chen Internet Addiction Scale (CIAS) and the DKI-derived metrics of regions that differed between groups. Additionally, we used voxel-based morphometry (VBM) to detect GM-volume differences between the two groups. Compared with the HC group, the IGA group demonstrated diffusional kurtosis parameters that were significantly lower in the GM of the right anterolateral cerebellum, right inferior and superior temporal gyri, right supplementary motor area, middle occipital gyrus, right precuneus, postcentral gyrus, right inferior frontal gyrus, left lateral lingual gyrus, left paracentral lobule, left anterior cingulate cortex, and median cingulate cortex. The bilateral fusiform gyrus, insula, posterior cingulate cortex (PCC), and thalamus also exhibited lower diffusional kurtosis in the IGA group. MK in the left PCC and K⊥ in the right PCC were positively correlated with CIAS scores. VBM showed that IGA subjects had higher GM volume in the right inferior and middle temporal gyri and right parahippocampal gyrus, and lower GM volume in the left precentral gyrus. The lower diffusional kurtosis parameters in IGA suggest multiple differences in brain microstructure, which may contribute to the underlying pathophysiology of IGA. DKI may provide sensitive imaging biomarkers for assessing IGA severity.

  17. Metrics for assessing the performance of morphodynamic models of braided rivers at event and reach scales

    NASA Astrophysics Data System (ADS)

    Williams, Richard; Measures, Richard; Hicks, Murray; Brasington, James

    2017-04-01

    Advances in geomatics technologies have transformed the monitoring of reach-scale (100-101 km) river morphodynamics. Hyperscale Digital Elevation Models (DEMs) can now be acquired at temporal intervals that are commensurate with the frequencies of high-flow events that force morphological change. The low vertical errors associated with such DEMs enable DEMs of Difference (DoDs) to be generated to quantify patterns of erosion and deposition, and derive sediment budgets using the morphological approach. In parallel with reach-scale observational advances, high-resolution, two-dimensional, physics-based numerical morphodynamic models are now computationally feasible for unsteady, reach-scale simulations. In light of this observational and predictive progress, there is a need to identify appropriate metrics that can be extracted from DEMs and DoDs to assess model performance. Nowhere is this more pertinent than in braided river environments, where numerous mobile channels that intertwine around mid-channel bars result in complex patterns of erosion and deposition, thus making model assessment particularly challenging. This paper identifies and evaluates a range of morphological and morphological-change metrics that can be used to assess predictions of braided river morphodynamics at the timescale of single storm events. A depth-averaged, mixed-grainsize Delft3D morphodynamic model was used to simulate morphological change during four discrete high-flow events, ranging from 91 to 403 m3 s-1, along a 2.5 x 0.7 km reach of the braided, gravel-bed Rees River, New Zealand. Pre- and post-event topographic surveys, using a fusion of Terrestrial Laser Scanning and optical-empirical bathymetric mapping, were used to produce 0.5 m resolution DEMs and DoDs. The pre- and post-event DEMs for a moderate (227 m3 s-1) high-flow event were used to calibrate the model. DEMs and DoDs from the other three high-flow events were used for model assessment using two approaches. First, "morphological" metrics were applied to compare observed and predicted post-event DEMs. These metrics include measures of confluence and bifurcation node density, bar shape, braiding intensity, and topographic comparisons using a form of the Brier Skill Score and cumulative frequency distributions of rugosity. Second, "morphological change" metrics were used to compare observed and predicted morphological change. These metrics included the extent of the morphologically active area, pairwise comparisons of morphological change (using kappa and fuzzy kappa statistics), and comparisons between vertical morphological change magnitude and elevation distribution. Results indicate that those metrics that assess characteristic features of braiding, rather than making direct comparisons, are most useful for assessing reach-scale braided river morphodynamic models. Together, the metrics indicate that there was a general affinity between observed and predicted braided river morphodynamics, both during small and large magnitude high-flow events. These results thus demonstrate how high-resolution, reach-scale, natural experiment datasets can be used to assess the efficacy of morphological models in predicting realistic patterns of erosion and deposition. This lays the foundation for the development and assessment of decadal scale morphodynamic models and their use in adaptive river basin management.

  18. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
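
    A sketch of the three cited performance metrics for comparing predicted and subjective MOS: SROCC (monotonicity), PLCC (linearity), and RMSE (accuracy). The scores below are synthetic stand-ins for a subjective test.

```python
# SROCC, PLCC, and RMSE between subjective and predicted MOS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mos = rng.uniform(1, 5, 120)                          # subjective MOS (1-5)
pred = np.clip(mos + rng.normal(0, 0.4, 120), 1, 5)   # model-predicted MOS

srocc = stats.spearmanr(mos, pred)[0]
plcc = stats.pearsonr(mos, pred)[0]
rmse = np.sqrt(np.mean((mos - pred) ** 2))

print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}, RMSE = {rmse:.3f}")
```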

  19. Snow removal performance metrics : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This document is the final report for the Clear Roads project entitled Snow Removal Performance Metrics. The project team was led by researchers at Washington State University on behalf of Clear Roads, an ongoing pooled fund research effort focused o...

  20. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  1. Modified gravity (MOG), the speed of gravitational radiation and the event GW170817/GRB170817A

    NASA Astrophysics Data System (ADS)

    Green, M. A.; Moffat, J. W.; Toth, V. T.

    2018-05-01

    Modified gravity (MOG) is a covariant, relativistic, alternative gravitational theory whose field equations are derived from an action that supplements the spacetime metric tensor with vector and scalar fields. Both gravitational (spin 2) and electromagnetic waves travel on null geodesics of the theory's one metric. MOG satisfies the weak equivalence principle and is consistent with observations of the neutron star merger and gamma ray burster event GW170817/GRB170817A.

  2. Behavioral Economic Measures of Alcohol Reward Value as Problem Severity Indicators in College Students

    PubMed Central

    Skidmore, Jessica R.; Murphy, James G.; Martens, Matthew P.

    2014-01-01

    The aims of the current study were to examine the associations among behavioral economic measures of alcohol value derived from three distinct measurement approaches, and to evaluate their respective relations with traditional indicators of alcohol problem severity in college drinkers. Five behavioral economic metrics were derived from hypothetical demand curves that quantify reward value by plotting consumption and expenditures as a function of price, another metric measured proportional behavioral allocation and enjoyment related to alcohol versus other activities, and a final metric measured relative discretionary expenditures on alcohol. The sample included 207 heavy drinking college students (53% female) who were recruited through an on-campus health center or university courses. Factor analysis revealed that the alcohol valuation construct comprises two factors: one factor that reflects participants’ levels of alcohol price sensitivity (demand persistence), and a second factor that reflects participants’ maximum consumption and monetary and behavioral allocation towards alcohol (amplitude of demand). The demand persistence and behavioral allocation metrics demonstrated the strongest and most consistent multivariate relations with alcohol-related problems, even when controlling for other well-established predictors. The results suggest that behavioral economic indices of reward value show meaningful relations with alcohol problem severity in young adults. Despite the presence of some gender differences, these measures appear to be useful problem indicators for men and women. PMID:24749779

  3. Evaluation of Ion Mobility-Mass Spectrometry for Comparative Analysis of Monoclonal Antibodies

    NASA Astrophysics Data System (ADS)

    Ferguson, Carly N.; Gucinski-Ruth, Ashley C.

    2016-05-01

    Analytical techniques capable of detecting changes in structure are necessary to monitor the quality of monoclonal antibody drug products. Ion mobility mass spectrometry offers an advanced mode of characterization of protein higher order structure. In this work, we evaluated the reproducibility of ion mobility mass spectrometry measurements and mobiligrams, as well as the suitability of this approach to differentiate between and/or characterize different monoclonal antibody drug products. Four mobiligram-derived metrics were identified to be reproducible across a multi-day window of analysis. These metrics were further applied to comparative studies of monoclonal antibody drug products representing different IgG subclasses, manufacturers, and lots. These comparisons resulted in some differences, based on the four metrics derived from ion mobility mass spectrometry mobiligrams. The use of collision-induced unfolding resulted in more observed differences. Use of summed charge state datasets and the analysis of metrics beyond drift time allowed for a more comprehensive comparative study between different monoclonal antibody drug products. Ion mobility mass spectrometry enabled detection of differences between monoclonal antibodies with the same target protein but different production techniques, as well as products with different targets. These differences were not always detectable by traditional collision cross section studies. Ion mobility mass spectrometry, and the added separation capability of collision-induced unfolding, was highly reproducible and remains a promising technique for advanced analytical characterization of protein therapeutics.

  4. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.
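
    For readers unfamiliar with the statistical test used, a minimal sketch of a Wilcoxon matched-pairs signed-ranks comparison follows; the pre/post TRI scores are hypothetical, not the study data.

```python
# Sketch: Wilcoxon matched-pairs signed-ranks test on paired
# pre-/post-training scores, as used for the TRI comparison above.
# The score arrays are hypothetical, not the study data.
import numpy as np
from scipy.stats import wilcoxon

tri_pre = np.array([0.48, 0.52, 0.45, 0.55, 0.50, 0.49,
                    0.53, 0.47, 0.51, 0.46, 0.54, 0.50])
tri_post = np.array([0.62, 0.66, 0.60, 0.70, 0.63, 0.61,
                     0.68, 0.59, 0.65, 0.60, 0.69, 0.64])

stat, p_value = wilcoxon(tri_pre, tri_post)  # paired, non-parametric
print(f"W={stat:.1f}, p={p_value:.4f}")
```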

  5. Dual-window dual-bandwidth spectroscopic optical coherence tomography metric for qualitative scatterer size differentiation in tissues.

    PubMed

    Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng

    2012-09-01

    This study investigates the autocorrelation bandwidths of the dual-window (DW) optical coherence tomography (OCT) k-space scattering profiles of different-sized microspheres and their correlation with scatterer size. A dual-bandwidth spectroscopic metric, defined as the ratio of the 10% to 90% autocorrelation bandwidths, is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and reveals a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source's optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer sizes in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.

  6. Technical Interchange Meeting Guidelines Breakout

    NASA Technical Reports Server (NTRS)

    Fong, Rob

    2002-01-01

    Along with concept developers, the Systems Evaluation and Assessment (SEA) sub-element of VAMS will develop those scenarios and metrics required for testing the new concepts that reside within the System-Level Integrated Concepts (SLIC) sub-element in the VAMS project. These concepts will come from the NRA process, Space Act agreements, a university group, and other NASA researchers. The emphasis of those concepts is to increase capacity while at least maintaining the current safety level. The concept providers will initially develop their own scenarios and metrics for self-evaluation. In about a year, the SEA sub-element will become responsible for conducting initial evaluations of the concepts using a common scenario and metric set. This set may derive many components from the scenarios and metrics used by the concept providers. Ultimately, the common scenario/metric set will be used to help determine the most feasible and beneficial concepts. A set of 15 questions and issues, discussed below, pertaining to the scenario and metric set, and its use for assessing concepts, was submitted by the SEA sub-element for consideration during the breakout session. The questions were divided among the three breakout groups. Each breakout group deliberated on its set of questions and provided a report on its discussion.

  7. Palatini versus metric formulation in higher-curvature gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borunda, Monica; Janssen, Bert; Bastero-Gil, Mar, E-mail: mborunda@ugr.es, E-mail: bjanssen@ugr.es, E-mail: mbg@ugr.es

    2008-11-15

    We compare the metric and the Palatini formalism to obtain the Einstein equations in the presence of higher-order curvature corrections that consist of contractions of the Riemann tensor, but not of its derivatives. We find that there is a class of theories for which the two formalisms are equivalent. This class contains the Palatini version of Lovelock theory, but also more Lagrangians that are not Lovelock, but respect certain symmetries. For the general case, we find that imposing the Levi-Civita connection as an ansatz, the Palatini formalism is contained within the metric formalism, in the sense that any solution of the former also appears as a solution of the latter, but not necessarily the other way around. Finally we give the conditions the solutions of the metric equations should satisfy in order to solve the Palatini equations.

  8. Sediment transport-based metrics of wetland stability

    USGS Publications Warehouse

    Ganju, Neil K.; Kirwan, Matthew L.; Dickhudt, Patrick J.; Guntenspergen, Glenn R.; Cahoon, Donald R.; Kroeger, Kevin D.

    2015-01-01

    Despite the importance of sediment availability on wetland stability, vulnerability assessments seldom consider spatiotemporal variability of sediment transport. Models predict that the maximum rate of sea level rise a marsh can survive is proportional to suspended sediment concentration (SSC) and accretion. In contrast, we find that SSC and accretion are higher in an unstable marsh than in an adjacent stable marsh, suggesting that these metrics cannot describe wetland vulnerability. Therefore, we propose the flood/ebb SSC differential and organic-inorganic suspended sediment ratio as better vulnerability metrics. The unstable marsh favors sediment export (18 mg L−1 higher on ebb tides), while the stable marsh imports sediment (12 mg L−1 higher on flood tides). The organic-inorganic SSC ratio is 84% higher in the unstable marsh, and stable isotopes indicate a source consistent with marsh-derived material. These simple metrics scale with sediment fluxes, integrate spatiotemporal variability, and indicate sediment sources.

  9. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set also provides a general model of human eye-movement behavior, and thus of visual attention distribution in the cockpit, for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.

  10. Correlation between diffusion kurtosis and NODDI metrics in neonates and young children

    NASA Astrophysics Data System (ADS)

    Ahmed, Shaheen; Wang, Zhiyue J.; Chia, Jonathan M.; Rollins, Nancy K.

    2016-03-01

    Diffusion Tensor Imaging (DTI) uses a single-shell gradient encoding scheme for studying brain tissue diffusion. NODDI (Neurite Orientation Dispersion and Density Imaging) incorporates a gradient scheme with multiple b-values, which is used to characterize neurite density and the coherence of neuron fiber orientations. Similarly, diffusion kurtosis imaging (DKI) also uses a multi-shell scheme to quantify non-Gaussian diffusion but does not assume a tissue model like NODDI. In this study we investigate the connection between metrics derived by NODDI and DKI in children with ages from 46 weeks to 6 years. We correlate the NODDI metrics and kurtosis measures from the same ROIs in multiple brain regions. We compare the range of these metrics between neonates (46-47 weeks), infants (2-10 months) and young children (2-6 years). We find strong correlations between neurite density and mean kurtosis, and between orientation dispersion and kurtosis fractional anisotropy (FA), in pediatric brain imaging.

  11. Evaluation of BLAST-based edge-weighting metrics used for homology inference with the Markov Clustering algorithm.

    PubMed

    Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F

    2015-07-10

    Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25M+ edges connecting the 60k+ KOG sequences in half a minute using less than half a gigabyte of memory.
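
    A minimal sketch of NLE edge-weight derivation from tabular BLAST output (assuming the default -outfmt 6 column order; the cap applied to zero E-values is an arbitrary choice, not from the paper):

```python
# Sketch: deriving NLE (negative common log of the E-value) edge weights
# from tabular BLAST output for MCL clustering. Field positions assume
# -outfmt 6 defaults; the cap for zero E-values is an arbitrary choice.
import math

MAX_NLE = 200.0  # hypothetical cap for E-values reported as 0.0

def nle_edges(blast_tab_path):
    """Yield (query, subject, weight) triples in MCL 'abc' format."""
    with open(blast_tab_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject, evalue = fields[0], fields[1], float(fields[10])
            if query == subject:
                continue  # skip self-hits
            weight = MAX_NLE if evalue == 0.0 else min(-math.log10(evalue), MAX_NLE)
            yield query, subject, weight

# for q, s, w in nle_edges("all_vs_all.blast.tab"):
#     print(f"{q}\t{s}\t{w:.2f}")
```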

  12. Characterizing fire-related spatial patterns in fire-prone ecosystems using optical and microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Henry, Mary Catherine

    The use of active and passive remote sensing systems for relating forest spatial patterns to fire history was tested over one of the Arizona Sky Islands. Using Landsat Thematic Mapper (TM), Shuttle Imaging Radar (SIR-C), and data fusion I examined the relationship between landscape metrics and a range of fire history characteristics. Each data type (TM, SIR-C, and fused) was processed in the following manner: each band, channel, or derived feature was simplified to a thematic layer and landscape statistics were calculated for plots with known fire history. These landscape metrics were then correlated with fire history characteristics, including number of fire-free years in a given time period, mean fire-free interval, and time since fire. Results from all three case studies showed significant relationships between fire history and forest spatial patterns. Data fusion performed as well or better than Landsat TM alone, and better than SIR-C alone. These comparisons were based on number and strength of significant correlations each method achieved. The landscape metric that was most consistent and obtained the greatest number of significant correlations was Shannon's Diversity Index. Results also agreed with field-based research that has linked higher fire frequency to increased landscape diversity and patchiness. An additional finding was that the fused data seem to detect fire-related spatial patterns over a range of scales.
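
    Shannon's Diversity Index, the landscape metric singled out above as most consistent, can be computed directly from a thematic map; this sketch assumes a small integer-coded class raster.

```python
# Sketch: Shannon's Diversity Index for a thematic (classified) raster,
# H' = -sum(p_i * ln(p_i)) over the class proportions p_i.
import numpy as np

def shannon_diversity(class_map):
    """class_map: 2-D integer array of thematic classes."""
    _, counts = np.unique(class_map, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Example on a toy 4x4 plot with three classes:
plot = np.array([[1, 1, 2, 3],
                 [1, 2, 2, 3],
                 [1, 2, 3, 3],
                 [1, 1, 2, 3]])
print(f"H' = {shannon_diversity(plot):.3f}")
```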

  13. Unmanned aircraft system-derived crop height and normalized difference vegetation index metrics for sorghum yield and aphid stress assessment

    NASA Astrophysics Data System (ADS)

    Stanton, Carly; Starek, Michael J.; Elliott, Norman; Brewer, Michael; Maeda, Murilo M.; Chu, Tianxing

    2017-04-01

    A small, fixed-wing unmanned aircraft system (UAS) was used to survey a replicated small plot field experiment designed to estimate sorghum damage caused by an invasive aphid. Plant stress varied among 40 plots through manipulation of aphid densities. Equipped with a consumer-grade near-infrared camera, the UAS was flown on a recurring basis over the growing season. The raw imagery was processed using structure-from-motion to generate normalized difference vegetation index (NDVI) maps of the fields and three-dimensional point clouds. NDVI and plant height metrics were averaged on a per plot basis and evaluated for their ability to identify aphid-induced plant stress. Experimental soil signal filtering was performed on both metrics, and a method filtering low near-infrared values before NDVI calculation was found to be the most effective. UAS NDVI was compared with NDVI from sensors onboard a manned aircraft and a tractor. The correlation results showed dependence on the growth stage. Plot averages of NDVI and canopy height values were compared with per-plot yield at 14% moisture and aphid density. The UAS measures of plant height and NDVI were correlated to plot averages of yield and insect density. Negative correlations between aphid density and NDVI were seen near the end of the season in the most damaged crops.
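
    A minimal sketch of the soil-filtering idea described above, masking low near-infrared values before computing NDVI; the 0.2 reflectance threshold is an assumed illustration, not the study's value.

```python
# Sketch of soil filtering: mask pixels with low near-infrared values
# before computing NDVI, so bare soil does not dilute per-plot averages.
# The 0.2 reflectance threshold is a hypothetical cutoff.
import numpy as np

NIR_THRESHOLD = 0.2  # assumed cutoff for "low" NIR reflectance

def filtered_ndvi(nir, red):
    """nir, red: 2-D reflectance arrays; returns NDVI with soil masked."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    return np.where(nir >= NIR_THRESHOLD, ndvi, np.nan)

# Per-plot averaging would then ignore the masked pixels:
# plot_mean = np.nanmean(filtered_ndvi(nir_band, red_band)[plot_mask])
```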

  14. The LEAP™ Gesture Interface Device and Take-Home Laparoscopic Simulators: A Study of Construct and Concurrent Validity.

    PubMed

    Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A

    2016-02-01

    To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. Only time and instrument path distance demonstrated concurrent validity with an existing tracking method however. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.

  15. Koszul information geometry and Souriau Lie group thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbaresco, Frédéric, E-mail: frederic.barbaresco@thalesgroup.com

    The François Massieu 1869 idea to derive some mechanical and thermal properties of physical systems from 'Characteristic Functions' was developed by Gibbs and Duhem in thermodynamics with the concept of potentials, and introduced by Poincaré in probability. This paper deals with the generalization of this Characteristic Function concept by Jean-Louis Koszul in mathematics and by Jean-Marie Souriau in statistical physics. The Koszul-Vinberg Characteristic Function (KVCF) on convex cones will be presented as the cornerstone of 'Information Geometry' theory, defining Koszul Entropy as the Legendre transform of minus the logarithm of the KVCF, and Fisher Information Metrics as the Hessian of these dual functions, invariant by their automorphisms. In parallel, Souriau has extended the Characteristic Function in statistical physics, looking for other kinds of invariances through the co-adjoint action of a group on its momentum space, defining physical observables like energy, heat and momentum as pure geometrical objects. In the covariant Souriau model, Gibbs equilibrium states are indexed by a geometric parameter, the Geometric (Planck) Temperature, with values in the Lie algebra of the dynamical Galileo/Poincaré groups, interpreted as a space-time vector, giving the metric tensor a null Lie derivative. The Fisher Information metric appears as the opposite of the derivative of the mean 'moment map' by geometric temperature, equivalent to a Geometric Capacity or Specific Heat. These elements have been developed by the author in [10] and [11].

  16. Land surface phenology

    USGS Publications Warehouse

    Hanes, Jonathan M.; Liang, Liang; Morisette, Jeffrey T.

    2013-01-01

    Certain vegetation types (e.g., deciduous shrubs, deciduous trees, grasslands) have distinct life cycles marked by the growth and senescence of leaves and periods of enhanced photosynthetic activity. Where these types exist, recurring changes in foliage alter the reflectance of electromagnetic radiation from the land surface, which can be measured using remote sensors. The timing of these recurring changes in reflectance is called land surface phenology (LSP). During recent decades, a variety of methods have been used to derive LSP metrics from time series of reflectance measurements acquired by satellite-borne sensors. In contrast to conventional phenology observations, LSP metrics represent the timing of reflectance changes that are driven by the aggregate activity of vegetation within the areal unit measured by the satellite sensor and do not directly provide information about the phenology of individual plants, species, or their phenophases. Despite the generalized nature of satellite sensor-derived measurements, they have proven useful for studying changes in LSP associated with various phenomena. This chapter provides a detailed overview of the use of satellite remote sensing to monitor LSP. First, the theoretical basis for the application of satellite remote sensing to the study of vegetation phenology is presented. After establishing a theoretical foundation for LSP, methods of deriving and validating LSP metrics are discussed. This chapter concludes with a discussion of major research findings and current and future research directions.

  17. Signal-to-Noise Ratio in PVT Performance as a Cognitive Measure of the Effect of Sleep Deprivation on the Fidelity of Information Processing.

    PubMed

    Chavali, Venkata P; Riedy, Samantha M; Van Dongen, Hans P A

    2017-03-01

    There is a long-standing debate about the best way to characterize performance deficits on the psychomotor vigilance test (PVT), a widely used assay of cognitive impairment in human sleep deprivation studies. Here, we address this issue through the theoretical framework of the diffusion model and propose to express PVT performance in terms of signal-to-noise ratio (SNR). From the equations of the diffusion model for one-choice reaction-time tasks, we derived an expression for a novel SNR metric for PVT performance. We also showed that LSNR, a commonly used log-transformation of SNR, can be reasonably well approximated by a linear function of the mean response speed, LSNRapx. We computed SNR, LSNR, LSNRapx, and number of lapses for 1284 PVT sessions collected from 99 healthy young adults who participated in laboratory studies with 38 hr of total sleep deprivation. All four PVT metrics captured the effects of time awake and time of day on cognitive performance during sleep deprivation. The LSNR had the best psychometric properties, including high sensitivity, high stability, high degree of normality, absence of floor and ceiling effects, and no bias in the meaning of change scores related to absolute baseline performance. The theoretical motivation of SNR and LSNR permits quantitative interpretation of PVT performance as an assay of the fidelity of information processing in cognition. Furthermore, with a conceptual and statistical meaning grounded in information theory and generalizable across scientific fields, LSNR in particular is a useful tool for systems-integrated fatigue risk management. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
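
    A hedged sketch of the quantities involved: the conventional lapse count and mean response speed are computed from toy reaction times, while the linear coefficients of LSNRapx are placeholders, since the paper derives them from the diffusion model.

```python
# Sketch: basic PVT summary measures from a session's reaction times,
# plus the linear-approximation idea for LSNR described above. The
# coefficients A and B are hypothetical placeholders (the paper derives
# them from the diffusion model); only mean response speed and the
# standard lapse count are computed from data here.
import numpy as np

rt_ms = np.array([231, 255, 248, 610, 290, 312, 505, 270, 243, 260])  # toy RTs

lapses = int(np.sum(rt_ms >= 500))       # conventional lapse: RT >= 500 ms
mean_speed = np.mean(1000.0 / rt_ms)     # responses per second

A, B = 0.0, 1.0                          # hypothetical placeholder coefficients
lsnr_apx = A + B * mean_speed            # LSNRapx: linear in mean response speed

print(f"lapses={lapses}, mean speed={mean_speed:.2f} 1/s, LSNRapx={lsnr_apx:.2f}")
```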

  18. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that shall guide the modeller towards a most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision such as the predictive logscore or other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim at stimulating discussion about how to judge the quality of hydrological models in the presence of uncertainty in general by dissecting the mechanism behind BME.
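
    A minimal sketch of the simplest BME estimator, averaging the likelihood over prior samples; the toy model, prior, and data are assumptions for illustration only.

```python
# Sketch: brute-force Monte Carlo estimate of Bayesian model evidence,
# BME = p(D|M) = integral of p(D|theta) p(theta) d(theta), approximated
# by averaging the likelihood over prior samples. Model and data are
# toy stand-ins, not from the abstract.
import numpy as np

rng = np.random.default_rng(0)
data = np.array([1.1, 0.9, 1.3, 0.8, 1.0])  # hypothetical observations
sigma_obs = 0.2                              # assumed measurement error

def log_likelihood(theta):
    resid = data - theta  # toy model: a single constant mean parameter
    return (-0.5 * np.sum((resid / sigma_obs) ** 2)
            - data.size * np.log(sigma_obs * np.sqrt(2 * np.pi)))

prior_samples = rng.normal(0.0, 2.0, size=100_000)  # assumed N(0, 2) prior
log_L = np.array([log_likelihood(t) for t in prior_samples])
bme = np.exp(log_L).mean()  # simple prior-sampling estimator
print(f"BME ~ {bme:.3e}")
```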

  19. Accounting for the phase, spatial frequency and orientation demands of the task improves metrics based on the visual Strehl ratio.

    PubMed

    Young, Laura K; Love, Gordon D; Smithson, Hannah E

    2013-09-20

    Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification, and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or the cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
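
    A simplified sketch of a visual-Strehl-style calculation in the OTF domain: the Gaussian weighting below is only a crude stand-in for the neural contrast sensitivity function used in the published VSOTF, and the defocus level is an arbitrary example.

```python
# Simplified sketch: ratio of the CSF-weighted volume under the real
# part of the aberrated OTF to that of the diffraction-limited OTF.
# The Gaussian CSF stand-in and the defocus amount are assumptions.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)  # unit-radius circular pupil

freqs = np.fft.fftfreq(N)
FX, FY = np.meshgrid(freqs, freqs)
csf_weight = np.exp(-(FX**2 + FY**2) / 0.01)  # crude neural-CSF stand-in

def otf(wavefront_waves):
    """OTF of an eye with the given wavefront error map (in waves)."""
    p = pupil * np.exp(2j * np.pi * wavefront_waves)
    psf = np.abs(np.fft.fft2(p)) ** 2
    H = np.fft.fft2(psf)
    return H / H.flat[0]  # normalize so OTF(0, 0) = 1

def visual_strehl(wavefront_waves):
    num = np.sum(csf_weight * otf(wavefront_waves).real)
    den = np.sum(csf_weight * otf(np.zeros_like(pupil)).real)
    return num / den

defocus = 0.25 * np.sqrt(3) * (2 * R2 - 1) * pupil  # ~0.25 waves RMS defocus
print(f"visual Strehl ~ {visual_strehl(defocus):.3f}")
```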

  20. Bayesian performance metrics and small system integration in recent homeland security and defense applications

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas

    2010-04-01

    In this paper, Bayesian inference is applied to performance metrics definition of the important class of recent Homeland Security and defense systems called binary sensors, including both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metrics parameter of the binary sensors. Also, Small System Integration (SSI) is discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach, within the broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
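
    The PPV follows directly from Bayes' theorem; a minimal sketch with illustrative sensitivity, specificity, and prevalence values shows why rare threats yield low PPVs even for good sensors.

```python
# Sketch: the Bayesian PPV of a binary sensor via Bayes' theorem,
# PPV = P(threat | alarm)
#     = sens * prev / (sens * prev + (1 - spec) * (1 - prev)).
# The sensitivity, specificity, and prevalence values are illustrative.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a good sensor yields a low PPV when threats are rare:
print(f"PPV = {ppv(0.95, 0.95, 0.001):.4f}")  # ~0.019
```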

  1. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting the ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second-derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation, and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA showed improved accuracy in ground detection for vegetated plots and similar accuracy for developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, generated by subtracting a fitted Gaussian function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining the waveforms to which the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, taken as the centroid of the Gaussian function. By considering signal reflection from different vegetation layers, the developed metrics obtained better estimation accuracy for aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for processing large footprint waveform LiDAR data to detect the ground peak and estimate biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
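
    A hedged sketch of second-derivative-based ground peak detection in a single waveform, not the actual FICA implementation; the smoothing scales and noise-floor fraction are assumptions.

```python
# Sketch (not the actual FICA): find candidate peaks in a large-footprint
# waveform by smoothing at several scales and locating second-derivative
# minima (which occur at peaks), then take the last (lowest-elevation)
# candidate as the ground peak.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def ground_peak_index(waveform, scales=(2, 4, 8)):
    """waveform: 1-D intensity array, index 0 = first return (canopy top)."""
    candidates = set()
    for s in scales:
        smoothed = gaussian_filter1d(waveform.astype(float), sigma=s)
        d2 = np.gradient(np.gradient(smoothed))
        for idx in argrelmin(d2)[0]:  # concave-down points correspond to peaks
            if smoothed[idx] > 0.1 * smoothed.max():  # assumed noise floor
                candidates.add(int(idx))
    return max(candidates) if candidates else None  # last peak ~ ground
```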

  2. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm to Landsat-8 images, acquired from June to October 2013, to map the ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d−1. The METRIC algorithm was observed to perform better under full-canopy conditions than under partial-canopy conditions. On average, the METRIC algorithm overestimated hourly ET by 6.6 % in comparison to the EC measurements; however, daily ET was underestimated by 4.2 %.
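
    At the core of METRIC is the surface energy balance residual; a minimal sketch with illustrative flux values (not from the study) follows.

```python
# Sketch of the surface energy balance residual used by METRIC-style
# models: latent heat flux LE = Rn - G - H, converted to an
# instantaneous ET rate. The flux values are illustrative.
LAMBDA = 2.45e6     # latent heat of vaporization, J kg-1 (approximate)
RHO_WATER = 1000.0  # kg m-3

def instantaneous_et(rn, g, h):
    """rn, g, h in W m-2; returns ET in mm h-1."""
    le = rn - g - h                      # residual of the energy balance
    et_m_per_s = le / (LAMBDA * RHO_WATER)
    return et_m_per_s * 1000.0 * 3600.0  # m s-1 -> mm h-1

print(f"ET = {instantaneous_et(600.0, 80.0, 150.0):.2f} mm h-1")
```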

  3. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    NASA Astrophysics Data System (ADS)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.

  4. Propulsion Technology Lifecycle Operational Analysis

    NASA Technical Reports Server (NTRS)

    Robinson, John W.; Rhodes, Russell E.

    2010-01-01

    The paper presents the results of a focused effort performed by the members of the Space Propulsion Synergy Team (SPST) Functional Requirements Sub-team to develop propulsion data to support the Advanced Technology Lifecycle Analysis System (ATLAS). This is a spreadsheet application to analyze the impact of technology decisions at a system-of-systems level. Results are summarized in an Excel workbook we call the Technology Tool Box (TTB). The TTB provides data for technology performance, operations, and programmatic parameters in the form of a library of technical information to support analysis tools and/or models. The lifecycle of technologies can be analyzed from these data, which is particularly useful for system operations involving long-running missions. The propulsion technologies in this paper are listed against Chemical Rocket Engines in a Work Breakdown Structure (WBS) format. The overall effort involved establishing four elements: (1) a general-purpose Functional System Breakdown Structure (FSBS); (2) Operational Requirements for Rocket Engines; (3) Technology Metric Values associated with Operating Systems; and (4) a Work Breakdown Structure (WBS) of Chemical Rocket Engines. The list of Chemical Rocket Engines identified in the WBS is by no means complete. It is planned to update the TTB with a more complete list of available United States (US) chemical rocket engines and to add to the WBS the foreign rocket engines available to NASA and the aerospace industry. The Operational Technology Metric Values were derived by the SPST Sub-team in the form of the TTB and establish a database that helps users evaluate and establish the technology level of each Chemical Rocket Engine in the database. The Technology Metric Values will serve as a guide to help determine which rocket engine to invest technology money in for future development.

  5. Comparison of oral surgery task performance in a virtual reality surgical simulator and an animal model using objective measures.

    PubMed

    Ioannou, Ioanna; Kazmierczak, Edmund; Stern, Linda

    2015-01-01

    The use of virtual reality (VR) simulation for surgical training has gathered much interest in recent years. Despite increasing popularity and usage, limited work has been carried out in the use of automated objective measures to quantify the extent to which performance in a simulator resembles performance in the operating theatre, and the effects of simulator training on real world performance. To this end, we present a study exploring the effects of VR training on the performance of dentistry students learning a novel oral surgery task. We compare the performance of trainees in a VR simulator and in a physical setting involving ovine jaws, using a range of automated metrics derived by motion analysis. Our results suggest that simulator training improved the motion economy of trainees without adverse effects on task outcome. Comparison of surgical technique on the simulator with the ovine setting indicates that simulator technique is similar, but not identical to real world technique.

  6. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    PubMed

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinarian students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience, or to the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.

  7. Assessing Plant Senescence Reflectance Index retrieved vegetation phenology and its spatiotemporal response to climate change in the Inner Mongolian Grassland

    NASA Astrophysics Data System (ADS)

    Ren, S.; Chen, X.; An, S.

    2016-12-01

    Unlike green vegetation indices, the Plant Senescence Reflectance Index (PSRI) is sensitive to the carotenoid-to-chlorophyll ratio in plant leaves and shows a reversed bell curve during the growing season. Up to now, the performance of PSRI in monitoring vegetation phenology has remained unclear. Here, we used Moderate Resolution Imaging Spectroradiometer data from 2000 to 2011 to determine PSRI-derived start (SOS) and end (EOS) dates of the growing season in the Inner Mongolian Grassland, and validated the reliability of the PSRI-derived SOS and EOS dates using Normalized Difference Vegetation Index (NDVI) derived SOS and EOS dates. Then, we conducted temporal and spatial correlation analyses between SOS/EOS dates and climatic factors. Moreover, we revealed spatiotemporal patterns of PSRI-derived SOS and EOS dates across the entire research region at the pixel scale. Results show that PSRI performs similarly to NDVI in extracting SOS and EOS dates in the Inner Mongolian Grassland. The precipitation regime is the key climate driver of interannual variation in grassland phenology, while the temperature and precipitation regimes are the crucial controlling factors of the spatial differentiation of grassland phenology. Thus, PSRI-derived vegetation phenology can effectively reflect land surface vegetation dynamics and its response to climate change. Moreover, a significant linear trend in PSRI-derived SOS and EOS dates was detected only at small portions of pixels, which is consistent with the greenup and brownoff dates of herbaceous plant species in the Inner Mongolian Grassland. Overall, PSRI is a useful and robust metric in addition to NDVI for monitoring land surface grassland phenology.
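
    A minimal sketch of the PSRI calculation, assuming the commonly cited band definition PSRI = (R678 − R500)/R750; exact band centers vary by sensor, so treat the wavelengths as an assumption here.

```python
# Sketch: PSRI from reflectance bands, following the commonly cited
# definition PSRI = (R678 - R500) / R750. Band choices vary slightly
# across sensors; these wavelengths are an assumption.
import numpy as np

def psri(r678, r500, r750):
    """Reflectance arrays at ~678, ~500, and ~750 nm."""
    return (r678 - r500) / (r750 + 1e-9)

# Rising PSRI through the season indicates an increasing
# carotenoid-to-chlorophyll ratio, i.e., the onset of senescence.
```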

  8. Are we allowing impact factor to have too much impact: The need to reassess the process of academic advancement in pediatric cardiology?

    PubMed

    Loomba, Rohit S; Anderson, Robert H

    2018-03-01

    Impact factor has been used for several years as a metric by which to gauge scientific journals. Although meant to describe the performance of a journal overall, impact factor has also become a metric used to gauge individual performance. This has held true in the field of pediatric cardiology, where many divisions use the impact factor of the journals in which an individual has published to help determine that individual's academic achievement. This subsequently can affect the individual's promotion through the academic ranks. We review the purpose of impact factor and its strengths and weaknesses, discuss why impact factor is not a fair metric to apply to individuals, and offer alternative means by which to gauge individual performance for academic promotion. © 2018 Wiley Periodicals, Inc.

  9. Energy performance indicators of wastewater treatment: a field study with 17 Portuguese plants.

    PubMed

    Silva, Catarina; Rosa, Maria João

    2015-01-01

    The energy costs usually represent the second largest part of the running costs of a wastewater treatment plant (WWTP). It is therefore crucial to increase the energy efficiency of these infrastructures and to implement energy management systems, where quantitative performance metrics, such as performance indicators (PIs), play a key role. This paper presents energy PIs which cover the unit energy consumption, production, net use from external sources and costs, and the results used to validate them and derive their reference values. The results of a field study with 17 Portuguese WWTPs (5-year period) were consistent with the results obtained through an international literature survey on the two key parcels of the energy balance--consumption and production. The unit energy consumption showed an overall inverse relation with the volume treated, and the reference values reflect this relation for trickling filters and for activated sludge systems (conventional, with coagulation/filtration (C/F) and with nitrification and C/F). The reference values of electrical energy production were derived from the methane generation potential (converted to electrical energy) and literature data, whereas those of energy net use were obtained by the difference between the energy consumption and production.

  10. TU-AB-BRA-05: Repeatability of [F-18]-NaF PET Imaging Biomarkers for Bone Lesions: A Multicenter Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, C; Bradshaw, T; Perk, T

    2015-06-15

    Purpose: Quantifying the repeatability of imaging biomarkers is critical for assessing therapeutic response. While therapeutic efficacy has been traditionally quantified by SUV metrics, imaging texture features have shown potential for use as quantitative biomarkers. In this study we evaluated the repeatability of quantitative ¹⁸F-NaF PET-derived SUV metrics and texture features in bone lesions from patients in a multicenter study. Methods: Twenty-nine metastatic castrate-resistant prostate cancer patients received whole-body test-retest NaF PET/CT scans from one of three harmonized imaging centers. Bone lesions of volume greater than 1.5 cm³ were identified and automatically segmented using a SUV>15 threshold. From each lesion, 55 NaF PET-derived texture features (including first-order, co-occurrence, grey-level run-length, neighbor gray-level, and neighbor gray-tone difference matrix) were extracted. The test-retest repeatability of each SUV metric and texture feature was assessed with Bland-Altman analysis. Results: A total of 315 bone lesions were evaluated. Of the traditional SUV metrics, the repeatability coefficient (RC) was 12.6 SUV for SUVmax, 2.5 SUV for SUVmean, and 4.3 cm³ for volume. Their respective intralesion coefficients of variation (COVs) were 12%, 17%, and 6%. Of the texture features, COV was lowest for entropy (0.03%) and highest for kurtosis (105%). Lesion intraclass correlation coefficient (ICC) was lowest for maximum correlation coefficient (ICC=0.848), and highest for entropy (ICC=0.985). Across imaging centers, repeatability of texture features and SUV varied. For example, across imaging centers, COV for SUVmax ranged between 11–23%. Conclusion: Many NaF PET-derived SUV metrics and texture features for bone lesions demonstrated high repeatability, such as SUVmax, entropy, and volume. Several imaging texture features demonstrated poor repeatability, such as SUVtotal and SUVstd. These results can be used to establish response criteria for NaF PET-based treatment response assessment. Prostate Cancer Foundation (PCF)
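
    A minimal sketch of test-retest repeatability statistics in the Bland-Altman style; the measurement arrays are illustrative, and RC is taken here as 1.96 times the SD of the test-retest differences.

```python
# Sketch: Bland-Altman repeatability statistics for test-retest lesion
# measurements. RC = 1.96 * SD of the differences; the within-subject
# COV uses SD(diff)/sqrt(2). The arrays are illustrative, not study data.
import numpy as np

test = np.array([18.2, 25.4, 30.1, 22.7, 40.3])    # e.g., SUVmax, scan 1
retest = np.array([19.0, 24.1, 31.5, 21.9, 42.0])  # e.g., SUVmax, scan 2

diff = test - retest
rc = 1.96 * diff.std(ddof=1)                 # repeatability coefficient
wsd = diff.std(ddof=1) / np.sqrt(2)          # within-subject SD
cov = 100.0 * wsd / ((test + retest) / 2.0).mean()  # intralesion COV, %

print(f"RC = {rc:.2f}, COV = {cov:.1f}%")
```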

  11. Effect of NOAA satellite orbital drift on AVHRR-derived phenological metrics

    USGS Publications Warehouse

    Ji, Lei; Brown, Jesslyn

    2017-01-01

    The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center routinely produces and distributes a remote sensing phenology (RSP) dataset derived from the Advanced Very High Resolution Radiometer (AVHRR) 1-km data compiled from a series of National Oceanic and Atmospheric Administration (NOAA) satellites (NOAA-11, −14, −16, −17, −18, and −19). Each NOAA satellite experienced orbital drift during its duty period, which influenced the AVHRR reflectance measurements. To understand the effect of the orbital drift on the AVHRR-derived RSP dataset, we analyzed the impact of solar zenith angle (SZA) on the RSP metrics in the conterminous United States (CONUS). The AVHRR weekly composites were used to calculate the growing-season median SZA at the pixel level for each year from 1989 to 2014. The results showed that the SZA increased towards the end of each NOAA satellite mission, with the highest rates of increase occurring during the NOAA-11 (1989–1994) and NOAA-14 (1995–2000) missions. The growing-season median SZA values (44°–60°) in 1992, 1993, 1994, 1999, and 2000 were substantially higher than those in other years (28°–40°). The high SZA in those years caused negative trends in the SZA time series that were statistically significant (at the α = 0.05 level) in 76.9% of the CONUS area. A pixel-based temporal correlation analysis showed that the phenological metrics and SZA were significantly correlated (at the α = 0.05 level) in 4.1–20.4% of the CONUS area. After excluding the 5 years with high SZA (>40°) from the analysis, the temporal SZA trend was largely reduced, significantly affecting less than 2% of the study area. Additionally, significant correlation between the phenological metrics and SZA was observed in less than 7% of the study area. Our study concluded that the NOAA satellite orbital drift increased SZA and, in turn, influenced the phenological metrics. Elimination of the years with high median SZA reduced the influence of orbital drift on the RSP time series.

  12. Evaluating true BCI communication rate through mutual information and language models.

    PubMed

    Speier, William; Arnold, Corey; Pouratian, Nader

    2013-01-01

    Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
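
    A minimal sketch of the core quantity: the mutual information between intended and selected characters, computed from a joint distribution that encodes both a character prior and an error model. The 3-symbol alphabet and probabilities are toy values, not from the paper.

```python
# Sketch: I(X;Y) = H(X) - H(X|Y) computed directly from a joint
# distribution over (intended, selected) characters. A language-model
# prior and a systematic error model would together define this joint.
import numpy as np

# joint[i, j] = P(intended = i, selected = j); toy 3-symbol alphabet
joint = np.array([[0.40, 0.05, 0.05],
                  [0.03, 0.24, 0.03],
                  [0.02, 0.03, 0.15]])

def mutual_information(joint):
    px = joint.sum(axis=1, keepdims=True)  # marginal over intended
    py = joint.sum(axis=0, keepdims=True)  # marginal over selected
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

bits_per_selection = mutual_information(joint)
# A true communication rate would be bits_per_selection * selections/min.
print(f"I(X;Y) = {bits_per_selection:.3f} bits per selection")
```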

  13. Some Metric Properties of Planar Gaussian Free Field

    NASA Astrophysics Data System (ADS)

    Goswami, Subhajit

    In this thesis we study the properties of some metrics arising from the two-dimensional Gaussian free field (GFF), namely the Liouville first-passage percolation (Liouville FPP), the Liouville graph distance and an effective resistance metric. In Chapter 1, we define these metrics as well as discuss the motivations for studying them. Roughly speaking, Liouville FPP is the shortest-path metric in a planar domain D where the length of a path P is given by ∫_P e^{γh(z)} |dz|, where h is the GFF on D and γ > 0. In Chapter 2, we present an upper bound on the expected Liouville FPP distance between two typical points for small values of γ (the near-Euclidean regime). A similar upper bound is derived in Chapter 3 for the Liouville graph distance, which is, roughly, the minimal number of Euclidean balls with comparable Liouville quantum gravity (LQG) measure whose union contains a continuous path between two endpoints. Our bounds seem to be in disagreement with Watabiki's prediction (1993) on the random metric of Liouville quantum gravity in this regime. The contents of these two chapters are based on joint work with Jian Ding. In Chapter 4, we derive some asymptotic estimates for effective resistances on a random network which is defined as follows: given any γ > 0 and for η = {η_v}_{v∈Z²} denoting a sample of the two-dimensional discrete Gaussian free field on Z² pinned at the origin, we equip the edge (u, v) with conductance e^{γ(η_u + η_v)}. The metric structure of effective resistance plays a crucial role in our proof of the main result in Chapter 4. The primary motivation behind this metric is to understand the random walk on Z² where the edge (u, v) has weight e^{γ(η_u + η_v)}. Using the estimates from Chapter 4, we show in Chapter 5 that for almost every η this random walk is recurrent and that, with probability tending to 1 as T → ∞, the return probability at time 2T decays as T^{−1+o(1)}. In addition, we prove a version of subdiffusive behavior by showing that the expected exit time from a ball of radius N scales as N^{ψ(γ)+o(1)} with ψ(γ) > 2 for all γ > 0. The contents of these chapters are based on joint work with Marek Biskup and Jian Ding.

  14. Effect of thematic map misclassification on landscape multi-metric assessment.

    PubMed

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

    Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, leading to four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate the floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate the floodplain condition of sites with mixed land use.
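
    The error-propagation step lends itself to a short sketch. The following is a hypothetical illustration of the general approach (the confusion matrix, cover classes, and toy metric are invented for the example, not taken from the study): each site's true cover counts are pushed through the classifier's error model many times, the metric is recomputed per realization, and the spread of simulated scores is compared against the error-naive score.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical row-stochastic confusion matrix for three cover
        # classes (rows = true class, columns = mapped class).
        confusion = np.array([[0.90, 0.07, 0.03],
                              [0.10, 0.85, 0.05],
                              [0.05, 0.10, 0.85]])

        def mapped_fractions(true_counts, confusion, rng):
            # One realization of mapped cover fractions given the error model.
            mapped = np.zeros(confusion.shape[1])
            for k, n in enumerate(true_counts):
                mapped += rng.multinomial(n, confusion[k])
            return mapped / mapped.sum()

        def metric_score(fractions):
            # Toy metric: share of class 0 ("natural" cover), scaled to 0-10.
            return 10.0 * fractions[0]

        true_counts = [600, 300, 100]   # pixels per class at one site
        naive = metric_score(np.asarray(true_counts) / sum(true_counts))
        sims = [metric_score(mapped_fractions(true_counts, confusion, rng))
                for _ in range(5000)]
        print(f"naive={naive:.2f} simulated_mean={np.mean(sims):.2f}")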

  15. Non-Schwarzschild black-hole metric in four dimensional higher derivative gravity: Analytical approximation

    NASA Astrophysics Data System (ADS)

    Kokkotas, K. D.; Konoplya, R. A.; Zhidenko, A.

    2017-09-01

    Higher derivative extensions of Einstein gravity are important within the string theory approach to gravity and as alternative and effective theories of gravity. H. Lü, A. Perkins, C. Pope, and K. Stelle [Phys. Rev. Lett. 114, 171601 (2015), 10.1103/PhysRevLett.114.171601] found a numerical solution describing a spherically symmetric, non-Schwarzschild, asymptotically flat black hole in Einstein gravity with added higher derivative terms. Using a general and quickly convergent parametrization in terms of continued fractions, we represent this numerical solution in analytical form, which is accurate not only near the event horizon or far from the black hole, but over the whole space. The analytical form of the metric thereby allows one to easily study further properties of the black hole, such as thermodynamics, Hawking radiation, particle motion, accretion, perturbations, stability, quasinormal spectrum, etc. Thus the analytical approximation found here can serve in the same way as an exact solution.
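
    As a rough illustration of why truncated continued-fraction parametrizations settle down quickly, here is a generic evaluator for an expansion of the form a0 + a1 x / (1 + a2 x / (1 + ...)); the coefficients below are invented for the demonstration and are not those of the black-hole metric in the paper:

        def continued_fraction(x, a):
            # Evaluate a[0] + a[1]*x / (1 + a[2]*x / (1 + ...)) from the
            # innermost truncation level outward.
            acc = 0.0
            for coeff in reversed(a[1:]):
                acc = coeff * x / (1.0 + acc)
            return a[0] + acc

        coeffs = [1.0, -0.5, 0.3, -0.1, 0.05]   # illustrative only
        for order in range(2, len(coeffs) + 1):
            # Successive truncations typically converge after a few terms.
            print(order - 1, continued_fraction(0.7, coeffs[:order]))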

  16. Killing vector fields in three dimensions: a method to solve massive gravity field equations

    NASA Astrophysics Data System (ADS)

    Gürses, Metin

    2010-10-01

    Killing vector fields in three dimensions play an important role in the construction of the related spacetime geometry. In this work we show that when a three-dimensional geometry admits a Killing vector field, the Ricci tensor of the geometry is determined in terms of the Killing vector field and its scalars. In this way we can generate all products and covariant derivatives, at any order, of the Ricci tensor. Using this property we give ways to solve the field equations of topologically massive gravity (TMG) and of the recently introduced new massive gravity (NMG). In particular, when the scalars of the Killing vector field (in the timelike, spacelike and null cases) are constants, all three-dimensional symmetric tensors of the geometry - the Ricci and Einstein tensors, their covariant derivatives at all orders, and their products of all orders - are completely determined by the Killing vector field and the metric. Hence, the corresponding three-dimensional metrics are strong candidates for solving all higher derivative gravitational field equations in three dimensions.

  17. On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor...

  18. Application of Sigma Metrics Analysis for the Assessment and Modification of Quality Control Program in the Clinical Chemistry Laboratory of a Tertiary Care Hospital.

    PubMed

    Iqbal, Sahar; Mustansar, Tazeen

    2017-03-01

    Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy to improve quality by addressing errors after they are identified. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators, and the results were used to identify gaps and the need for modification in the laboratory's quality control strategy. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered to be 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric was found to be <3. The lowest sigma value was found for chloride (1.1) at L2; the highest was found for creatinine (10.1) at L3. HDL had the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, application of the sigma rules provided a practical solution for an improved and focused design of the QC procedure.
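
    The sigma calculation itself is a one-liner. A minimal sketch follows, using the standard definition sigma = (TEa - |bias|) / CV with all quantities in percent; the allowable total error and the example numbers are illustrative, not values from this study:

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # Allowable total error minus bias, expressed in units of
            # imprecision (CV); all inputs are percentages.
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Illustrative: TEa 10%, bias 2%, CV 2.5%  ->  sigma = 3.2
        print(sigma_metric(10.0, 2.0, 2.5))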

  19. Quantification of normative ranges and baseline predictors of aortoventricular interface dimensions using multi-detector computed tomographic imaging in patients without aortic valve disease.

    PubMed

    Gooley, Robert P; Cameron, James D; Soon, Jennifer; Loi, Duncan; Chitale, Gauri; Syeda, Rifath; Meredith, Ian T

    2015-09-01

    Multidetector computed tomographic (MDCT) assessment of the aortoventricular interface has gained increased importance with the advent of minimally invasive treatment modalities for aortic and mitral valve disease. This has included a standardised technique of identifying a plane through the nadir of each coronary cusp, the basal plane, and taking further measurements in relation to this plane. Despite this, there are no published data defining normal ranges for these aortoventricular metrics in a healthy cohort. This study seeks to quantify normative ranges for MDCT-derived aortoventricular dimensions and to evaluate baseline demographic and anthropometric associations of these measurements in a normal cohort. 250 consecutive patients undergoing MDCT coronary angiography were included. Aortoventricular dimensions at multiple levels of the aortoventricular interface were assessed and normative ranges quantified. Multivariate linear regression was performed to identify baseline predictors of each metric. The mean age was 59±12 years. The basal plane was eccentric (EI=0.22±0.06), while the left ventricular outflow tract was more eccentric (EI=0.32±0.06), with no correlation with gender, age or hypertension. Male gender, height and body mass index were consistent independent predictors of larger aortoventricular dimensions at all anatomical levels, while age was predictive of supra-annular measurements only. In summary, male gender, height and BMI are independent predictors of all aortoventricular dimensions, while age predicts only supra-annular dimensions. The use of defined metrics such as the basal plane, and the formation of normative ranges for these metrics, provides a reference for clinical reporting and for future research studies using a standardised measurement technique.

  20. Librarians and Scientists: Combining Forces for Better Metrics

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Winkelman, Sherry

    2015-08-01

    Traditionally, observatory bibliographies rely mainly on two parameters derived from the carefully compiled lists of publications associated, in a well-defined way, with the observatory's contribution to the advancement of science: numbers of articles and numbers of citations - in addition to the bibliographic metadata relating to those articles. The information that can be extracted from metrics based on these parameters is limited; this is a realization not unique to astronomy and astrophysics, but one felt across many disciplines. Relating articles to very specific datasets allows us to join those datasets' metadata with the bibliographic metadata, which opens a much richer field of information to mine for knowledge concerning the performance not only of the observatory as a whole, but also of its parts: instruments, types of observations, length of observations, etc. We have experimented extensively with such new metrics in the Chandra Data Archive at the Chandra X-ray Center at SAO. The linking of articles with individual datasets requires a level of scientific expertise that is not usually part of the otherwise extensive skill set of librarians, but it is crucial on the road to more informative bibliographic metrics. This talk is a plea for librarians and research scientists to join forces to make this happen. The added benefit of such a collaboration is a powerful research tool for navigating data and literature through a single interface. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center. It depends critically on the services provided by the ADS, which is funded by NASA Grant NNX12AG54G.

  1. Validation of a Quality Management Metric

    DTIC Science & Technology

    2000-09-01

    A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded a positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM and applying the QMM scores to provide feedback

  2. Characterization of Visual Function, Interocular Variability and Progression Using Static Perimetry-Derived Metrics in RPGR-Associated Retinopathy.

    PubMed

    Tee, James J L; Yang, Yesa; Kalitzeos, Angelos; Webster, Andrew; Bainbridge, James; Weleber, Richard G; Michaelides, Michel

    2018-05-01

    To characterize bilateral visual function, interocular variability and progression using static perimetry-derived volumetric and pointwise metrics in subjects with retinitis pigmentosa associated with mutations in the retinitis pigmentosa GTPase regulator (RPGR) gene. This was a prospective longitudinal observational study of 47 genetically confirmed subjects. Visual function was assessed with ETDRS and Pelli-Robson charts, and with Octopus 900 static perimetry using a customized, radially oriented 185-point grid. Three-dimensional hill-of-vision topographic models were produced and interrogated with the Visual Field Modeling and Analysis software to obtain three volumetric metrics: VTotal, V30, and V5. These were analyzed together with Octopus mean sensitivity values. Interocular differences were assessed with the Bland-Altman method. Metric-specific exponential decline rates were calculated. Baseline symmetry was demonstrated by relative interocular difference values of 1% for VTotal and 8% for V30. The degree of symmetry varied between subjects and was quantified with the subject percentage interocular difference (SPID), which was 16% for VTotal and 17% for V30. Interocular symmetry in progression was greatest when quantified by VTotal and V30, with 73% and 64% of subjects, respectively, having interocular rate differences smaller in magnitude than the corresponding annual progression rates. Functional decline was evident with increasing age; an overall annual exponential decline of 6% was found with both VTotal and V30. In general, good interocular symmetry exists; however, there was variation both between subjects and across metrics. Our findings will guide patient selection and the design of RPGR treatment trials, and provide clinicians with specific prognostic information to offer patients affected by this condition.

  3. Biomechanical CT Metrics Are Associated With Patient Outcomes in COPD

    PubMed Central

    Bodduluri, Sandeep; Bhatt, Surya P; Hoffman, Eric A.; Newell, John D.; Martinez, Carlos H.; Dransfield, Mark T.; Han, Meilan K.; Reinhardt, Joseph M.

    2017-01-01

    Background Traditional metrics of lung disease, such as those derived from spirometry and static single-volume CT images, are used to explain respiratory morbidity in patients with chronic obstructive pulmonary disease (COPD), but are insufficient. We hypothesized that the mean Jacobian determinant, a measure of local lung expansion and contraction with respiration, would contribute independently to clinically relevant functional outcomes. Methods We applied image registration techniques to paired inspiratory-expiratory CT scans and derived the Jacobian determinant of the deformation field between the two lung volumes to map local volume change with respiration. We analyzed 490 participants with COPD using multivariable regression models to assess the strength of association between traditional CT metrics of disease, the Jacobian determinant, and respiratory morbidity, including dyspnea (mMRC), St George's Respiratory Questionnaire (SGRQ) score, six-minute walk distance (6MWD), and the BODE index, as well as all-cause mortality. Results The Jacobian determinant was significantly associated with SGRQ (adjusted regression coefficient β = −11.75, 95% CI −21.6 to −1.7; p = 0.020) and with 6MWD (β = 321.15, 95% CI 134.1 to 508.1; p < 0.001), independent of age, sex, race, body mass index, FEV1, smoking pack-years, CT emphysema, CT gas trapping, airway wall thickness, and CT scanner protocol. The mean Jacobian determinant was also independently associated with the BODE index (β = −0.41, 95% CI −0.80 to −0.02; p = 0.039) and with mortality on follow-up (adjusted hazard ratio = 4.26, 95% CI 0.93 to 19.23; p = 0.064). Conclusion Biomechanical metrics representing local lung expansion and contraction improve prediction of respiratory morbidity and mortality and offer additional prognostic information beyond traditional measures of lung function and static single-volume CT metrics. PMID:28044005
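
    The biomechanical quantity at the heart of the study is straightforward to compute once a registration algorithm has produced a displacement field. A minimal sketch follows (not the authors' pipeline; the array layout and the toy example are assumptions): for a displacement field u between the two lung volumes, the local volume-change ratio is det(I + grad u), with values above 1 indicating expansion and below 1 contraction.

        import numpy as np

        def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
            # disp has shape (3, nz, ny, nx): one displacement component per axis.
            # grads[i][j] approximates d(u_i)/d(x_j) via central differences.
            grads = [np.gradient(disp[i], *spacing) for i in range(3)]
            J = np.empty(disp.shape[1:] + (3, 3))
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
            return np.linalg.det(J)

        # Toy check: a uniform 10% stretch along the last axis gives det ~ 1.1.
        disp = np.zeros((3, 8, 8, 8))
        disp[2] = 0.1 * np.arange(8.0)
        print(jacobian_determinant(disp).mean())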

  4. A Proposal for a Variation on the Axioms of Classical Geometry

    ERIC Educational Resources Information Center

    Schellenberg, Bjorn

    2010-01-01

    A set of axioms for classical absolute geometry is proposed that is accessible to students new to axioms. The metric axioms adopted are the ruler axiom, triangle inequality and the bisector axiom. Angle measure is derived from distance, and all properties needed to establish a consistent system are derived. In particular, the SAS congruence…

  5. Taub-NUT Spacetime in the (A)dS/CFT and M-Theory

    NASA Astrophysics Data System (ADS)

    Clarkson, Richard

    In the following thesis, I will conduct a thermodynamic analysis of the Taub-NUT spacetime in various dimensions, as well as show uses for Taub-NUT and other hyper-Kähler spacetimes. Thermodynamic analysis (by which I mean the calculation of the entropy and other thermodynamic quantities, and the analysis of these quantities) has in the past been done by use of background subtraction. The recent derivation of the (A)dS/CFT correspondences from string theory has allowed for easier and quicker analysis. I will use Taub-NUT space as a template to test these correspondences against the standard thermodynamic calculations (via the Noether method), with (in the Taub-NUT-dS case especially) some very interesting results. There is also interest in obtaining metrics in eleven dimensions that can be reduced down to ten-dimensional string theory metrics. Taub-NUT and other hyper-Kähler metrics already possess the form to easily facilitate Kaluza-Klein reduction, and embedding such metrics into eleven-dimensional metrics containing M2 or M5 branes produces metrics with interesting Dp-brane results.

  6. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
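
    The decomposition pipeline the authors describe can be approximated in a few lines. The sketch below is a loose illustration, not the paper's implementation: it assumes the third-party PyEMD package (distributed as EMD-signal) for empirical mode decomposition, extracts a low-passed Hilbert amplitude envelope, and reports how much envelope variance each intrinsic mode function carries, with slower (supra-syllabic) modes appearing later in the list.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from PyEMD import EMD   # third-party "EMD-signal" package (an assumption)

        def amplitude_envelope(signal, fs, cutoff_hz=10.0):
            # Hilbert magnitude, low-passed to keep syllabic-rate fluctuations.
            env = np.abs(hilbert(signal))
            b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
            return filtfilt(b, a, env)

        def mode_variance_profile(signal, fs):
            # Decompose the envelope into intrinsic mode functions (IMFs) and
            # return each mode's share of the total variance; later entries
            # correspond to slower, supra-syllabic time scales.
            env = amplitude_envelope(signal, fs)
            imfs = EMD().emd(env)
            var = imfs.var(axis=1)
            return var / var.sum()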

  7. Multiscale entropy-based methods for heart rate variability complexity analysis

    NASA Astrophysics Data System (ADS)

    Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio

    2015-03-01

    Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the area under the curves was computed for three physiological situations. Heart rate variability (HRV) time series from normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wider context.
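
    The scaffolding shared by all such multiscale metrics is a coarse-graining step plus a scalar entropy applied per scale. A minimal sketch under stated assumptions follows (standard non-overlapping coarse-graining and a histogram-based Tsallis entropy; the surrogate-data machinery behind SDiffqmax is omitted):

        import numpy as np

        def coarse_grain(x, tau):
            # Non-overlapping averages of length tau (the scale-tau series).
            x = np.asarray(x, float)
            n = len(x) // tau
            return x[:n * tau].reshape(n, tau).mean(axis=1)

        def tsallis_entropy(x, q, bins=32):
            # Histogram estimate of the nonadditive entropy
            # S_q = (1 - sum_i p_i^q) / (q - 1), with S_1 the Shannon limit.
            counts, _ = np.histogram(x, bins=bins)
            p = counts[counts > 0] / counts.sum()
            if q == 1.0:
                return float(-(p * np.log(p)).sum())
            return float((1.0 - (p ** q).sum()) / (q - 1.0))

        def multiscale_profile(x, q, max_tau=20):
            # Entropy as a function of time scale, as in multiscale analyses.
            return [tsallis_entropy(coarse_grain(x, tau), q)
                    for tau in range(1, max_tau + 1)]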

  8. New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.

    PubMed

    D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C

    2006-05-01

    We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics that we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest-neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen-bonded nucleotide base pairs that can be present in any secondary structure of a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some codes using these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
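
    For readers unfamiliar with the baseline the paper generalizes, the plain insertion-deletion distance (Levenshtein distance without substitutions) has a classic dynamic-programming solution via the longest common subsequence; a minimal sketch follows (the t-gap variants additionally restrict which blocks of symbols may be matched, which is not implemented here):

        def indel_distance(s, t):
            # Insertion-deletion distance: |s| + |t| - 2 * LCS(s, t),
            # where LCS is the longest common subsequence length.
            m, n = len(s), len(t)
            lcs = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if s[i - 1] == t[j - 1]:
                        lcs[i][j] = lcs[i - 1][j - 1] + 1
                    else:
                        lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
            return m + n - 2 * lcs[m][n]

        print(indel_distance("ACGTACGT", "ACGACGT"))  # 1: one deletion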

  9. MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT

    2007-01-30

    The evaluation of scholarly communication items is now largely a matter of expert opinion or of metrics derived from citation data. Both approaches can fail to take into account the myriad factors that shape scholarly impact. Usage data have emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

  10. PREDICTION METRICS FOR CHEMICAL DETECTION IN LONG-WAVE INFRARED HYPERSPECTRAL IMAGERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chilton, M.; Walsh, S.J.; Daly, D.S.

    2009-01-01

    Natural and man-made chemical processes generate gaseous plumes that may be detected by hyperspectral imaging, which produces a matrix of spectra affected by the chemical constituents of the plume, the atmosphere, the bounding background surface and instrument noise. A physics-based model of observed radiance shows that high chemical absorbance and low background emissivity result in a larger chemical signature. Using simulated hyperspectral imagery, this study investigated two metrics that exploit this relationship. The objective was to explore how well the chosen metrics predicted when a chemical would be more easily detected against one background type than another. The two predictor metrics correctly rank-ordered the backgrounds for about 94% of the chemicals tested, as compared to the background rank orders obtained from Whitened Matched Filtering (a detection algorithm) applied to the simulated spectra. These results suggest that the metrics provide a reasonable summary of how background emissivity and chemical absorbance interact to produce the at-sensor chemical signal. This study suggests that similarly effective predictors that account for more general physical conditions may be derived.
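
    The reference detection algorithm, the whitened matched filter, is compact enough to sketch. A common formulation is shown below (an illustration with synthetic data, not the study's implementation): each pixel spectrum is scored by projecting its background-subtracted value onto the background-whitened target signature.

        import numpy as np

        def whitened_matched_filter(X, s, mu, cov):
            # Score = s^T C^-1 (x - mu) / sqrt(s^T C^-1 s) for each row x of X;
            # larger scores indicate a better match to the target signature s.
            d = np.linalg.solve(cov, s)
            return (X - mu) @ d / np.sqrt(s @ d)

        rng = np.random.default_rng(0)
        bands, n = 64, 2000
        background = rng.normal(size=(n, bands))
        mu = background.mean(axis=0)
        cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(bands)
        s = rng.normal(size=bands)               # synthetic target signature
        scores_bg = whitened_matched_filter(background, s, mu, cov)
        scores_tgt = whitened_matched_filter(background + 0.2 * s, s, mu, cov)
        print(scores_bg.mean(), scores_tgt.mean())  # target pixels score higher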

  11. Holographic Spherically Symmetric Metrics

    NASA Astrophysics Data System (ADS)

    Petri, Michael

    The holographic principle (HP) conjectures that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes and has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well-defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as a simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS, the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with recent GRB data.

  12. The future of simulation technologies for complex cardiovascular procedures.

    PubMed

    Cates, Christopher U; Gallagher, Anthony G

    2012-09-01

    Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should characterize the important performance characteristics of procedural skill with metrics derived and defined from, and then benchmarked to, experienced operators (i.e. a level of proficiency). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than traditionally trained learners. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally, with cardiovascular society and industry partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology. If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, it must take a proactive position in the development of metric-based simulation curricula, adopt proficiency benchmarking definitions, and commit the resources needed to continue to lead this revolution in physician training.

  13. Solar power satellite system definition study. Volume 5: Space transportation analysis, phase 3

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A small Heavy Lift Launch Vehicle (HLLV) for the Solar Power Satellites (SPS) System was analyzed. It is recommended that the small HLLV with a payload of 120 metric tons be adopted as the SPS launch vehicle. The reference HLLV, a shuttle-derived option with a payload of 400 metric tons, should serve as a backup and be examined further after initial flight experience. The electric orbit transfer vehicle should be retained as the reference orbit-to-orbit cargo system.

  14. Vegetation Phenology Metrics Derived from Temporally Smoothed and Gap-filled MODIS Data

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Morisette, Jeff; Wolfe, Robert; Esaias, Wayne; Gao, Feng; Ederer, Greg; Nightingale, Joanne; Nickeson, Jamie E.; Ma, Pete; Pedely, Jeff

    2012-01-01

    Temporally smoothed and gap-filled vegetation index (VI) data provide a good basis for estimating vegetation phenology metrics. The TIMESAT software was improved by incorporating ancillary information from MODIS products. A simple assessment of the association between retrieved green-up dates and ground observations indicates satisfactory results from the improved TIMESAT software. An application example shows that mapping nectar flow phenology at a continental scale is tractable using hive weight and satellite vegetation data. The phenology data product supports further research in ecology and climate change.

  15. Mobile Phone-Based Measures of Activity, Step Count, and Gait Speed: Results From a Study of Older Ambulatory Adults in a Naturalistic Setting

    PubMed Central

    Aung, Thawda; Whittington, Jackie; High, Robin R; Goulding, Evan H; Schenk, A Katrin

    2017-01-01

    Background Cellular mobile telephone technology shows much promise for delivering and evaluating healthcare interventions in a cost-effective manner with minimal barriers to access. There are few data demonstrating that these devices can accurately measure clinically important aspects of individual functional status in naturalistic environments outside of the laboratory. Objective The objective of this study was to demonstrate that data derived from ubiquitous mobile phone technology, using algorithms developed and previously validated by our lab in a controlled setting, can be employed to continuously and noninvasively measure aspects of participant (subject) health status, including step counts, gait speed, and activity level, in a naturalistic community setting. A second objective was to compare our mobile phone-based data against current standard survey-based gait instruments and clinical physical performance measures in order to determine whether they measured similar or independent constructs. Methods A total of 43 ambulatory, independently dwelling older adults were recruited from Nebraska Medicine, including 25 (58%, 25/43) healthy control individuals from our Engage Wellness Center and 18 (42%, 18/43) functionally impaired, cognitively intact individuals (who met at least 3 of 5 criteria for frailty) from our ambulatory Geriatrics Clinic. The following previously validated surveys were obtained on study day 1: (1) Late Life Function and Disability Instrument (LLFDI); (2) Survey of Activities and Fear of Falling in the Elderly (SAFFE); (3) Patient Reported Outcomes Measurement Information System (PROMIS), short form version 1.0 Physical Function 10a (PROMIS-PF); and (4) PROMIS Global Health, short form version 1.1 (PROMIS-GH). In addition, clinical physical performance measurements of frailty (10-foot Get Up and Go, 4-meter walk, and Figure-of-8 Walk [F8W]) were obtained. These metrics were compared to our mobile phone-based metrics collected from the participants in the community over a 24-hour period occurring within 1 week of the initial assessment. Results We identified statistically significant differences between functionally intact and frail participants in mobile phone-derived measures of percent activity (P=.002, t test), active versus inactive status (P=.02, t test), average step counts (P<.001, repeated measures analysis of variance [ANOVA]) and gait speed (P<.001, t test). In functionally intact individuals, the above mobile phone metrics assessed aspects of functional status independent of both survey- and performance battery-based functional measures (Bland-Altman and correlation analysis). In contrast, in frail individuals, the above mobile phone metrics correlated with submeasures of both SAFFE and PROMIS-GH. Conclusions Continuous mobile phone-based measures of participant community activity and mobility strongly differentiate between persons with intact functional status and persons with a frailty phenotype. These measures assess dimensions of functional status independent of those measured using current validated questionnaires and physical performance assessments to identify functional compromise. Mobile phone-based gait measures may provide a more readily accessible and less time-consuming measure of gait, while providing clinicians with longitudinal gait measures that are currently difficult to obtain. PMID:28974482
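
    Step counting from phone accelerometry of the kind described above typically reduces to band-pass filtering and peak detection. The sketch below is a generic illustration, not the validated algorithm from this study; the cutoff frequencies and thresholds are assumptions chosen for typical walking cadences:

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def count_steps(accel_xyz, fs):
            # accel_xyz: (n_samples, 3) accelerometer trace in g; fs in Hz.
            # Band-pass the magnitude around walking cadences (0.5-3 Hz),
            # then count peaks spaced at least 0.3 s apart.
            mag = np.linalg.norm(accel_xyz, axis=1)
            b, a = butter(2, [0.5 / (fs / 2), 3.0 / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, mag - mag.mean())
            peaks, _ = find_peaks(filtered, height=0.05, distance=int(0.3 * fs))
            return len(peaks)

        # Synthetic check: 60 s of a 1.8 Hz "walking" oscillation ~ 108 steps.
        fs = 50.0
        t = np.arange(0, 60, 1 / fs)
        mag = 1.0 + 0.2 * np.sin(2 * np.pi * 1.8 * t)
        accel = np.column_stack([mag, np.zeros_like(t), np.zeros_like(t)])
        print(count_steps(accel, fs))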

  16. Advanced Navigation Strategies For Asteroid Sample Return Missions

    NASA Technical Reports Server (NTRS)

    Getzandanner, K.; Bauman, J.; Williams, B.; Carpenter, J.

    2010-01-01

    Flyby and rendezvous missions to asteroids have been accomplished using navigation techniques derived from experience gained in planetary exploration. This paper presents an analysis of the advanced navigation techniques required to meet the unique challenges of precision navigation for acquiring a sample from an asteroid and returning it to Earth. These techniques rely on tracking data types such as spacecraft-based laser ranging and optical landmark tracking in addition to traditional Earth-based Deep Space Network radio metric tracking. A systematic study of navigation strategy, including the navigation event timeline and the reduction in spacecraft-asteroid relative errors, has been performed using simulation and covariance analysis on a representative mission.

  17. Development of ecological indicator guilds for land management

    USGS Publications Warehouse

    Krzysik, A.J.; Balbach, H.E.; Duda, J.J.; Emlen, J.M.; Freeman, D.C.; Graham, J.H.; Kovacic, D.A.; Smith, L.M.; Zak, J.C.

    2005-01-01

    Agency land-use must be efficiently and cost-effectively monitored to assess conditions and trends in ecosystem processes and natural resources relevant to mission requirements and legal mandates. Ecological Indicators represent important land management tools for tracking ecological changes and preventing irreversible environmental damage in disturbed landscapes. The overall objective of the research was to develop both individual and integrated sets (i.e., statistically derived guilds) of Ecological Indicators to: quantify habitat conditions and trends, track and monitor ecological changes, provide early warning or threshold detection, and provide guidance for land managers. The derivation of Ecological Indicators was based on statistical criteria, ecosystem relevance, reliability and robustness, economy and ease of use for land managers, multi-scale performance, and stress response criteria. The basis for the development of statistically based Ecological Indicators was the identification of ecosystem metrics that analytically tracked a landscape disturbance gradient.

  18. Toward a comprehensive hybrid physical-virtual reality simulator of peripheral anesthesia with ultrasound and neurostimulator guidance.

    PubMed

    Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A

    2011-01-01

    We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.

  19. Acoustic Radiation Force Elasticity Imaging in Diagnostic Ultrasound

    PubMed Central

    Doherty, Joshua R.; Trahey, Gregg E.; Nightingale, Kathryn R.; Palmeri, Mark L.

    2013-01-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological condition; with the ability to image these features in vivo, elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed non-invasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods. PMID:23549529
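
    The radiation force derivation referenced above leads, for a plane wave in an absorbing medium, to the familiar body-force magnitude |F| = 2*alpha*I/c. A minimal sketch with illustrative tissue-like numbers follows (the attenuation value, frequency, and intensity below are assumptions for the example, not figures from the review):

        def radiation_force_density(alpha_np_per_m, intensity_w_per_m2,
                                    c_m_per_s=1540.0):
            # |F| = 2 * alpha * I / c, in N/m^3, for a plane wave in an
            # absorbing medium; c defaults to a soft-tissue sound speed.
            return 2.0 * alpha_np_per_m * intensity_w_per_m2 / c_m_per_s

        # Example: 0.5 dB/cm/MHz absorption at 5 MHz, 1 kW/m^2 intensity.
        alpha = 0.5 * 5.0 / 8.686 * 100.0      # dB/cm -> Np/m
        print(radiation_force_density(alpha, 1000.0))  # ~37 N/m^3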

