Sample records for algorithm performance metrics

  1. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  2. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale

    PubMed Central

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Overview: Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics: We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms: Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786

  3. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
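
    The abstract does not spell out the metrics themselves; the companion record below (entry 7) names prognostic horizon and alpha-lambda performance. As a rough illustration of the alpha-lambda idea, the following Python sketch checks whether a predicted remaining useful life stays within an alpha band around the true RUL at each prediction time; the 20% band, function name, and exact form are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def alpha_lambda_accuracy(t, rul_pred, t_eol, alpha=0.2):
    """At each prediction time t, check whether the predicted RUL falls
    inside an alpha band around the true RUL (true RUL = t_eol - t).
    Illustrative sketch only; the paper's exact definition may differ."""
    t = np.asarray(t, dtype=float)
    rul_pred = np.asarray(rul_pred, dtype=float)
    rul_true = t_eol - t                      # ground-truth remaining useful life
    lower = (1.0 - alpha) * rul_true          # +/- 20% band by default
    upper = (1.0 + alpha) * rul_true
    return (rul_pred >= lower) & (rul_pred <= upper)

# Toy example: predictions made at four times for a unit that fails at t = 100
hits = alpha_lambda_accuracy(t=[40, 60, 80, 90],
                             rul_pred=[80, 35, 22, 16],
                             t_eol=100.0)
print(hits)   # [False  True  True False]
```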

  4. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. There are many evaluation metrics that have been used in the literature such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
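
    As a concrete illustration of an evaluation scheme of the kind reviewed here, the sketch below marks a proposed detection as true when it overlaps an unmatched ground-truth region by at least a chosen intersection-over-union threshold, then computes precision and recall. The greedy matching rule, box format, and 0.5 threshold are assumptions for illustration, not the schemes evaluated in the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(detections, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching: a detection counts as a true positive if
    it overlaps an as-yet-unmatched ground-truth box by at least iou_thresh."""
    matched, tp = set(), 0
    for det in detections:
        best_j, best_iou = None, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            o = iou(det, gt)
            if o > best_iou:
                best_j, best_iou = j, o
        if best_j is not None and best_iou >= iou_thresh:
            matched.add(best_j)
            tp += 1
    fp, fn = len(detections) - tp, len(ground_truth) - tp
    precision = tp / float(tp + fp) if detections else 0.0
    recall = tp / float(tp + fn) if ground_truth else 0.0
    return precision, recall

# One frame with two labeled anomalies and three proposed detections
gt = [(10, 10, 50, 50), (80, 80, 120, 120)]
dets = [(12, 12, 48, 52), (200, 200, 240, 240), (81, 79, 118, 119)]
print(precision_recall(dets, gt))   # (0.666..., 1.0)
```

    Sweeping a detection-score threshold before this matching step is what produces the precision-recall and ROC curves mentioned in the abstract.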

  5. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and the standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.

  6. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    PubMed

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms-Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
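
    The contrast between stand-alone and information recovery metrics can be reproduced in a few lines with networkx and scikit-learn. The sketch below uses the small karate-club graph and a greedy modularity clustering purely as stand-ins for the paper's large graphs and four algorithms, and summarizes conductance as the minimum over communities (one of several conventions).

```python
import networkx as nx
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

G = nx.karate_club_graph()                       # small benchmark graph
truth = [G.nodes[n]["club"] for n in G.nodes]    # known two-faction split

# Example clustering from one algorithm (greedy modularity maximization)
communities = nx.algorithms.community.greedy_modularity_communities(G)
labels = {n: i for i, c in enumerate(communities) for n in c}
pred = [labels[n] for n in G.nodes]

# Stand-alone quality metrics (no ground truth needed)
modularity = nx.algorithms.community.modularity(G, communities)
conductance = min(nx.conductance(G, c) for c in communities)

# Information recovery metrics (compare against the known membership)
ari = adjusted_rand_score(truth, pred)
nmi = normalized_mutual_info_score(truth, pred)
print(f"modularity={modularity:.3f} conductance={conductance:.3f} "
      f"ARI={ari:.3f} NMI={nmi:.3f}")
```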

  7. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.

  8. Performance of the METRIC model in estimating evapotranspiration fluxes over an irrigated field in Saudi Arabia using Landsat-8 images

    NASA Astrophysics Data System (ADS)

    Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.

    2017-12-01

    Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d⁻¹. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.

  9. Assessment of various supervised learning algorithms using different performance metrics

    NASA Astrophysics Data System (ADS)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration in this work are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as Accuracy, F-Measure, G-Measure, Precision, Misclassification Rate, False Positive Rate, True Positive Rate, Specificity, and Prevalence.
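
    All nine metrics listed above can be derived from a single 2x2 confusion matrix. The sketch below uses the standard textbook definitions; the paper may use slightly different variants (for example, G-measure is taken here as the geometric mean of precision and recall).

```python
def binary_classification_metrics(tp, fp, tn, fn):
    """Metrics named in the abstract, computed from one confusion matrix."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    tpr = tp / (tp + fn)                      # true positive rate / recall
    fpr = fp / (fp + tn)                      # false positive rate
    specificity = tn / (fp + tn)
    prevalence = (tp + fn) / total
    return {
        "Accuracy": accuracy,
        "Misclassification Rate": 1.0 - accuracy,
        "Precision": precision,
        "True Positive Rate": tpr,
        "False Positive Rate": fpr,
        "Specificity": specificity,
        "Prevalence": prevalence,
        "F-Measure": 2 * precision * tpr / (precision + tpr),
        "G-Measure": (precision * tpr) ** 0.5,   # geometric mean of P and R
    }

# e.g. a classifier with 40 TP, 10 FP, 45 TN and 5 FN on a held-out test set
print(binary_classification_metrics(tp=40, fp=10, tn=45, fn=5))
```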

  10. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  11. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grows at an exponential rate, video and imagery data is becoming increasingly important. Various techniques such as Wide Area Motion imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as content-based image retrieval (CBIR). Imagery data is segmented and automatically analyzed and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms based on different approaches have been proposed, but only a few metrics are used to evaluate the performance of these algorithms. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground regions statistics, as well as split and merge of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
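
    Two of the named fusion-quality metrics, entropy and mutual information, are commonly estimated from grey-level histograms; the sketch below shows one such estimator (256 bins over an assumed 8-bit intensity range), not the exact implementation used in the paper.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(img_a, img_b, bins=256):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy 8-bit images: a lightly corrupted copy shares far more information
# with the reference than an unrelated random image does
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 10, size=ref.shape), 0, 255)
print(entropy(ref),
      mutual_information(ref, noisy),
      mutual_information(ref, rng.integers(0, 256, size=ref.shape)))
```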

  12. Voice based gender classification using machine learning

    NASA Astrophysics Data System (ADS)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification is one of the major problems in speech analysis today. It involves tracing the gender from acoustic data, i.e., pitch, median, frequency, etc. Machine learning gives promising results for classification problems in all research domains. There are several performance metrics to evaluate the algorithms of an area. We propose a comparative model for evaluating five different machine learning algorithms based on eight different metrics for gender classification from acoustic data. The agenda is to identify gender with five different algorithms: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM) on the basis of eight different metrics. The main parameter in evaluating any algorithm is its performance. The misclassification rate must be low in classification problems, which means that the accuracy rate must be high. Location and gender of the person have become very crucial in economic markets in the form of AdSense. Here, with this comparative model, we assess the different ML algorithms and find the best fit for gender classification of acoustic data.

  13. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
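
    Two of the performance metrics listed, the centered root mean square error and the error in linear trend estimates, can be sketched as follows for a single station series; this is a plain reading of the metric names, not the HOME benchmark's exact code.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: root-mean-square error after removing each series' mean."""
    h = np.asarray(homogenized, float)
    t = np.asarray(truth, float)
    return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))

def trend_error(homogenized, truth, time=None):
    """Difference between the fitted linear trend of the homogenized series
    and that of the true homogeneous series (per time step by default)."""
    h = np.asarray(homogenized, float)
    t = np.asarray(truth, float)
    x = np.arange(len(h)) if time is None else np.asarray(time, float)
    slope_h = np.polyfit(x, h, 1)[0]
    slope_t = np.polyfit(x, t, 1)[0]
    return float(slope_h - slope_t)

# Synthetic monthly series: a warming trend plus noise, with an imperfect fix
rng = np.random.default_rng(1)
truth = 0.002 * np.arange(600) + rng.normal(0, 0.5, 600)
homog = truth + np.where(np.arange(600) > 300, 0.1, 0.0)   # residual break
print(centered_rmse(homog, truth), trend_error(homog, truth))
```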

  14. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  15. Performance metrics for the assessment of satellite data products: an ocean color case study

    PubMed Central

    Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy

    2018-01-01

    Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
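
    A minimal sketch of the simple-deviation metrics the study advocates, bias and mean absolute error, computed here on log10-transformed chlorophyll-a and back-transformed to multiplicative factors. The log transformation is an assumption motivated by the roughly log-normal distribution of chlorophyll; the paper's exact formulation may differ.

```python
import numpy as np

def log10_bias_and_mae(model, observed):
    """Multiplicative bias and mean absolute error in log10 space.
    A value of 1.0 means unbiased / perfect agreement."""
    err = np.log10(np.asarray(model, float)) - np.log10(np.asarray(observed, float))
    bias = 10 ** np.mean(err)          # multiplicative bias
    mae = 10 ** np.mean(np.abs(err))   # multiplicative error factor
    return bias, mae

# Example: satellite estimates vs. in situ chlorophyll-a (mg m^-3)
print(log10_bias_and_mae(model=[0.12, 0.8, 2.5], observed=[0.10, 1.0, 2.0]))
```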

  16. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  17. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
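
    The core calibration step, using per-probe truth from a higher-resolution array to pick a log2-ratio threshold via ROC analysis, can be sketched with scikit-learn as below. The data here are synthetic and the Youden's J rule for selecting the threshold is an illustrative choice, not necessarily CNV-ROC's.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Per-probe truth (1 = probe lies in a CNV confirmed on the higher-resolution
# array, 0 = copy-neutral) and the lower-resolution array's log2 ratios.
# Values are synthetic; CNV-ROC derives them from real paired microarrays.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=1000)
log2_ratio = np.where(truth == 1,
                      rng.normal(0.6, 0.25, size=1000),
                      rng.normal(0.0, 0.25, size=1000))

score = np.abs(log2_ratio)
fpr, tpr, thresholds = roc_curve(truth, score)
print("AUC =", roc_auc_score(truth, score))

# Pick the |log2 ratio| threshold that maximizes Youden's J = TPR - FPR
best = np.argmax(tpr - fpr)
print("suggested |log2 ratio| threshold:", thresholds[best])
```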

  18. Ensemble Clustering Classification compete SVM and One-Class classifiers applied on plant microRNAs Data.

    PubMed

    Yousef, Malik; Khalifa, Waleed; AbedAllah, Loai

    2016-12-22

    The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points in respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.

  19. Ensemble Clustering Classification Applied to Competing SVM and One-Class Classifiers Exemplified by Plant MicroRNAs Data.

    PubMed

    Yousef, Malik; Khalifa, Waleed; AbdAllah, Loai

    2016-12-01

    The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points in respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.
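
    A rough sketch of the idea described in both records above: build a co-association distance from how often points co-cluster across an ensemble of clusterings, then classify with kNN under that distance. The k-means ensemble, the number of runs, and the "1 minus co-clustering frequency" distance are illustrative assumptions; the authors' construction may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=3, random_state=0)

def co_association_distance(X, n_runs=20, k_range=(2, 8), seed=0):
    """Distance = 1 - fraction of ensemble runs in which two points co-cluster."""
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))
    for r in range(n_runs):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=10, random_state=r).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
    return 1.0 - co / n_runs

D = co_association_distance(X)

def ec_knn_predict(D, train_idx, y_train, test_idx, k=5):
    """Majority vote among the k training points closest to each test point
    under the ensemble-clustering distance D (illustrative EC-kNN sketch)."""
    preds = []
    for i in test_idx:
        order = np.argsort(D[i, train_idx])[:k]      # positions within train_idx
        preds.append(np.bincount(y_train[order]).argmax())
    return np.array(preds)

train_idx, test_idx = np.arange(0, 150), np.arange(150, 200)
pred = ec_knn_predict(D, train_idx, y[train_idx], test_idx, k=5)
print("accuracy:", (pred == y[test_idx]).mean())
```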

  20. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.

  1. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  2. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays.

    PubMed

    Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W

    2015-04-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi

    2017-11-01

    The K-nearest neighbors (KNN) algorithm is a common algorithm used for classification, and also a sub-routine in various complicated machine learning tasks. In this paper, we presented a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realized a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k-nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
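
    For reference, the classical counterpart of this routine, kNN under Hamming distance on binary feature vectors, is a few lines of NumPy; the quantum circuit and the threshold-t selection described in the abstract are not reproduced here.

```python
import numpy as np

def hamming_knn(train_bits, train_labels, query_bits, k=3):
    """Classify a binary query vector by majority vote among its k nearest
    training vectors under Hamming distance (number of differing bits)."""
    d = np.count_nonzero(train_bits != query_bits, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]

train = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [1, 0, 0, 0], [1, 1, 0, 0]])
labels = np.array([0, 0, 1, 1])
print(hamming_knn(train, labels, np.array([0, 0, 1, 0]), k=3))   # -> 0
```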

  4. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than the conventional AO in system architecture and can be applied to complex conditions. Based on the analysis of the principle and system model of the WFSless AO system, wavefront correction methods of the WFSless AO system were divided into two categories: model-free-based and model-based control algorithms. The WFSless AO system based on model-free-based control algorithms commonly considers the performance metric as a function of the control parameters and then uses a certain control algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. Based on the brief description of the above typical control algorithms, hybrid methods combining the model-free-based control algorithm with the model-based control algorithm were generalized. Additionally, characteristics of various control algorithms were compared and analyzed. We also discussed the extensive applications of the WFSless AO system in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.

  5. Social Media: Menagerie of Metrics

    DTIC Science & Technology

    2010-01-27

    In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA ... 22 animals were cloned to date; genetic algorithms can help prediction (e.g. "elitism" attempts to ensure selection by including performers ...)

  6. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.

  7. Uncooperative target-in-the-loop performance with backscattered speckle-field effects

    NASA Astrophysics Data System (ADS)

    Kansky, Jan E.; Murphy, Daniel V.

    2007-09-01

    Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with a sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range in which the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.

  8. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  9. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

    ... consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then ... the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change ...

  10. Throughput analysis for the National Airspace System

    NASA Astrophysics Data System (ADS)

    Sureshkumar, Chandrasekar

    The United States National Airspace System (NAS) network performance is currently measured using a variety of metrics based on delay. Developments in the fields of wireless communication, manufacturing and other modes of transportation like road, freight, etc. have explored various metrics that complement the delay metric. In this work, we develop a throughput concept for both the terminal and en-route phases of flight inspired by studies in the above areas and explore the applications of throughput metrics for the en-route airspace of the NAS. These metrics can be applied to the NAS performance at each hierarchical level (sector, center, regional, and national) and will consist of multiple layers of networks with the bottom level comprising the traffic pattern modelled as a network of individual sectors acting as nodes. This hierarchical approach is especially suited for executive level decision making as it gives an overall picture of not just the inefficiencies but also the aspects where the NAS has performed well in a given situation, from which specific information about the effects of a policy change on the NAS performance at each level can be determined. These metrics are further validated with real traffic data using the Future Air Traffic Management Concepts Evaluation Tool (FACET) for three en-route sectors and an Air Route Traffic Control Center (ARTCC). Further, this work proposes a framework to compute the minimum makespan and the capacity of a runway system in any configuration. Towards this, an algorithm for optimal arrival and departure flight sequencing is proposed. The proposed algorithm is based on a branch-and-bound technique and allows for the efficient computation of the best runway assignment and sequencing of arrival and departure operations that minimize the makespan at a given airport. The lower and upper bounds of the cost of each branch for the best first search in the branch-and-bound algorithm are computed based on the minimum separation standards between arrival and departure operations set by the Federal Aviation Administration. The optimal objective value is mathematically proved to lie between these bounds, and the algorithm uses these bounds to efficiently find promising branches, discard all others, and terminate with at least one sequence with the minimal makespan. The proposed algorithm is analyzed and validated through real traffic operations data at the Hartsfield-Jackson Atlanta International Airport.

  11. Orbit Clustering Based on Transfer Cost

    NASA Technical Reports Server (NTRS)

    Gustafson, Eric D.; Arrieta-Camacho, Juan J.; Petropoulos, Anastassios E.

    2013-01-01

    We propose using cluster analysis to perform quick screening for combinatorial global optimization problems. The key missing component currently preventing cluster analysis from use in this context is the lack of a useable metric function that defines the cost to transfer between two orbits. We study several proposed metrics and clustering algorithms, including k-means and the expectation maximization algorithm. We also show that proven heuristic methods such as the Q-law can be modified to work with cluster analysis.

  12. Statistical issues in the comparison of quantitative imaging biomarker algorithms using pulmonary nodule volume as an example.

    PubMed

    Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P

    2015-02-01

    Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
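
    Two of the technical performance quantities discussed, bias against a phantom ground truth and the repeatability coefficient estimated from replicate measurements, can be sketched as follows using their standard definitions; the paper's recommended estimators and study designs are more nuanced.

```python
import numpy as np

def bias_and_repeatability(measured, truth=None):
    """Bias = mean of (measured - true), requiring a phantom ground truth.
    Repeatability coefficient RC = 1.96 * sqrt(2) * within-subject SD,
    with the within-subject SD estimated from replicate measurements."""
    measured = np.asarray(measured, float)          # shape (n_cases, n_replicates)
    wsd = np.sqrt(np.mean(np.var(measured, axis=1, ddof=1)))
    rc = 1.96 * np.sqrt(2.0) * wsd
    bias = None if truth is None else float(np.mean(measured.mean(axis=1) - truth))
    return bias, rc

# Two repeat volume measurements (mm^3) of three phantom nodules of known size
vols = [[510, 498], [1020, 1061], [2010, 1985]]
print(bias_and_repeatability(vols, truth=[500, 1000, 2000]))
```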

  13. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale application. The classifier is trained with k nearest neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition application.

  14. Framework for performance evaluation of face, text, and vehicle detection and tracking in video: data, metrics, and protocol.

    PubMed

    Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing

    2009-02-01

    Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.

  15. Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.

    PubMed

    Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel

    2017-01-01

    This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.

  16. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  17. New Decentralized Algorithms for Spacecraft Formation Control Based on a Cyclic Approach

    DTIC Science & Technology

    2010-06-01

    ... space framework. As a metric of performance, a common quadratic norm that weights the performance error and the control effort is traded with the cost ... R = DᵀD, then the metric of interest is the square of the 2-norm from input w to output z. Given a system G with state-space description ... spaced logarithmic spiral formation. These results are derived for ...

  18. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2017-04-01

    In this paper, an optimization technique called the peer enhanced teaching learning based optimization (PeTLBO) algorithm is used in the multi-objective problem domain. The PeTLBO algorithm is parameter-less, so it reduces the computational burden. The proposed peer enhanced multi-objective based TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is considered as 0.85 lead. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; in order to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparing with a robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.
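
    Of the two Pareto-front metrics named, the C-metric (coverage) is the simpler to state: the fraction of one front dominated by the other. The sketch below uses illustrative objective values (power loss, voltage deviation) under minimization; the S-metric (hypervolume) is not shown.

```python
import numpy as np

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def c_metric(front_a, front_b):
    """C(A, B): fraction of solutions in B dominated by at least one solution
    in A (Zitzler's coverage metric). Note C(A, B) != 1 - C(B, A) in general."""
    return float(np.mean([any(dominates(a, b) for a in front_a) for b in front_b]))

A = [(0.10, 5.0), (0.12, 4.0), (0.15, 3.5)]   # e.g. (power loss, voltage deviation)
B = [(0.11, 5.2), (0.16, 3.6), (0.20, 3.4)]
print(c_metric(A, B), c_metric(B, A))          # 0.666..., 0.0
```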

  19. Quality Assurance Assessment of Diagnostic and Radiation Therapy–Simulation CT Image Registration for Head and Neck Radiation Therapy: Anatomic Region of Interest–based Comparison of Rigid and Deformable Algorithms

    PubMed Central

    Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.

    2015-01-01

    Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (i.e., Dice similarity coefficient) and surface distance metrics (i.e., 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014. Online supplemental material is available for this article. PMID:25380454
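
    A brief illustration (not the study's QA code) of the two ROI comparison metrics named above, computed on binary masks and surface point clouds with an assumed isotropic spacing in millimeters:

        import numpy as np
        from scipy.spatial import cKDTree

        def dice(a, b):
            # Dice similarity coefficient between two binary masks.
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff95(pts_a, pts_b):
            # pts_a, pts_b: (N, 3) arrays of surface voxel coordinates in mm.
            d_ab = cKDTree(pts_b).query(pts_a)[0]   # distance from each A point to B
            d_ba = cKDTree(pts_a).query(pts_b)[0]   # distance from each B point to A
            return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))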

  20. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, covering both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on both performance and problem size. Because there are many ways to describe performance and energy efficiency, this work investigates multiple metrics, including performance-per-watt, energy-delay product, and total energy consumption. The irregular GPU-based approaches realized tremendous savings in energy consumption compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and further metric improvements were realized in the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
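
    Illustrative definitions of the three efficiency metrics compared in the paper; the work-unit count, runtime, and power values below are made up:

        def efficiency_metrics(work_units, runtime_s, avg_power_w):
            energy_j = avg_power_w * runtime_s
            return {
                "energy_J": energy_j,
                "performance_per_watt": (work_units / runtime_s) / avg_power_w,
                "energy_delay_product": energy_j * runtime_s,
            }

        print(efficiency_metrics(work_units=1.0e12, runtime_s=300.0, avg_power_w=250.0))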

  1. Methods for investigating the local spatial anisotropy and the preferred orientation of cones in adaptive optics retinal images

    PubMed Central

    Cooper, Robert F.; Lombardo, Marco; Carroll, Joseph; Sloan, Kenneth R.; Lombardo, Giuseppe

    2016-01-01

    The ability to non-invasively image the cone photoreceptor mosaic holds significant potential as a diagnostic for retinal disease. Central to the realization of this potential is the development of sensitive metrics for characterizing the organization of the mosaic. Here we evaluated previously described (Pum et al., 1990) and newly developed (Fourier- and Radon-based) methods of measuring cone orientation in both simulated and real images of the parafoveal cone mosaic. The proposed algorithms correlated well across both simulated and real mosaics, suggesting that each algorithm would provide an accurate description of individual photoreceptor orientation. Despite the high agreement between algorithms, each performed differently in response to image intensity variation and cone coordinate jitter. The integration property of the Fourier transform allowed the Fourier-based method to be resistant to cone coordinate jitter and to perform the most robustly of all three algorithms. Conversely, when there was good image quality but unreliable cone identification, the Radon algorithm performed best. Finally, in cases where both image quality and cone coordinate reliability were excellent, the method of Pum et al. (1990) performed best. These descriptors are complementary to conventional descriptive metrics of the cone mosaic, such as cell density and spacing, and have the potential to aid in the detection of photoreceptor pathology. PMID:27484961
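
    A hedged sketch of the Fourier-based idea: the dominant lattice orientation of a mosaic patch is read from the angle of the strongest non-DC peak in its power spectrum. This illustrates the concept only and is not the authors' implementation:

        import numpy as np

        def dominant_orientation_deg(patch):
            patch = patch - patch.mean()
            ps = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
            cy, cx = np.array(ps.shape) // 2
            yy, xx = np.mgrid[:ps.shape[0], :ps.shape[1]]
            ps[(yy - cy) ** 2 + (xx - cx) ** 2 < 3 ** 2] = 0.0   # suppress the DC region
            py, px = np.unravel_index(np.argmax(ps), ps.shape)
            return np.degrees(np.arctan2(py - cy, px - cx)) % 180.0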

  2. Flight Validation of a Metrics Driven L(sub 1) Adaptive Control

    NASA Technical Reports Server (NTRS)

    Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.

    2008-01-01

    The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementing the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and on initial efforts addressing the ability of a commercial off-the-shelf autopilot (AP), with and without adaptive augmentation, to recover from control surface failures.

  3. Evaluation of developmental metrics for utilization in a pediatric advanced automatic crash notification algorithm.

    PubMed

    Doud, Andrea N; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Petty, John; Stitzel, Joel D

    2016-01-01

    Appropriate treatment at designated trauma centers (TCs) improves outcomes among injured children after motor vehicle crashes (MVCs). Advanced Automatic Crash Notification (AACN) has shown promise in improving triage to appropriate TCs. Pediatric-specific AACN algorithms have not yet been created. To create such an algorithm, it will be necessary to include some metric of development (age, height, or weight) as a covariate in the injury risk algorithm. This study sought to determine which marker of development should serve as a covariate in such an algorithm and to quantify injury risk at different levels of this metric. A retrospective review of occupants age < 19 years within the MVC data set NASS-CDS 2000-2011 was performed. R(2) values of logistic regression models using age, height, or weight to predict 18 key injury types were compared to determine which metric should be used as a covariate in a pediatric AACN algorithm. Clinical judgment, literature review, and chi-square analysis were used to create groupings of the chosen metric that would discriminate injury patterns. Adjusted odds of particular injury types at the different levels of this metric were calculated from logistic regression while controlling for gender, vehicle velocity change (delta V), belted status (optimal, suboptimal, or unrestrained), and crash mode (rollover, rear, frontal, near-side, or far-side). NASS-CDS analysis produced 11,541 occupants age < 19 years with nonmissing data. Age, height, and weight were correlated with one another and with injury patterns. Age demonstrated the best predictive power in injury patterns and was categorized into bins of 0-4 years, 5-9 years, 10-14 years, and 15-18 years. Age was a significant predictor of all 18 injury types evaluated even when controlling for all other confounders and when controlling for age- and gender-specific body mass index (BMI) classifications. Adjusted odds of key injury types with respect to these age categorizations revealed that younger children were at increased odds of sustaining Abbreviated Injury Scale (AIS) 2+ and 3+ head injuries and AIS 3+ spinal injuries, whereas older children were at increased odds of sustaining thoracic fractures, AIS 3+ abdominal injuries, and AIS 2+ upper and lower extremity injuries. The injury patterns observed across developmental metrics in this study mirror those previously described among children with blunt trauma. This study identifies age as the metric best suited for use in a pediatric AACN algorithm and utilizes 12 years of data to provide quantifiable risks of particular injuries at different levels of this metric. This risk quantification will have important predictive purposes in a pediatric-specific AACN algorithm.
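
    A sketch of the kind of adjusted-odds model described above, using hypothetical column names on a pandas DataFrame (not the NASS-CDS schema); age group enters as a categorical covariate alongside delta-V, belt status, crash mode, and sex:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        def fit_injury_model(df: pd.DataFrame) -> pd.Series:
            # Binary outcome (e.g., an AIS 2+ head injury flag) against assumed covariates.
            model = smf.logit(
                "head_ais2plus ~ C(age_group) + delta_v_kph + C(belt_status)"
                " + C(crash_mode) + C(sex)",
                data=df,
            ).fit(disp=False)
            return np.exp(model.params)  # adjusted odds ratios for each term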

  4. A globally optimal k-anonymity method for the de-identification of health data.

    PubMed

    El Emam, Khaled; Dankar, Fida Kamal; Issa, Romeo; Jonker, Elizabeth; Amyot, Daniel; Cogo, Elise; Corriveau, Jean-Pierre; Walker, Mark; Chowdhury, Sadrul; Vaillancourt, Regis; Roffey, Tyson; Bottomley, Jim

    2009-01-01

    Explicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified. The authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and that is suitable for health datasets. The authors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Three information loss metrics were used for the comparison: precision, the discernability metric, and non-uniform entropy. Each algorithm's speed was also evaluated. The Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution. For the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance.
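
    A short sketch of the discernability metric, one of the three information-loss metrics listed above; the DataFrame layout and quasi-identifier handling are simplifying assumptions (equivalence classes smaller than k are treated as suppressed):

        import pandas as pd

        def discernability_metric(df: pd.DataFrame, quasi_identifiers, k: int) -> int:
            n = len(df)
            sizes = df.groupby(quasi_identifiers).size()
            retained = sizes[sizes >= k]     # each record is charged the size of its class
            suppressed = sizes[sizes < k]    # suppressed records are charged the dataset size
            return int((retained ** 2).sum() + n * suppressed.sum())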

  5. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
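
    For orientation, a minimal sketch of the Chambolle and Pock primal-dual iteration the paper builds on, shown here for the much simpler ROF/TV denoising problem min_u 0.5*||u - f||^2 + lam*||grad u||_1 rather than the paper's diffeomorphic registration energies:

        import numpy as np

        def grad(u):
            gx = np.zeros_like(u); gy = np.zeros_like(u)
            gx[:-1, :] = u[1:, :] - u[:-1, :]
            gy[:, :-1] = u[:, 1:] - u[:, :-1]
            return gx, gy

        def div(px, py):  # negative adjoint of grad: <grad u, p> = <u, -div(p)>
            dx = np.zeros_like(px); dy = np.zeros_like(py)
            dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
            dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_denoise_chambolle_pock(f, lam=0.1, n_iter=200):
            L2 = 8.0                        # squared norm of the discrete gradient operator
            tau = sigma = 1.0 / np.sqrt(L2)  # step sizes with tau*sigma*L2 <= 1
            u = f.copy(); u_bar = f.copy()
            px = np.zeros_like(f); py = np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(u_bar)
                px, py = px + sigma * gx, py + sigma * gy
                norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)  # project dual onto |p| <= lam
                px, py = px / norm, py / norm
                u_old = u
                u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)   # prox of 0.5*||u - f||^2
                u_bar = 2.0 * u - u_old                               # over-relaxation (theta = 1)
            return u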

  6. Analysis of Community Detection Algorithms for Large Scale Cyber Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee

    The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concept of clustering and community detection followed by the research question that the team aimed to address. Further the paper describes the graph metrics that were considered in order to shortlist algorithms followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section in the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.

  7. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.

    2018-01-01

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
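
    A hedged illustration of the three contrast families compared above, applied to a pair of repeated complex-valued B-scans; these are generic differencing formulas, not the authors' specific OCTA algorithms:

        import numpy as np

        def octa_contrasts(c1, c2):
            # c1, c2: complex-valued repeated B-scans of identical shape.
            a1, a2 = np.abs(c1), np.abs(c2)
            dphi = np.angle(c1 * np.conj(c2))          # inter-scan phase difference
            return {
                "amplitude": np.abs(a1 - a2),          # amplitude-based flow signal
                "phase": np.abs(dphi),                 # phase-based flow signal
                "complex": np.abs(c1 - c2),            # complex-difference flow signal
            }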

  8. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    PubMed

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  9. Statistical Issues in the Comparison of Quantitative Imaging Biomarker Algorithms using Pulmonary Nodule Volume as an Example

    PubMed Central

    2014-01-01

    Quantitative imaging biomarkers (QIBs) are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure QIBs have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for QIB studies. PMID:24919828
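
    A sketch of two of the technical-performance statistics discussed above (bias against known truth, and a repeatability coefficient from repeated measurements); the data layout is an assumption:

        import numpy as np

        def bias_and_repeatability(measured, truth):
            # measured: shape (n_cases, n_repeats); truth: shape (n_cases,)
            err = measured - truth[:, None]
            bias = err.mean()                                   # mean measurement bias
            within_sd = np.sqrt(measured.var(axis=1, ddof=1).mean())
            repeatability_coeff = 2.77 * within_sd              # approx. 1.96*sqrt(2)*within-case SD
            return bias, repeatability_coeff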

  10. Dehazed Image Quality Assessment by Haze-Line Theory

    NASA Astrophysics Data System (ADS)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess or rank the performance of these algorithms. In this paper, an indicator of contrast enhancement is proposed based on the recently proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
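
    A loose sketch of a haze-line-style contrast score: cluster the dehazed image's RGB values and rate contrast by the average pairwise distance between cluster centers. This illustrates the idea only and is not the paper's CC index:

        import numpy as np
        from scipy.spatial.distance import pdist
        from sklearn.cluster import KMeans

        def color_contrast_index(img_rgb, n_clusters=50, max_pixels=20000, seed=0):
            pixels = img_rgb.reshape(-1, 3).astype(float)
            rng = np.random.default_rng(seed)
            if len(pixels) > max_pixels:                     # subsample for speed
                pixels = pixels[rng.choice(len(pixels), max_pixels, replace=False)]
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
            return pdist(km.cluster_centers_).mean()         # larger => more inter-cluster spread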

  11. New metric for optimizing Continuous Loop Averaging Deconvolution (CLAD) sequences under the 1/f noise model

    PubMed Central

    Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei

    2017-01-01

    Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, and it requires an elaborate stimulus sequence design to attenuate the impact of noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but is less effective in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical electroencephalogram (EEG) recordings, as well as on five real CLAD EEG recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies and better performance than the original NGF in various respects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence and provides a more efficient means of generating CLAD sequences through incorporation with optimization algorithms. The present study can facilitate the application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803

  12. Evaluation of BLAST-based edge-weighting metrics used for homology inference with the Markov Clustering algorithm.

    PubMed

    Gibbons, Theodore R; Mount, Stephen M; Cooper, Endymion D; Delwiche, Charles F

    2015-07-10

    Clustering protein sequences according to inferred homology is a fundamental step in the analysis of many large data sets. Since the publication of the Markov Clustering (MCL) algorithm in 2002, it has been the centerpiece of several popular applications. Each of these approaches generates an undirected graph that represents sequences as nodes connected to each other by edges weighted with a BLAST-based metric. MCL is then used to infer clusters of homologous proteins by analyzing these graphs. The various approaches differ only by how they weight the edges, yet there has been very little direct examination of the relative performance of alternative edge-weighting metrics. This study compares the performance of four BLAST-based edge-weighting metrics: the bit score, bit score ratio (BSR), bit score over anchored length (BAL), and negative common log of the expectation value (NLE). Performance is tested using the Extended CEGMA KOGs (ECK) database, which we introduce here. All metrics performed similarly when analyzing full-length sequences, but dramatic differences emerged as progressively larger fractions of the test sequences were split into fragments. The BSR and BAL successfully rescued subsets of clusters by strengthening certain types of alignments between fragmented sequences, but also shifted the largest correct scores down near the range of scores generated from spurious alignments. This penalty outweighed the benefits in most test cases, and was greatly exacerbated by increasing the MCL inflation parameter, making these metrics less robust than the bit score or the more popular NLE. Notably, the bit score performed as well or better than the other three metrics in all scenarios. The results provide a strong case for use of the bit score, which appears to offer equivalent or superior performance to the more popular NLE. The insight that MCL-based clustering methods can be improved using a more tractable edge-weighting metric will greatly simplify future implementations. We demonstrate this with our own minimalist Python implementation: Porthos, which uses only standard libraries and can process a graph with 25M+ edges connecting the 60k+ KOG sequences in half a minute using less than half a gigabyte of memory.
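
    A sketch of three of the BLAST-based edge weights compared above, computed from tabular hits; the self-hit bit scores needed for the bit score ratio are assumed to be available in a dictionary, and the exact BSR normalization shown is a common convention rather than necessarily the paper's:

        import math

        def edge_weights(qseqid, sseqid, evalue, bitscore, self_bits):
            nle = -math.log10(evalue) if evalue > 0 else 300.0     # cap when the E-value underflows to 0
            bsr = bitscore / min(self_bits[qseqid], self_bits[sseqid])
            return {"bit_score": bitscore, "NLE": nle, "BSR": bsr}

        print(edge_weights("q1", "s7", 1e-42, 173.0, {"q1": 210.0, "s7": 198.0}))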

  13. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA, tested on plots of different land cover types, showed improved accuracy in ground detection for vegetation plots and similar accuracy for developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, given by the centroid of the Gaussian function. By considering signal reflection from different vegetation layers, the developed metrics obtained better estimation accuracy for aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.

  14. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
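
    Two of the benchmark's performance metrics, sketched under the assumption that `hom` is a homogenized monthly series and `truth` the corresponding known homogeneous series:

        import numpy as np

        def centered_rmse(hom, truth):
            # RMSE after removing each series' own mean, so a constant offset is not penalized.
            return np.sqrt(np.mean(((hom - hom.mean()) - (truth - truth.mean())) ** 2))

        def trend_error(hom, truth, years):
            # Difference between fitted linear trends (units per year).
            return np.polyfit(years, hom, 1)[0] - np.polyfit(years, truth, 1)[0]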

  15. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  16. Validation of neural spike sorting algorithms without ground-truth information.

    PubMed

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
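
    A hedged sketch of the stability idea: rerun the sorter, build a confusion matrix between the two labelings (assuming the runs' events have already been matched so that corresponding entries refer to the same spike), optimally pair the units, and report per-unit agreement. This mirrors the spirit of the metrics rather than the authors' exact definitions:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def unit_stability(labels_a, labels_b):
            units_a, ia = np.unique(labels_a, return_inverse=True)
            units_b, ib = np.unique(labels_b, return_inverse=True)
            conf = np.zeros((len(units_a), len(units_b)), dtype=int)
            np.add.at(conf, (ia, ib), 1)                      # confusion matrix between runs
            rows, cols = linear_sum_assignment(-conf)         # Hungarian matching (maximize overlap)
            union = (conf[rows, :].sum(axis=1) + conf[:, cols].sum(axis=0)
                     - conf[rows, cols])
            agreement = conf[rows, cols] / np.maximum(union, 1)
            return dict(zip(units_a[rows], agreement))        # per-unit Jaccard-style stability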

  17. A comparative study of DIGNET, average, complete, single hierarchical and k-means clustering algorithms in 2D face image recognition

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2014-06-01

    The study in this paper belongs to a more general line of research on discovering facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database in which each component represents the likelihood that a given face belongs to each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage of this system involves a clustering method that evaluates and compares the clustering results of five different clustering algorithms (average, complete, and single hierarchical clustering, k-means, and DIGNET) and selects the best strategy for each data collection. In this paper we present the comparative performance of DIGNET and the four other clustering algorithms (average, complete, and single hierarchical clustering, and k-means) on fabricated 2D and 3D samples and on actual face images from various databases, using four standard metrics. These metrics are the silhouette figure, the mean silhouette coefficient, the Hubert test Γ coefficient, and the classification accuracy of each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the metric values fall below the acceptance threshold but not too far below it (very low values correspond to ambiguous or false results), the clustering results need to be verified by the other algorithms.
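
    A simple illustration (not the authors' pipeline, and omitting DIGNET, which is not a standard library algorithm) of comparing clusterings by the mean silhouette coefficient, one of the four metrics listed above:

        from sklearn.cluster import AgglomerativeClustering, KMeans
        from sklearn.metrics import silhouette_score

        def compare_clusterings(features, n_clusters=5, seed=0):
            labelings = {
                "k-means": KMeans(n_clusters=n_clusters, n_init=10,
                                  random_state=seed).fit_predict(features),
                "average-link": AgglomerativeClustering(n_clusters=n_clusters,
                                                        linkage="average").fit_predict(features),
                "complete-link": AgglomerativeClustering(n_clusters=n_clusters,
                                                         linkage="complete").fit_predict(features),
                "single-link": AgglomerativeClustering(n_clusters=n_clusters,
                                                       linkage="single").fit_predict(features),
            }
            return {name: silhouette_score(features, labels)
                    for name, labels in labelings.items()}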

  18. Ranking of stopping criteria for log domain diffeomorphic demons application in clinical radiation therapy.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2011-01-01

    Deformable Image Registration is a complex optimization algorithm with the goal of modeling a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performances of four stopping criteria and six stopping value computation strategies for a log domain deformable registration. The stopping criteria we test are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the metric minimum of between four and six previous iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
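
    A sketch of the windowed stopping rule described above: stop when the best metric value over the last three iterations no longer improves on the minimum from the three iterations before that (i.e., four to six iterations back); the window sizes follow the paper, but the code is illustrative:

        def should_stop(metric_history, recent=3, lookback=3):
            # metric_history: metric value per iteration, most recent last (lower is better assumed).
            if len(metric_history) < recent + lookback:
                return False
            recent_best = min(metric_history[-recent:])
            reference_best = min(metric_history[-(recent + lookback):-recent])
            return recent_best >= reference_best      # no improvement over the reference window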

  19. Enhanced Approximate Nearest Neighbor via Local Area Focused Search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzales, Antonio; Blazier, Nicholas Paul

    Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-Trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they only support a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller (fewer similarity comparisons) queries and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well known datasets and is easily extended to general similarity metrics using kernel projection techniques.

  20. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
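
    An illustrative computation of a 3-sigma INR metric as the 99.73rd percentile of absolute registration errors accumulated over an evaluation window; the units and data layout are assumptions:

        import numpy as np

        def three_sigma_metric(errors_urad):
            # errors_urad: registration errors (e.g., in microradians) accumulated over 24 hours.
            return np.percentile(np.abs(np.asarray(errors_urad)), 99.73)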

  1. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  2. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  3. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and distinct features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.

  4. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.
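
    A minimal sketch of the object JLLDM learns, a Mahalanobis distance parameterized by a positive semidefinite matrix M; the joint learning of labels and M is not reproduced here:

        import numpy as np

        def mahalanobis(x, y, M):
            # d_M(x, y) = sqrt((x - y)^T M (x - y)); with M = I this is the Euclidean distance.
            d = np.asarray(x) - np.asarray(y)
            return float(np.sqrt(d @ M @ d))

        print(mahalanobis([1.0, 2.0], [3.0, 1.0], np.eye(2)))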

  5. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  6. Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM

    NASA Technical Reports Server (NTRS)

    Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip

    2017-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.

  7. Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.

    PubMed

    Koch, Julian; Stisen, Simon

    2017-01-01

    Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility of automating processes. Nevertheless, human vision can be harnessed to evaluate the reliability of algorithms tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns from several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human ability to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ markedly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The resulting dataset can provide an insightful benchmark for the community to test novel spatial metrics.

  8. An enhanced TIMESAT algorithm for estimating vegetation phenology metrics from MODIS data

    USGS Publications Warehouse

    Tan, B.; Morisette, J.T.; Wolfe, R.E.; Gao, F.; Ederer, G.A.; Nightingale, J.; Pedelty, J.A.

    2011-01-01

    An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates. © 2010 IEEE.

  9. An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.

    2012-01-01

    An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
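
    A loose sketch of a third-derivative test on a smoothed VI series: candidate transition dates are taken where the magnitude of the third derivative peaks locally. This illustrates the idea only and is not the enhanced TIMESAT implementation:

        import numpy as np

        def third_derivative_candidates(vi_smoothed, dates):
            # vi_smoothed, dates: 1-D numpy arrays (e.g., smoothed EVI and day-of-year values).
            d3 = np.gradient(np.gradient(np.gradient(vi_smoothed, dates), dates), dates)
            mag = np.abs(d3)
            # Local maxima of |d3| mark where curvature changes fastest (greenup/senescence onsets).
            idx = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
            return dates[idx], d3[idx]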

  10. An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN

    PubMed Central

    Piao, Xianglan; Qiu, Tie

    2014-01-01

    A WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not by itself assure good quality of wireless links, to improve routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN, based on OLSR (optimized link state routing) and ETX, is proposed in this paper. This mechanism always chooses the highest-throughput links to improve routing performance over the WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism using the ETX metric alone. PMID:25250379
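
    A sketch of the ETX link metric referenced above: with df the forward delivery ratio and dr the reverse delivery ratio estimated from probe packets, ETX = 1/(df*dr) is the expected number of transmissions (including retransmissions) needed for a successful delivery and acknowledgment:

        def etx(df: float, dr: float) -> float:
            if df <= 0.0 or dr <= 0.0:
                return float("inf")       # link unusable in at least one direction
            return 1.0 / (df * dr)

        print(etx(0.9, 0.8))  # about 1.39 expected transmissions on this link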

  11. An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.

    PubMed

    Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie

    2014-01-01

    WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy decides the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest throughput links to improve the performance of routing over WMN and then reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.
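
    The ETX metric referenced above is conventionally computed from the measured forward and reverse delivery ratios of a link and summed along a route. The minimal Python sketch below illustrates that standard definition only; it is not the authors' implementation, and the delivery ratios are invented for the example.

```python
# Minimal sketch: ETX (expected transmission count) for a wireless link,
# computed from forward (df) and reverse (dr) delivery ratios.
# The ratios below are illustrative, not measurements from the paper.

def etx(df: float, dr: float) -> float:
    """ETX = 1 / (df * dr); lower is better (1.0 = perfect link)."""
    if df <= 0.0 or dr <= 0.0:
        return float("inf")   # link unusable
    return 1.0 / (df * dr)

def path_etx(link_ratios):
    """Route metric: sum of per-hop ETX values along a path."""
    return sum(etx(df, dr) for df, dr in link_ratios)

if __name__ == "__main__":
    # Two candidate paths described by (forward, reverse) delivery ratios per hop.
    path_a = [(0.9, 0.95), (0.8, 0.85)]
    path_b = [(0.99, 0.99), (0.6, 0.7), (0.9, 0.9)]
    print(path_etx(path_a), path_etx(path_b))   # the lower sum is the preferred route
```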

  12. Oscillatory neural network for pattern recognition: trajectory based classification and supervised learning.

    PubMed

    Miller, Vonda H; Jansen, Ben H

    2008-12-01

    Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning using a trajectory-based distance metric for category discrimination in an oscillatory neural network model. Classification is accomplished using this trajectory-based distance metric; since the metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric is useful for simple pattern recognition tasks and that supervised learning improves classification results.

  13. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
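
    As a rough illustration of the "constant metric" idea, the Python sketch below runs a fixed-preconditioner (Jacobi) Richardson-type iteration on a generic linear system. It is only a loose analogue of keeping the preconditioner unchanged between iterations; the paper's augmented Hu-Washizu formulation and displacement-method preconditioner are not reproduced here, and the test matrix is synthetic.

```python
import numpy as np

# Sketch of a "constant metric" iteration: a fixed preconditioner M_inv is
# reused at every step (the "variable metric" variant would update it).
# Illustrative Richardson/Jacobi loop, not the paper's mixed FE formulation.

def preconditioned_richardson(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    k = 0
    for k in range(max_iter):
        r = b - A @ x                 # current residual
        if np.linalg.norm(r) < tol:
            break
        x = x + M_inv @ r             # constant metric: M_inv never changes
    return x, k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((20, 20))
    A = B @ B.T
    A += np.diag(np.abs(A).sum(axis=1))      # make A strictly diagonally dominant
    b = rng.standard_normal(20)
    M_inv = np.diag(1.0 / np.diag(A))        # Jacobi preconditioner as the fixed "metric"
    x, iters = preconditioned_richardson(A, b, M_inv)
    print(iters, np.linalg.norm(A @ x - b))  # iterations used, final residual norm
```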

  14. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm, for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The adequacy of the proposed methodology has been experimentally validated with respect to image quality metrics such as brightness preservation, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM), and the universal image quality metric. The performance analysis of the proposed Stochastic HKMDHE with existing histogram equalization methodologies like Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) has been given for comparative evaluation.
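
    Of the quality metrics listed above, PSNR has the simplest definition. The short Python sketch below shows the usual computation on synthetic images; it is illustrative only and does not reproduce the paper's evaluation pipeline.

```python
import numpy as np

# Illustrative PSNR computation on synthetic 8-bit images.

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    noisy = np.clip(original + rng.normal(0, 5, original.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(original, noisy):.2f} dB")
```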

  15. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.

  16. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    NASA Astrophysics Data System (ADS)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are limited in order to make deep comparisons due to the fact that they discard relevant factors required in real-time applications such as run times, costs of misclassification and the competence to mark anomalies with high scores. This last fact is fundamental in anomaly detection in order to distinguish them easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics in order to complement ROC curves analysis. Results support our proposal and demonstrate that ROC curves do not provide a good visualization of detection performances by themselves. Moreover, a figure of merit has been proposed in this paper which encompasses in a single global metric all the measures yielded for the proposed additional metrics. Therefore, this figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation of using extra measures to properly evaluate performances has been supported and justified by the conclusions drawn from the simulations.
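
    The idea of augmenting ROC analysis with additional measures can be illustrated in a few lines of Python. The composite score below, which folds run time into an AUC-based number, is an arbitrary stand-in with made-up weights and data; the paper's Detection Efficiency (DE) figure of merit is defined by the authors and is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Sketch: scoring an anomaly detector with ROC analysis plus an extra factor
# (run time), combined into an ad hoc composite score. Illustrative only.

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=1000)                              # 1 = anomaly
scores = labels * rng.normal(2.0, 1.0, 1000) + rng.normal(0.0, 1.0, 1000)

auc_value = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)
runtime_s = 0.42                                                    # hypothetical run time

# Illustrative composite: reward AUC, penalize run time (weights are arbitrary).
composite = auc_value / (1.0 + runtime_s)
print(f"AUC = {auc_value:.3f}, composite score = {composite:.3f}")
```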

  17. Calabi-Yau metrics for quotients and complete intersections

    DOE PAGES

    Braun, Volker; Brelidze, Tamaz; Douglas, Michael R.; ...

    2008-05-22

    We extend previous computations of Calabi-Yau metrics on projective hypersurfaces to free quotients, complete intersections, and free quotients of complete intersections. In particular, we construct these metrics on generic quintics, four-generation quotients of the quintic, Schoen Calabi-Yau complete intersections and the quotient of a Schoen manifold with Z₃ x Z₃ fundamental group that was previously used to construct a heterotic standard model. Various numerical investigations into the dependence of Donaldson's algorithm on the integration scheme, as well as on the Kähler and complex structure moduli, are also performed.

  18. A scalable kernel-based semisupervised metric learning algorithm with out-of-sample generalization ability.

    PubMed

    Yeung, Dit-Yan; Chang, Hong; Dai, Guang

    2008-11-01

    In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
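
    A common way to realize such low-rank kernel approximations is the Nyström method, sketched below in Python on a toy RBF kernel. This generic sketch is meant only to convey the scaling idea; it is not the authors' specific approximation scheme or metric learning formulation.

```python
import numpy as np

# Generic Nystrom low-rank approximation of an RBF kernel matrix: approximate
# the full n x n kernel from an n x m block built on m landmark points.

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 10))
m = 50                                    # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)

C = rbf_kernel(X, X[idx])                 # n x m cross-kernel
W = C[idx, :]                             # m x m landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T    # Nystrom approximation of the full kernel

K_full = rbf_kernel(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative approximation error: {err:.3f}")
```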

  19. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    PubMed

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified five most relevant APMs adopted by the algorithm during predicting. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had significant association with the "ground truth" in surgery time (p < 0.001, r = 0.73), LOS (p = 0.05, r = 0.52), and Foley duration (p < 0.001, r = 0.45). The five most relevant APMs, adopted by the RF-50 algorithm in predicting, were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.

  20. Defining and quantifying users' mental Imagery-based BCI skills: a first step.

    PubMed

    Lotte, Fabien; Jeunet, Camille

    2018-05-17

    While promising for many applications, Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are still scarcely used outside laboratories, due to a poor reliability. It is thus necessary to study and fix this reliability issue. Doing so requires the use of appropriate reliability metrics to quantify both the classification algorithm and the BCI user's performances. So far, Classification Accuracy (CA) is the typical metric used for both aspects. However, we argue in this paper that CA is a poor metric to study BCI users' skills. Here, we propose a definition and new metrics to quantify such BCI skills for Mental Imagery (MI) BCIs, independently of any classification algorithm. Approach: We first show in this paper that CA is notably unspecific, discrete, training data and classifier dependent, and as such may not always reflect successful self-modulation of EEG patterns by the user. We then propose a definition of MI-BCI skills that reflects how well the user can self-modulate EEG patterns, and thus how well he could control an MI-BCI. Finally, we propose new performance metrics, classDis, restDist and classStab that specifically measure how distinct and stable the EEG patterns produced by the user are, independently of any classifier. Main results: By re-analyzing EEG data sets with such new metrics, we indeed confirmed that CA may hide some increase in MI-BCI skills or hide the user inability to self-modulate a given EEG pattern. On the other hand, our new metrics could reveal such skill improvements as well as identify when a mental task performed by a user was no different than rest EEG. Significance: Our results showed that when studying MI-BCI users' skills, CA should be used with care, and complemented with metrics such as the new ones proposed. Our results also stressed the need to redefine BCI user training by considering the different BCI subskills and their measures. To promote the complementary use of our new metrics, we provide the Matlab code to compute them for free and open-source. © 2018 IOP Publishing Ltd.

  1. Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach.

    PubMed

    Lavdas, Ioannis; Glocker, Ben; Kamnitsas, Konstantinos; Rueckert, Daniel; Mair, Henrietta; Sandhu, Amandeep; Taylor, Stuart A; Aboagye, Eric O; Rockall, Andrea G

    2017-10-01

    As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. The first algorithm is based on classification forests (CFs), the second is based on 3D convolutional neural networks (CNNs) and the third algorithm is based on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variances was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14, CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10, MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15. Mean surface distance metrics for all the segmented structures were: CFs: ASD = 13.5 ± 11.3 mm, RMSSD = 34.6 ± 37.6 mm and HD = 185.7 ± 194.0 mm, CNNs; ASD = 5.48 ± 4.84 mm, RMSSD = 17.0 ± 13.3 mm and HD = 199.0 ± 101.2 mm, MA: ASD = 4.22 ± 2.42 mm, RMSSD = 6.13 ± 2.55 mm, and HD = 38.9 ± 28.9 mm. The pooled performance of CFs improved when all imaging combinations (T2w + T1w + DWI) were used as input, while the performance of CNNs deteriorated, but in neither case, significantly. CNNs with T2w images as input, performed significantly better than CFs with all imaging combinations as input for all anatomical labels, except for the bladder. Three state-of-the-art algorithms were developed and used to automatically segment major organs and bones in whole body MRI; good agreement to manual segmentations performed by clinical MRI experts was observed. CNNs perform favorably, when using T2w volumes as input. Using multimodal MRI data as input to CNNs did not improve the segmentation performance. © 2017 American Association of Physicists in Medicine.
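
    Two of the reported agreement metrics, the Dice similarity coefficient and the Hausdorff distance, can be computed as in the Python sketch below on toy binary masks. Voxel spacing, surface extraction, and the exact distance definitions used in the paper may differ.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Illustrative Dice similarity coefficient (DSC) and Hausdorff distance (HD)
# between an "automatic" and a "manual" binary mask, in pixel units.

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    pa = np.argwhere(a).astype(float)          # coordinates of mask voxels
    pb = np.argwhere(b).astype(float)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

if __name__ == "__main__":
    auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True
    manual = np.zeros((64, 64), dtype=bool); manual[22:42, 18:38] = True
    print(f"DSC = {dice(auto, manual):.3f}, HD = {hausdorff(auto, manual):.1f} px")
```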

  2. Multi-linear model set design based on the nonlinearity measure and H-gap metric.

    PubMed

    Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali

    2017-05-01

    This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, nonlinearity measure and H-gap metric are used to provide an effective algorithm to design a model bank for the system. Then, the proposed model bank is accompanied with model predictive controllers to design a high performance advanced process controller. The advantage of this method is the reduction of excessive switch between models and also decrement of the computational complexity in the controller bank that can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus which confirms the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.

  4. Citizen science: A new perspective to advance spatial pattern evaluation in hydrology

    PubMed Central

    Stisen, Simon

    2017-01-01

    Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. However, in practice, science still largely depends on computer-based solutions, which provide benefits such as speed and the possibility to automate processes. Human vision can nevertheless be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic human perception in distinguishing between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but clearly provide auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark to the community to test novel spatial metrics. PMID:28558050

  5. Evaluation of Automatically Quantified Foveal Avascular Zone Metrics for Diagnosis of Diabetic Retinopathy Using Optical Coherence Tomography Angiography.

    PubMed

    Lu, Yansha; Simonett, Joseph M; Wang, Jie; Zhang, Miao; Hwang, Thomas; Hagag, Ahmed M; Huang, David; Li, Dengwang; Jia, Yali

    2018-05-01

    To describe an automated algorithm to quantify the foveal avascular zone (FAZ), using optical coherence tomography angiography (OCTA), and to compare its performance for diagnosis of diabetic retinopathy (DR) and association with best-corrected visual acuity (BCVA) to that of extrafoveal avascular area (EAA). We obtained 3 × 3-mm macular OCTA scans in diabetic patients with various levels of DR and healthy controls. An algorithm based on a generalized gradient vector flow (GGVF) snake model detected the FAZ, and metrics assessing FAZ size and irregularity were calculated. We compared the automated FAZ segmentation to manual delineation and tested the within-visit repeatability of FAZ metrics. The correlations of two conventional FAZ metrics, two novel FAZ metrics, and EAA with DR severity and BCVA, as determined by Early Treatment Diabetic Retinopathy Study (ETDRS) charts, were assessed. Sixty-six eyes from 66 diabetic patients and 19 control eyes from 19 healthy participants were included. The agreement between manual and automated FAZ delineation had a Jaccard index > 0.82, and the repeatability of automated FAZ detection was excellent in eyes at all levels of DR severity. FAZ metrics that incorporated both FAZ size and shape irregularity had the strongest correlation with clinical DR grade and BCVA. Of all the tested OCTA metrics, EAA had the greatest sensitivity in differentiating diabetic eyes without clinical evidence of retinopathy, mild to moderate nonproliferative DR (NPDR), and severe NPDR to proliferative DR from healthy controls. The GGVF snake algorithm tested in this study can accurately and reliably detect the FAZ, using OCTA data at all DR severity grades, and may be used to obtain clinically useful information from OCTA data regarding macular ischemia in patients with diabetes. While FAZ metrics can provide clinically useful information regarding macular ischemia, and possibly visual acuity potential, EAA measurements may be a better biomarker for DR.
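
    FAZ size and shape-irregularity metrics of the kind described above can be derived from a binary segmentation mask. The Python sketch below computes area and a simple circularity index as an illustration; the perimeter estimate is crude, and the paper's exact metric definitions and pixel-to-millimeter scaling are not reproduced.

```python
import numpy as np

# Sketch of simple shape metrics from a binary FAZ mask: area and a
# circularity index 4*pi*A / P^2 (equal to 1 for a perfect disc in the
# continuum; the crude edge-count perimeter used here overestimates P,
# so rasterized values come out lower).

def area_and_perimeter(mask: np.ndarray):
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1, constant_values=False)
    perim = 0
    # Count exposed pixel edges (4-connectivity) as a simple perimeter estimate.
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        neighbor = np.roll(padded, shift, axis=axis)
        perim += int(np.logical_and(padded, ~neighbor).sum())
    return area, perim

def circularity(mask: np.ndarray) -> float:
    area, perim = area_and_perimeter(mask)
    return 4.0 * np.pi * area / (perim ** 2) if perim > 0 else 0.0

if __name__ == "__main__":
    yy, xx = np.mgrid[:200, :200]
    disc = (yy - 100) ** 2 + (xx - 100) ** 2 < 50 ** 2
    print(f"circularity of a rasterized disc = {circularity(disc):.2f}")
```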

  6. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  7. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  8. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated for average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance compared to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is robust compared to DWT for various noise types in classifying target audio signals.
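
    Average SNR and segmental SNR, two of the evaluation metrics named above, can be computed as in the Python sketch below on a synthetic signal. The frame length and clipping range are common choices, not values taken from the thesis.

```python
import numpy as np

# Illustrative average SNR and segmental SNR (SSNR) of a denoised signal
# against a clean reference. Frame length and clipping bounds are typical
# choices, not the thesis settings.

def snr_db(clean: np.ndarray, denoised: np.ndarray) -> float:
    noise = clean - denoised
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def segmental_snr_db(clean, denoised, frame=256, floor=-10.0, ceil=35.0):
    vals = []
    for i in range(0, len(clean) - frame + 1, frame):
        c = clean[i:i + frame]
        n = c - denoised[i:i + frame]
        if np.sum(n ** 2) == 0:
            continue                                   # skip noise-free frames
        vals.append(np.clip(10.0 * np.log10(np.sum(c ** 2) / np.sum(n ** 2)), floor, ceil))
    return float(np.mean(vals))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 8000)
    clean = np.sin(2 * np.pi * 440 * t)
    denoised = clean + 0.05 * rng.standard_normal(t.size)   # residual noise after denoising
    print(f"SNR = {snr_db(clean, denoised):.1f} dB, "
          f"SSNR = {segmental_snr_db(clean, denoised):.1f} dB")
```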

  9. Role of quality of service metrics in visual target acquisition and tracking in resource constrained environments

    NASA Astrophysics Data System (ADS)

    Anderson, Monica; David, Phillip

    2007-04-01

    Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. This system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.

  10. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    PubMed

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

    Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms based on data from healthy individuals to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (Healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. For verifying validity, a series of Kruskal Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location, activity characteristics and the population specific condition influence the validity of estimation of physical activity metrics using standard proprietary algorithms. Implementing population specific customized algorithms accounting for the influences of sensor location, type and activity characteristics for estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.

  11. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.

  12. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA only selects minority isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.
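
    Of the three performance metrics mentioned, the spacing metric has a particularly compact definition (Schott's spacing). The Python sketch below computes it for toy two-objective fronts; it is a generic implementation, not code from the NNIA study.

```python
import numpy as np

# Schott's spacing metric for a set of nondominated objective vectors:
# lower values indicate a more uniformly distributed front.

def spacing(front: np.ndarray) -> float:
    n = len(front)
    d = np.empty(n)
    for i in range(n):
        diff = np.abs(front - front[i]).sum(axis=1)   # L1 distances to all points
        diff[i] = np.inf                              # exclude the point itself
        d[i] = diff.min()
    return float(np.sqrt(np.sum((d - d.mean()) ** 2) / (n - 1)))

if __name__ == "__main__":
    # Toy 2-objective fronts: evenly spread vs. clumped near one end.
    t = np.linspace(0, 1, 20)
    even = np.column_stack([t, 1 - t])
    clumped = np.column_stack([t ** 3, 1 - t ** 3])
    print(f"spacing (even)    = {spacing(even):.4f}")
    print(f"spacing (clumped) = {spacing(clumped):.4f}")
```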

  13. Categorization of hyperspectral information (HSI) based on the distribution of spectra in hyperspace

    NASA Astrophysics Data System (ADS)

    Resmini, Ronald G.

    2003-09-01

    Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, often the design of HSI collection experiments is based on the acquisition of data of the various backgrounds or of objects of interest within the various terrain types. These data are for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of the backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively, in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together; e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five are comprised of two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed as are recommendations for future work.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.

  15. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Elgindy, Tarek; Hodge, Brian S

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be made using the solar forecasts to better understand power system impacts.

  16. SU-F-J-84: Comparison of Quantitative Deformable Image Registration Evaluation Tools: Application to Prostate IGART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogan, N; Weiss, E; Sleeman, W

    Purpose: Errors in displacement vector fields (DVFs) generated by Deformable Image Registration (DIR) algorithms can give rise to significant uncertainties in contour propagation and dose accumulation in Image-Guided Adaptive Radiotherapy (IGART). The purpose of this work is to assess the accuracy of two DIR algorithms using a variety of quality metrics for prostate IGART. Methods: Pelvic CT images were selected from an anonymized database of nineteen prostate patients who underwent 8–12 serial scans during radiotherapy. Prostate, bladder, and rectum were contoured on 34 image-sets for three patients by the same physician. The planning CT was deformably-registered to daily CT using three variants of the Small deformation Inverse Consistent Linear Elastic (SICLE) algorithm: Grayscale-driven (G), Contour-driven (C, which utilizes segmented structures to drive DIR), combined (G+C); and also grayscale ITK demons (Gd). The accuracy of G, C, G+C SICLE and Gd registrations were evaluated using a new metric Edge Gradient Distance to Agreement (EGDTA) and other commonly-used metrics such as Pearson Correlation Coefficient (PCC), Dice Similarity Index (DSI) and Hausdorff Distance (HD). Results: C and G+C demonstrated much better performance at organ boundaries, revealing the lowest HD and highest DSI, in prostate, bladder and rectum. G+C demonstrated the lowest mean EGDTA (1.14 mm), which corresponds to highest registration quality, compared to G and C DVFs (1.16 and 2.34 mm). However, demons DIR showed the best overall performance, revealing lowest EGDTA (0.73 mm) and highest PCC (0.85). Conclusion: As expected, both C- and C+G SICLE more accurately reproduce manually-contoured target datasets than G-SICLE or Gd using HD and DSI metrics. In general, the Gd appears to have difficulty reproducing large daily position and shape changes in the rectum and bladder. However, Gd outperforms SICLE in terms of EGDTA and PCC metrics, possibly at the expense of topological quality of the estimated DVFs.

  17. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM), and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present the empirical results to show that the performance degradation follows a logistic function.

  18. Development of Type 2 Diabetes Mellitus Phenotyping Framework Using Expert Knowledge and Machine Learning Approach.

    PubMed

    Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko

    2017-07-01

    Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
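
    The area under the precision-sensitivity (recall) curve can be computed with standard tools, as in the Python sketch below. The restriction to a high-sensitivity region uses an arbitrary recall threshold; the paper's AUPS variants are defined by the authors and may differ in detail.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Sketch: area under the precision-sensitivity (recall) curve for a binary
# phenotyping classifier, plus a partial area over a high-sensitivity region.
# Data, scores, and the 0.8 recall cutoff are illustrative only.

rng = np.random.default_rng(5)
y_true = rng.integers(0, 2, size=2000)
y_score = y_true * rng.normal(1.5, 1.0, 2000) + rng.normal(0.0, 1.0, 2000)

precision, recall, _ = precision_recall_curve(y_true, y_score)
print(f"full precision-recall AUC: {auc(recall, precision):.3f}")

# Partial area restricted to the high-sensitivity region (recall >= 0.8).
mask = recall >= 0.8
print(f"high-sensitivity partial area: {auc(recall[mask], precision[mask]):.3f}")
```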

  19. Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks

    NASA Astrophysics Data System (ADS)

    Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.

    2017-07-01

    The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio, which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
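
    A minimal Python sketch of the distance-dependent SINR computation such a simulation relies on is given below, using a simple power-law path loss model. The exponent, transmit powers, and noise level are placeholder values rather than parameters from the paper.

```python
import numpy as np

# Distance-dependent SINR with a power-law path loss model.
# All parameter values below are placeholders for illustration.

def sinr_db(d_signal, d_interferers, tx_power=1.0, path_loss_exp=3.5, noise=1e-9):
    signal = tx_power * d_signal ** (-path_loss_exp)
    interference = sum(tx_power * d ** (-path_loss_exp) for d in d_interferers)
    return 10.0 * np.log10(signal / (interference + noise))

if __name__ == "__main__":
    # Receiver 10 m from its transmitter, with interferers 40 m and 60 m away.
    print(f"SINR = {sinr_db(10.0, [40.0, 60.0]):.1f} dB")
```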

  20. Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures

    NASA Astrophysics Data System (ADS)

    Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.

    2007-03-01

    The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory where similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Liebler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC A_z of 0.87 +/- 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
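
    The joint entropy similarity used for false-positive reduction can be estimated from a 2D histogram of paired pixel intensities, as in the Python sketch below. The bin count and the synthetic ROIs are illustrative; they are not the authors' settings or data.

```python
import numpy as np

# Joint entropy between a candidate ROI and a knowledge-base ROI, estimated
# from a 2D intensity histogram. Lower joint entropy indicates a tighter
# statistical relationship between the two regions.

def joint_entropy(roi_a: np.ndarray, roi_b: np.ndarray, bins: int = 32) -> float:
    hist, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    base = rng.random((64, 64))
    similar = base + 0.05 * rng.random((64, 64))     # strongly related ROI
    unrelated = rng.random((64, 64))
    print(joint_entropy(base, similar), joint_entropy(base, unrelated))
```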

  1. A flooding algorithm for multirobot exploration.

    PubMed

    Cabrera-Mora, Flavio; Xiao, Jizhong

    2012-06-01

    In this paper, we present a multirobot exploration algorithm that aims to reduce the exploration time and to minimize the overall traverse distance of the robots by coordinating the movement of the robots performing the exploration. Modeling the environment as a tree, we consider a coordination model that restricts the number of robots allowed to traverse an edge and to enter a vertex during each step. This coordination is achieved in a decentralized manner by the robots using a set of active landmarks that are dropped by them at explored vertices. We mathematically analyze the algorithm on trees, obtaining its main properties and specifying its bounds on the exploration time. We also define three metrics of performance for multirobot algorithms. We simulate and compare the performance of this new algorithm with those of our multirobot depth first search (MR-DFS) approach presented in our recent paper and classic single-robot DFS.

  2. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling the convex optimization with the greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.

  3. Relational Agreement Measures for Similarity Searching of Cheminformatic Data Sets.

    PubMed

    Rivera-Borroto, Oscar Miguel; García-de la Vega, José Manuel; Marrero-Ponce, Yovani; Grau, Ricardo

    2016-01-01

    Research on similarity searching of cheminformatic data sets has been focused on similarity measures using fingerprints. However, nominal scales are the least informative of all metric scales, increasing the tied similarity scores, and decreasing the effectiveness of the retrieval engines. Tanimoto's coefficient has been claimed to be the most prominent measure for this task. Nevertheless, this field is far from being exhausted since the computer science no free lunch theorem predicts that "no similarity measure has overall superiority over the population of data sets". We introduce 12 relational agreement (RA) coefficients for seven metric scales, which are integrated within a group fusion-based similarity searching algorithm. These similarity measures are compared to a reference panel of 21 proximity quantifiers over 17 benchmark data sets (MUV), by using informative descriptors, a feature selection stage, a suitable performance metric, and powerful comparison tests. In this stage, RA coefficients perform favourably with respect to the state-of-the-art proximity measures. Afterward, the RA-based method outperforms another four nearest neighbor searching algorithms over the same data domains. In a third validation stage, RA measures are successfully applied to the virtual screening of the NCI data set. Finally, we discuss a possible molecular interpretation for these similarity variants.
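
    Tanimoto's coefficient, the baseline measure mentioned above, reduces to the Jaccard index on binary fingerprints. The Python sketch below shows this standard definition on random bit vectors; it does not involve the relational agreement coefficients introduced in the paper, and the fingerprints are synthetic rather than real molecular descriptors.

```python
import numpy as np

# Tanimoto (Jaccard) similarity on binary fingerprints:
# |A intersect B| / |A union B|.

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    query = rng.random(1024) < 0.1                   # sparse 1024-bit fingerprint
    library = rng.random((5, 1024)) < 0.1            # five synthetic library fingerprints
    scores = [tanimoto(query, fp) for fp in library]
    print(np.round(scores, 3))
```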

  4. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.

  5. An Overview and Empirical Comparison of Distance Metric Learning Methods.

    PubMed

    Moutafis, Panagiotis; Leng, Mengjun; Kakadiaris, Ioannis A

    2016-02-16

    In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.

  6. Removal of impulse noise clusters from color images with local order statistics

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
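
    The vector median replacement step mentioned above can be sketched generically as follows; it assumes a detected pixel is replaced by the vector median of its uncorrupted neighbours, and the order-statistics detector and window handling of the paper are omitted.

      import numpy as np

      # Generic sketch of a vector median; not the authors' implementation.
      def vector_median(pixels: np.ndarray) -> np.ndarray:
          """Return the vector median of a set of RGB pixels (shape: k x 3).

          The vector median is the sample whose summed Euclidean distance to all
          other samples is minimal, so the output is always one of the input
          colours and no new colour artefacts are introduced.
          """
          dists = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
          return pixels[np.argmin(dists.sum(axis=1))]

      neigh = np.array([[10, 12, 11], [12, 11, 13], [250, 5, 5], [11, 10, 12]], dtype=float)
      print(vector_median(neigh))  # one of the non-outlier colours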

  7. Loops in hierarchical channel networks

    NASA Astrophysics Data System (ADS)

    Katifori, Eleni; Magnasco, Marcelo

    2012-02-01

    Nature provides us with many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture. Although a number of methods have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated and natural graphs extracted from digitized images of dicotyledonous leaves and animal vasculature. We calculate various metrics on the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  8. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made to compute a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work conquers this problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948

  9. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video; that is, detection of corrupted pixels by means of a novel morphological filtering, followed by correction of the detected pixels based on estimates of uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm is able to remove impulse noise from color video effectively. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.

  10. A mesh partitioning algorithm for preserving spatial locality in arbitrary geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nivarti, Girish V., E-mail: g.nivarti@alumni.ubc.ca; Salehi, M. Mahdi; Bushe, W. Kendal

    2015-01-15

    Highlights: •An algorithm for partitioning computational meshes is proposed. •The Morton order space-filling curve is modified to achieve improved locality. •A spatial locality metric is defined to compare results with existing approaches. •Results indicate improved performance of the algorithm in complex geometries. -- Abstract: A space-filling curve (SFC) is a proximity preserving linear mapping of any multi-dimensional space and is widely used as a clustering tool. Equi-sized partitioning of an SFC ignores the loss in clustering quality that occurs due to inaccuracies in the mapping. Often, this results in poor locality within partitions, especially for the conceptually simple Morton order curves. We present a heuristic that improves partition locality in arbitrary geometries by slicing a Morton order curve at points where spatial locality is sacrificed. In addition, we develop algorithms that evenly distribute points to the extent possible while maintaining spatial locality. A metric is defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests have been conducted on geometries relevant to turbulent reactive flow simulations. The results obtained highlight the performance of our method as an unsupervised and computationally inexpensive domain partitioning tool.
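
    For readers unfamiliar with Morton ordering, the sketch below shows the standard bit-interleaving construction of the Z-order index that the partitioning heuristic operates on; the locality-aware slicing and load balancing themselves are not reproduced, and the function name and bit width are assumptions.

      # Standard Morton (Z-order) index by bit interleaving; illustrative only.
      def morton2d(x: int, y: int, bits: int = 16) -> int:
          """Interleave the bits of x and y to obtain the Morton (Z-order) index."""
          code = 0
          for i in range(bits):
              code |= ((x >> i) & 1) << (2 * i)       # even bit positions <- x
              code |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions  <- y
          return code

      # points that are close in 2-D tend to be close along the curve
      cells = sorted(((morton2d(x, y), (x, y)) for x in range(4) for y in range(4)))
      print([c[1] for c in cells])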

  11. Top-attack modeling and automatic target detection using synthetic FLIR scenery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2004-09-01

    A series of experiments have been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include: MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect imbedded synthetic targets in scenes, and an ARL scoring algorithm, using Receiver-Operating-Characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that using these tools, a detailed, physically meaningful, target detection analysis is possible and that scenario specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competitive detection algorithms by providing well defined, and controllable target detection scenarios, as well as for the training and testing of expert human observers.
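
    The ROC-based scoring referred to above can be illustrated with standard tools; the labels and detector confidences below are made up solely to show the mechanics and have no connection to the ARL scoring algorithm.

      import numpy as np
      from sklearn.metrics import roc_curve, auc

      # Made-up detector confidences: 1 = target present, 0 = clutter only.
      labels = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
      scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1])

      fpr, tpr, thresholds = roc_curve(labels, scores)
      print("ROC AUC:", auc(fpr, tpr))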

  12. Performance evaluation of PCA-based spike sorting algorithms.

    PubMed

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts.
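
    A minimal sketch of the PCA feature-extraction step whose dimensionality the study investigates is given below, on toy waveforms; the clustering stage and the proposed over-/under-clustering error metric are not reproduced, and the function name is an assumption.

      import numpy as np

      # Illustrative PCA feature extraction for spike waveforms.
      def pca_spike_features(spikes: np.ndarray, n_components: int = 4) -> np.ndarray:
          """Project spike waveforms (n_spikes x n_samples) onto the leading PCs."""
          centered = spikes - spikes.mean(axis=0)
          # right singular vectors of the centered data are the principal axes
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          return centered @ vt[:n_components].T

      waveforms = np.random.default_rng(0).normal(size=(200, 48))   # toy spikes
      features = pca_spike_features(waveforms, n_components=4)
      print(features.shape)   # (200, 4)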

  13. A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.

    PubMed

    D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer

    2018-06-27

    Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.

  14. TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y; Kooy, H; Craft, D

    2016-06-15

    Purpose: To investigate a Greedy Reassignment algorithm in order to mitigate the effects of low weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) into a deliverable plan, post processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include deleting low weight spots (Cut method), or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method called Greedy Reassignment was developed in this work, in which the lowest weight spot in the field was removed and its weight reassigned equally among its nearest neighbors. The process was repeated with the next lowest weight spot until all spots in the field were above the MU constraint. The algorithm performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value. This showed that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy Reassignment method had a metric value 1.8 times better at a 90% pass rate compared to the other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
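
    A simplified sketch of the Greedy Reassignment idea described above is shown below; it assumes one-dimensional spot positions, an arbitrary minimum-MU limit, and illustrative function and parameter names, whereas the clinical implementation operates on per-field 2-D spot maps.

      import numpy as np

      # Simplified 1-D sketch of the described reassignment loop; not the clinical code.
      def greedy_reassignment(weights, positions, mu_min, n_neighbors=2):
          """Remove the lowest-weight spot and share its weight equally among
          its nearest neighbours, until every remaining spot meets mu_min."""
          w = list(map(float, weights))
          p = list(map(float, positions))
          while w and min(w) < mu_min:
              i = int(np.argmin(w))
              wi, pi = w.pop(i), p.pop(i)
              if not w:
                  break
              order = np.argsort([abs(pj - pi) for pj in p])[:n_neighbors]
              for j in order:
                  w[j] += wi / len(order)
          return np.array(w), np.array(p)

      w, p = greedy_reassignment([0.2, 5.0, 0.5, 4.0], [0.0, 1.0, 2.0, 3.0], mu_min=1.0)
      print(w, p)   # total weight is conserved; all remaining spots meet the limit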

  15. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

    PubMed Central

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-01-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had an efficient implementation with a run time of 8 minutes and 3 seconds per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
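
    One of the standard volume-overlap metrics commonly used in such evaluations is the Dice similarity coefficient; the sketch below shows it on toy masks. The challenge's composite score also incorporates boundary metrics and a relation to expert performance that are not reproduced here.

      import numpy as np

      # Generic Dice coefficient on binary masks; illustrative example only.
      def dice(seg: np.ndarray, ref: np.ndarray) -> float:
          """Dice similarity coefficient between two binary segmentation masks."""
          seg, ref = seg.astype(bool), ref.astype(bool)
          denom = seg.sum() + ref.sum()
          return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom

      a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
      b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
      print(round(dice(a, b), 3))  # 2*25 / (36+36) = 0.694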

  16. Application of Support Vector Machine to Forex Monitoring

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Joarder; Sarker, Ruhul A.

    Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from the historical data using six simple technical indicators and presents a comparison with an ANN based model trained by scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results of six different currencies against Australian dollar reveal superior performance of SVM model using simple linear kernel over ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
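
    The sketch below only illustrates the general modelling setup, fitting a linear-kernel support vector regressor on simple lagged indicators; the series, indicators, and window length are made up and do not correspond to the six indicators, currencies, or evaluation metrics used in the paper.

      import numpy as np
      from sklearn.svm import SVR

      # Toy setup; data and indicators are assumptions, not the paper's inputs.
      rng = np.random.default_rng(0)
      rate = np.cumsum(rng.normal(0, 0.01, 400)) + 1.0      # toy exchange-rate series

      def sma(x, w):                                        # simple moving-average indicator
          return np.convolve(x, np.ones(w) / w, mode="valid")

      w = 10
      X = np.column_stack([sma(rate, w)[:-1], rate[w - 1:-1]])   # indicators at time t
      y = rate[w:]                                               # rate at time t + 1
      split = 300
      model = SVR(kernel="linear").fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      print("normalised MSE:", np.mean((pred - y[split:]) ** 2) / np.var(y[split:]))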

  17. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction.

    PubMed

    Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G

    2016-01-01

    Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.

  18. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    This report describes an automated method of frequency determination in software metric data through the use of the multiple signal classification (MUSIC) algorithm. The method produces an estimated power spectral density (PSD) generated by the MUSIC algorithm from the software metric data set; other spectral estimators may be implemented in the module, but the MUSIC algorithm is the preferred choice.
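
    The MUSIC estimator named in the record is a standard subspace method; the compact textbook-style sketch below (not the report's implementation; window length, frequency grid, and test signal are assumptions) shows how a pseudospectrum is formed from the noise subspace of a correlation matrix.

      import numpy as np

      # Textbook MUSIC pseudospectrum; illustrative only.
      def music_pseudospectrum(x, n_sources, m=20, freqs=None):
          """MUSIC pseudospectrum of a 1-D data series x.

          Builds an m x m correlation matrix from sliding windows, splits its
          eigenvectors into signal and noise subspaces, and evaluates
          1 / ||E_n^H a(f)||^2 over the requested normalised frequencies.
          """
          x = np.asarray(x, dtype=complex)
          windows = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
          R = windows.conj().T @ windows / windows.shape[0]
          eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
          En = eigvecs[:, : m - n_sources]              # noise-subspace eigenvectors
          if freqs is None:
              freqs = np.linspace(0.0, 0.5, 512)
          k = np.arange(m)
          spectrum = []
          for f in freqs:
              a = np.exp(-2j * np.pi * f * k)           # steering vector
              spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
          return np.asarray(freqs), np.asarray(spectrum)

      t = np.arange(256)
      series = np.sin(2 * np.pi * 0.11 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
      f, p = music_pseudospectrum(series, n_sources=2)   # a real sinusoid has two exponentials
      print("peak near f =", f[int(np.argmax(p))])       # ~0.11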

  19. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms

    PubMed Central

    Liu, Min-Yin; Huang, Adam; Huang, Norden E.

    2017-01-01

    Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG) mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interests because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms. They include three Fourier, one continuous wavelet transform (CWT), and two Hilbert-Huang transform (HHT) based algorithms. We also explore three hybrid approaches. Trained and tested on open-access DREAMS and MASS databases, two new hybrid methods of combining Fourier with HHT algorithms show significant performance improvement with F1-scores of 0.726–0.737. PMID:28572762
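
    The F1-score used above is simply the harmonic mean of event-level precision and recall; a small worked example follows, with made-up counts that do not correspond to the DREAMS or MASS results.

      # Plain arithmetic illustration of the F1-score; counts are made up.
      def f1_score_events(n_true_positive, n_detected, n_reference):
          """F1 = harmonic mean of precision and recall for detected spindle events."""
          precision = n_true_positive / n_detected if n_detected else 0.0
          recall = n_true_positive / n_reference if n_reference else 0.0
          return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

      # e.g. 70 correctly matched spindles out of 90 detections against 100 scored spindles
      print(round(f1_score_events(70, 90, 100), 3))   # 0.737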

  20. Comparative analysis of different weight matrices in subspace system identification for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shokravi, H.; Bakhary, NH

    2017-11-01

    Subspace System Identification (SSI) is considered one of the most reliable tools for identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the Principal Component (PC), Unweighted Principal Component (UPC), or Canonical Variate Analysis (CVA) weight matrices is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring are using SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly in association with accuracy, noise resistance, and time complexity properties, are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics of pole analysis, fit values and elapsed time are used in the assessment process. A numerical mass-spring-dashpot model and operational data are used in this study. It is observed that the principal components obtained using PC algorithms are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance for pole analysis and time efficiency analysis. The superior performance of the UPC algorithm in elapsed time is attributed to using unit weight matrices. The obtained results demonstrate that the process of reducing dimensionality in CVA and PC does not enhance time efficiency, but it does yield improved modal identification for PC.

  1. Designing a Robust Micromixer Based on Fluid Stretching

    NASA Astrophysics Data System (ADS)

    Mott, David; Gautam, Dipesh; Voth, Greg; Oran, Elaine

    2010-11-01

    A metric for measuring fluid stretching based on finite-time Lyapunov exponents is described, and the use of this metric for optimizing mixing in microfluidic components is explored. The metric is implemented within an automated design approach called the Computational Toolbox (CTB). The CTB designs components by adding geometric features, such as grooves of various shapes, to a microchannel. The transport produced by each of these features in isolation was pre-computed and stored as an "advection map" for that feature, and the flow through a composite geometry that combines these features is calculated rapidly by applying the corresponding maps in sequence. A genetic algorithm search then chooses the feature combination that optimizes a user-specified metric. Metrics based on the variance of concentration generally require the user to specify the fluid distributions at inflow, which leads to different mixer designs for different inflow arrangements. The stretching metric is independent of the fluid arrangement at inflow. Mixers designed using the stretching metric are compared to those designed using a variance of concentration metric and show excellent performance across a variety of inflow distributions and diffusivities.

  2. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    PubMed

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-10-30

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
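
    A minimal sketch of the flat WkNN estimate described above follows, assuming Euclidean RSS similarity and inverse-distance weights; the affinity-propagation clustering of the two-step variant and the similarity metrics compared in the paper are not reproduced, and the fingerprints and coordinates are made up.

      import numpy as np

      # Generic flat WkNN position estimate; values and names are illustrative.
      def wknn_position(rss, rp_fingerprints, rp_positions, k=3, eps=1e-6):
          """Weighted k-nearest-neighbour position estimate from WiFi RSS.

          rss:              measured RSS vector of the target device
          rp_fingerprints:  n_RP x n_AP matrix of reference-point fingerprints
          rp_positions:     n_RP x 2 (or 3) matrix of reference-point coordinates
          """
          d = np.linalg.norm(rp_fingerprints - rss, axis=1)   # fingerprint distances
          idx = np.argsort(d)[:k]                             # k most similar RPs
          w = 1.0 / (d[idx] + eps)                            # inverse-distance weights
          return (w[:, None] * rp_positions[idx]).sum(axis=0) / w.sum()

      fps = np.array([[-40, -70, -60], [-45, -65, -62], [-80, -50, -55], [-75, -48, -58]], float)
      pos = np.array([[0, 0], [0, 2], [4, 0], [4, 2]], float)
      print(wknn_position(np.array([-44, -66, -61], float), fps, pos, k=3))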

  3. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning

    PubMed Central

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984

  4. An algorithmic and information-theoretic approach to multimetric index construction

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.

    2013-01-01

    The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to show the predictive capacity of the final MMIs and with real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified 4 metrics that combined to produce a -0.88 correlation with the human disturbance index. When compared to other methods, we find this algorithmic approach resulted in MMIs that were more predictive and comprised fewer metrics.
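
    The brute-force sketch below is only a crude stand-in for the information-theoretic selection described above: it z-scores candidate metrics, averages each subset into a candidate MMI, and keeps the subset whose MMI correlates most strongly with the disturbance gradient. All data, names, and the selection rule itself are illustrative assumptions, not the authors' algorithm.

      import numpy as np
      from itertools import combinations

      # Crude illustrative surrogate; not the paper's information-theoretic method.
      def best_mmi(metrics: np.ndarray, disturbance: np.ndarray, max_size: int = 4):
          """Pick the subset of candidate metrics whose mean (after z-scoring)
          correlates most strongly with the human-disturbance gradient."""
          z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
          best = (0.0, ())
          for k in range(1, max_size + 1):
              for subset in combinations(range(z.shape[1]), k):
                  mmi = z[:, subset].mean(axis=1)
                  r = abs(np.corrcoef(mmi, disturbance)[0, 1])
                  if r > best[0]:
                      best = (r, subset)
          return best

      rng = np.random.default_rng(0)
      disturb = rng.uniform(0, 1, 60)
      cand = np.column_stack([-disturb + rng.normal(0, 0.2, 60),      # responsive metric
                              disturb ** 2 + rng.normal(0, 0.2, 60),  # responsive metric
                              rng.normal(size=60)])                   # uninformative metric
      print(best_mmi(cand, disturb, max_size=3))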

  5. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal spectral data at 1 s time intervals, which represents data collected by a mobile system operating in a dynamic radiation background environment; and one that represents static measurements with a foreground spectrum (background plus source) and a background spectrum. These data include controlled variations in both Source Related Factors (nuclide, nuclide combinations, activities, distances, collection times, shielding configurations, and background spectra) and Detector Related Factors (currently only gain shifts, but resolution changes and non-linear energy calibration errors will be added soon). The software tools will allow the developer to evaluate the performance impact of each of these factors. Although this first implementation is somewhat limited in scope, considering only NaI-based detection systems and two application domains, it is hoped that (with community feedback) a wider range of detector types and applications will be included in the future. This article describes the methods used for dataset creation, the software validation/performance measurement tools, the performance metrics used, and examples of baseline performance.

  6. Visual performance-based image enhancement methodology: an investigation of contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.

    2006-05-01

    While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato1 and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell2. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm enhanced images versus two baseline conditions, original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
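
    Global histogram equalization, the simplest member of one of the algorithm families evaluated above, can be sketched as follows; this is a generic grayscale formulation, not any of the specific variants assessed in the study, and the test image is synthetic.

      import numpy as np

      # Generic global histogram equalization for an 8-bit grayscale image.
      def histogram_equalize(img: np.ndarray) -> np.ndarray:
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
          lut = np.round(255 * cdf).astype(np.uint8)          # intensity look-up table
          return lut[img]

      low_contrast = np.clip(np.random.default_rng(0).normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
      enhanced = histogram_equalize(low_contrast)
      print(low_contrast.std(), enhanced.std())   # intensity spread increases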

  7. The LSST metrics analysis framework (MAF)

    NASA Astrophysics Data System (ADS)

    Jones, R. L.; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Željko; Krughoff, K. S.; Petry, Catherine; Ridgway, Stephen T.

    2014-07-01

    We describe the Metrics Analysis Framework (MAF), an open-source python framework developed to provide a user-friendly, customizable, easily-extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance, however MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (subdividing the overall data set into smaller data slices as relevant for each Metric), and Database classes (to access the dataset and read data into memory). We describe how these building blocks work together, and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.

  8. Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms.

    PubMed

    Ozcift, Akin; Gulten, Arif

    2011-12-01

    Improving accuracies of machine learning algorithms is vital in designing high performance computer-aided diagnosis (CADx) systems. Research has shown that base classifier performance can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performances using Parkinson's, diabetes, and heart disease datasets from the literature. In the experiments, the feature dimension of the three datasets is first reduced using the correlation based feature selection (CFS) algorithm. Second, the classification performances of the 30 machine learning algorithms are calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performances of the respective classifiers with the same disease data. All the experiments are carried out with a leave-one-out validation strategy and the performances of the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). Base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively. As for the RF classifier ensembles, they produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of miscellaneous machine learning algorithms to design advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
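
    A simplified rotation-forest-style sketch using scikit-learn is given below; it fits one PCA rotation per tree on a bootstrap sample, whereas the original algorithm rotates disjoint feature subsets, so this is only an approximation of the method, and the dataset is a stand-in for the disease data used in the paper.

      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      # Simplified rotation-forest-style ensemble; not the original RF algorithm.
      X, y = load_breast_cancer(return_X_y=True)          # stand-in binary dataset
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      rng = np.random.default_rng(0)
      ensemble = []
      for _ in range(10):
          boot = rng.integers(0, len(X_tr), len(X_tr))    # bootstrap sample
          rot = PCA().fit(X_tr[boot])                     # "rotation" of the feature space
          tree = DecisionTreeClassifier(random_state=0).fit(rot.transform(X_tr[boot]), y_tr[boot])
          ensemble.append((rot, tree))

      votes = np.array([tree.predict(rot.transform(X_te)) for rot, tree in ensemble])
      pred = (votes.mean(axis=0) > 0.5).astype(int)       # majority vote
      print("ensemble accuracy:", (pred == y_te).mean())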

  9. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    NASA Astrophysics Data System (ADS)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  10. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  11. A Comparison of Three Algorithms for Orion Drogue Parachute Release

    NASA Technical Reports Server (NTRS)

    Matz, Daniel A.; Braun, Robert D.

    2015-01-01

    The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.

  12. Image sharpness assessment based on wavelet energy of edge area

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui

    2018-04-01

    Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference methods are still an area of active research. Facing this problem, this paper proposes a no-reference sharpness assessment method based on wavelet transformation which focuses on the edge areas of the image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to the total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm achieves better performance with lower runtime.
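
    A simplified sketch of the high-frequency-to-total wavelet energy ratio is shown below, using PyWavelets; the edge-area restriction and the vision-based sub-band weights of the paper are omitted, and the wavelet, decomposition level, and test images are arbitrary assumptions.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      # Simplified energy-ratio sharpness score; not the paper's exact metric.
      def wavelet_sharpness(img: np.ndarray, wavelet: str = "db2", level: int = 3) -> float:
          """Ratio of detail-band (high-frequency) energy to total wavelet energy."""
          coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
          approx_energy = np.sum(coeffs[0] ** 2)
          detail_energy = sum(np.sum(band ** 2) for bands in coeffs[1:] for band in bands)
          return detail_energy / (approx_energy + detail_energy)

      sharp = np.random.default_rng(0).normal(size=(128, 128))
      blurred = uniform_filter(sharp, size=5)
      print(wavelet_sharpness(sharp), wavelet_sharpness(blurred))   # sharper image scores higher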

  13. Person Re-Identification via Distance Metric Learning With Latent Variables.

    PubMed

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2017-01-01

    In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to latent variables, and then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning the effective metric matrix, which can be solved via an iterative manner: once latent information is specified, the metric matrix can be obtained based on some typical metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.

  14. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    PubMed

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric via simultaneously considering the Riemannian geometry structure and the scaling information, which is smoothly varying and invariant along the equivalence class. The proposed metric can make a tradeoff between the Riemannian geometry structure and the scaling information effectively. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  15. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar networks can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor of the radar network is minimised by optimising the transmission power allocation among the radars in the network, such that enhanced LPI performance for the radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.

  16. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    PubMed

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    A Wireless Sensor Network (WSN) is formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example temperature, water level, and pressure, in health care and in various military applications. Sensor nodes are typically equipped with self-contained battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of a WSN, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks in which clustering and cluster head selection are performed using the Particle Swarm Optimization (PSO) algorithm so as to minimize power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competing clustering algorithm to validate the reduction in energy consumption.
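
    A plain global-best PSO loop, the core ingredient of the proposed EPSO-CEO scheme, is sketched below on a toy objective; the actual clustering and cluster-head encoding of the paper are not reproduced, and the objective, bounds, and coefficients are arbitrary assumptions.

      import numpy as np

      # Generic global-best PSO; illustrative only, not the EPSO-CEO algorithm.
      def pso_minimize(f, dim, n_particles=20, iters=100, seed=0,
                       w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
              x = np.clip(x + v, lo, hi)
              vals = np.array([f(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[np.argmin(pbest_val)].copy()                  # global best
          return g, pbest_val.min()

      # toy "energy cost" of a candidate configuration (illustrative objective)
      print(pso_minimize(lambda p: np.sum(p ** 2), dim=4))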

  17. Machine Learning for Medical Imaging

    PubMed Central

    Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L.

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017 PMID:28212054

  18. Machine Learning for Medical Imaging.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. © RSNA, 2017.

  19. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation - Evolutionary Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
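
    The two similarity metrics compared in this study are straightforward to reproduce on toy data. The sketch below is illustrative only, not the authors' implementation: it computes normalized cross correlation and a histogram-based mutual information estimate between two images, which is the kind of objective an optimizer such as CMA-ES or Nelder-Mead would evaluate for each candidate transformation.

      import numpy as np

      def ncc(a, b):
          """Normalized cross correlation of two equally sized images."""
          a, b = a.ravel(), b.ravel()
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return np.mean(a * b)

      def mutual_information(a, b, bins=32):
          """Histogram-based mutual information estimate (in nats)."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

      rng = np.random.default_rng(1)
      fixed = rng.random((64, 64))
      moving = fixed + 0.1 * rng.standard_normal((64, 64))   # noisy copy of the fixed image
      print("NCC:", ncc(fixed, moving), "MI:", mutual_information(fixed, moving))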

  20. Generalising Ward's Method for Use with Manhattan Distances.

    PubMed

    Strauss, Trudie; von Maltitz, Michael Johan

    2017-01-01

    The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1-norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example of where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using the Euclidean distance and the Manhattan distance. Results obtained using the different distance metrics are compared to show that the defining characteristic of Ward's algorithm, minimising intra-cluster variation and maximising inter-cluster variation, is not violated when using the Manhattan metric.
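
    One natural reading of the generalisation is to replace Ward's squared-Euclidean, centroid-based within-cluster cost with the sum of Manhattan distances to the cluster median, and to merge, at each step, the pair of clusters whose union increases that cost the least. The sketch below is an assumption along those lines, not the authors' derivation, and the bi-gram signatures are random placeholders rather than real language data.

      import numpy as np

      def l1_cost(points):
          """Within-cluster cost: sum of Manhattan distances to the coordinate-wise median."""
          med = np.median(points, axis=0)
          return np.abs(points - med).sum()

      def ward_l1(data, n_clusters):
          """Greedy agglomerative clustering minimising the increase in L1-to-median cost."""
          clusters = [[i] for i in range(len(data))]
          costs = [0.0] * len(clusters)
          while len(clusters) > n_clusters:
              best = None
              for a in range(len(clusters)):
                  for b in range(a + 1, len(clusters)):
                      merged = clusters[a] + clusters[b]
                      delta = l1_cost(data[merged]) - costs[a] - costs[b]
                      if best is None or delta < best[0]:
                          best = (delta, a, b)
              _, a, b = best
              merged = clusters[a] + clusters[b]
              keep = [k for k in range(len(clusters)) if k not in (a, b)]
              clusters = [clusters[k] for k in keep] + [merged]
              costs = [costs[k] for k in keep] + [l1_cost(data[merged])]
          return clusters

      # Placeholder "language signatures": 32 languages x 100 relative bi-gram frequencies.
      rng = np.random.default_rng(0)
      signatures = rng.dirichlet(np.ones(100), size=32)
      print(ward_l1(signatures, n_clusters=4))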

  1. On local search for bi-objective knapsack problems.

    PubMed

    Liefooghe, Arnaud; Paquete, Luís; Figueira, José Rui

    2013-01-01

    In this article, a local search approach is proposed for three variants of the bi-objective binary knapsack problem, with the aim of maximizing the total profit and minimizing the total weight. First, an experimental study on a given structural property of connectedness of the efficient set is conducted. Based on this property, a local search algorithm is proposed and its performance is compared to exact algorithms in terms of runtime and quality metrics. The experimental results indicate that this simple local search algorithm is able to find a representative set of optimal solutions in most of the cases, and in much less time than exact algorithms.
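
    A minimal Pareto local search in the spirit of this description can be sketched as follows. This is an illustration only, with random instance data and a simple single bit-flip neighborhood; the paper's three problem variants and its connectedness analysis are not reproduced. The archive keeps mutually non-dominated solutions for the two objectives, maximizing total profit and minimizing total weight.

      import random

      random.seed(0)
      N = 20
      profit = [random.randint(1, 100) for _ in range(N)]
      weight = [random.randint(1, 100) for _ in range(N)]

      def evaluate(x):
          """Return (total profit, total weight) of a 0/1 selection vector."""
          return (sum(p for p, xi in zip(profit, x) if xi),
                  sum(w for w, xi in zip(weight, x) if xi))

      def dominates(a, b):
          """a dominates b: profit no worse, weight no worse, strictly better in one."""
          return a[0] >= b[0] and a[1] <= b[1] and a != b

      def pareto_local_search(iterations=2000):
          start = tuple(random.randint(0, 1) for _ in range(N))
          archive = {start: evaluate(start)}
          for _ in range(iterations):
              x = random.choice(list(archive))
              i = random.randrange(N)
              y = x[:i] + (1 - x[i],) + x[i + 1:]          # single bit-flip neighbor
              fy = evaluate(y)
              if any(dominates(fx, fy) for fx in archive.values()):
                  continue                                  # neighbor is dominated, discard it
              # remove archive members dominated by the neighbor, then add it
              archive = {s: f for s, f in archive.items() if not dominates(fy, f)}
              archive[y] = fy
          return archive

      front = pareto_local_search()
      print(sorted(front.values()))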

  2. Developing image processing meta-algorithms with data mining of multiple metrics.

    PubMed

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
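
    The abstract leaves the mining step abstract; one simple way to picture it is to score every candidate processing result with a battery of metric functions and select the result with the best average rank. The sketch below is a hedged illustration of that idea, not the authors' meta-algorithm; the metric values and candidate results are placeholders.

      import numpy as np

      def rank_aggregate(scores, higher_is_better):
          """scores: candidates x metrics matrix. Return index of the best average rank."""
          ranks = np.zeros_like(scores)
          for j in range(scores.shape[1]):
              order = np.argsort(scores[:, j])
              if higher_is_better[j]:
                  order = order[::-1]
              ranks[order, j] = np.arange(scores.shape[0])   # 0 = best for this metric
          return int(ranks.mean(axis=1).argmin())

      # Placeholder: 4 candidate registration results scored by 3 metrics
      # (e.g. NCC and MI where higher is better, MSE where lower is better).
      scores = np.array([[0.91, 1.20, 0.031],
                         [0.88, 1.35, 0.027],
                         [0.95, 1.10, 0.040],
                         [0.90, 1.28, 0.029]])
      best = rank_aggregate(scores, higher_is_better=[True, True, False])
      print("selected candidate:", best)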

  3. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    PubMed

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
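
    The abstract does not state which Lempel-Ziv variant was used. As an assumption of how such a metric could be computed, the sketch below quantizes a motion signal into symbols and counts phrases in a simple LZ78-style incremental parse, then sets an illustrative threshold between a smoother and a more erratic synthetic trace; none of this reproduces the study's inertial data or SVM pipeline.

      import numpy as np

      def symbolize(signal, n_symbols=4):
          """Quantize a 1-D signal into equally populated symbols (simple SAX-like step)."""
          edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
          return "".join(str(s) for s in np.digitize(signal, edges))

      def lz_phrase_count(sequence):
          """LZ78-style complexity: number of distinct phrases in an incremental parse."""
          phrases, w, count = set(), "", 0
          for ch in sequence:
              w += ch
              if w not in phrases:
                  phrases.add(w)
                  count += 1
                  w = ""
          return count + (1 if w else 0)

      rng = np.random.default_rng(0)
      smooth = np.cumsum(rng.standard_normal(500)) * 0.1   # smoother, "expert-like" motion
      jerky = rng.standard_normal(500)                     # erratic, "novice-like" motion
      c_smooth, c_jerky = (lz_phrase_count(symbolize(s)) for s in (smooth, jerky))
      threshold = (c_smooth + c_jerky) / 2                 # illustrative decision boundary
      print(c_smooth, c_jerky, "threshold:", threshold)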

  4. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    PubMed

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12mm; +100, -630 and -800HU) at 120kVp with tube current-time settings of 10, 20, 50, and 100mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two purposes (smoothing and keeping image features intact) working at odds with each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors regularized the original PM algorithm to overcome its drawbacks. We compared the performance of denoising using such 'fundamental' AD algorithms and one of the most powerful multiresolution tools available today, namely, wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. On top of the visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising. This saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
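
    The 'fundamental' AD scheme at the heart of this comparison is the original Perona-Malik update, which is compact enough to sketch. The code below is a generic textbook implementation (with periodic boundaries for brevity), not the authors' regularized variants or their SNR/UIQI evaluation.

      import numpy as np

      def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
          """Classic Perona-Malik anisotropic diffusion with exponential conductance."""
          u = img.astype(np.float64).copy()
          for _ in range(n_iter):
              # finite differences toward the four neighbors (np.roll gives periodic borders)
              dn = np.roll(u, -1, axis=0) - u
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              # conductance favors intra-region smoothing: small gradients diffuse more
              cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
              ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
              u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
          return u

      rng = np.random.default_rng(0)
      clean = np.zeros((64, 64)); clean[:, 32:] = 100.0          # step edge test image
      noisy = clean + 10.0 * rng.standard_normal(clean.shape)
      denoised = perona_malik(noisy)
      print("RMSE before:", np.sqrt(((noisy - clean) ** 2).mean()),
            "after:", np.sqrt(((denoised - clean) ** 2).mean()))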

  6. Continuous Glucose Monitoring Enables the Detection of Losses in Infusion Set Actuation (LISAs)

    PubMed Central

    Howsmon, Daniel P.; Cameron, Faye; Baysal, Nihat; Ly, Trang T.; Forlenza, Gregory P.; Maahs, David M.; Buckingham, Bruce A.; Hahn, Juergen; Bequette, B. Wayne

    2017-01-01

    Reliable continuous glucose monitoring (CGM) enables a variety of advanced technology for the treatment of type 1 diabetes. In addition to artificial pancreas algorithms that use CGM to automate continuous subcutaneous insulin infusion (CSII), CGM can also inform fault detection algorithms that alert patients to problems in CGM or CSII. Losses in infusion set actuation (LISAs) can adversely affect clinical outcomes, resulting in hyperglycemia due to impaired insulin delivery. Prolonged hyperglycemia may lead to diabetic ketoacidosis—a serious metabolic complication in type 1 diabetes. Therefore, an algorithm for the detection of LISAs based on CGM and CSII signals was developed to improve patient safety. The LISA detection algorithm is trained retrospectively on data from 62 infusion set insertions from 20 patients. The algorithm collects glucose and insulin data, and computes relevant fault metrics over two different sliding windows; an alarm sounds when these fault metrics are exceeded. With the chosen algorithm parameters, the LISA detection strategy achieved a sensitivity of 71.8% and issued 0.28 false positives per day on the training data. Validation on two independent data sets confirmed that similar performance is seen on data that was not used for training. The developed algorithm is able to effectively alert patients to possible infusion set failures in open-loop scenarios, with limited evidence of its extension to closed-loop scenarios. PMID:28098839
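
    The exact fault metrics and thresholds are not given in the abstract. As an assumption of the general mechanism, the sketch below computes two illustrative sliding-window quantities, the glucose rise over a longer window and the insulin delivered over a shorter window, and raises an alarm when glucose keeps rising despite insulin delivery. Window lengths and thresholds are placeholders, not the validated parameters.

      import numpy as np

      def detect_lisa(glucose, insulin, w_glucose=24, w_insulin=6,
                      rise_thresh=40.0, insulin_thresh=0.5):
          """Flag samples where glucose rose by more than rise_thresh over the last
          w_glucose samples even though at least insulin_thresh units were delivered
          over the last w_insulin samples. All parameters are illustrative."""
          n = len(glucose)
          alarms = np.zeros(n, dtype=bool)
          for t in range(max(w_glucose, w_insulin), n):
              rise = glucose[t] - glucose[t - w_glucose]
              delivered = insulin[t - w_insulin:t].sum()
              alarms[t] = rise > rise_thresh and delivered > insulin_thresh
          return alarms

      # Synthetic 5-minute CGM/CSII traces with a simulated infusion failure at sample 120.
      rng = np.random.default_rng(0)
      glucose = 120 + np.cumsum(rng.standard_normal(200))
      glucose[120:] += np.linspace(0, 160, 80)            # sustained rise after the failure
      insulin = np.full(200, 0.2)                         # basal delivery continues
      print("first alarm at sample:", int(np.argmax(detect_lisa(glucose, insulin))))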

  7. Continuous Glucose Monitoring Enables the Detection of Losses in Infusion Set Actuation (LISAs).

    PubMed

    Howsmon, Daniel P; Cameron, Faye; Baysal, Nihat; Ly, Trang T; Forlenza, Gregory P; Maahs, David M; Buckingham, Bruce A; Hahn, Juergen; Bequette, B Wayne

    2017-01-15

    Reliable continuous glucose monitoring (CGM) enables a variety of advanced technology for the treatment of type 1 diabetes. In addition to artificial pancreas algorithms that use CGM to automate continuous subcutaneous insulin infusion (CSII), CGM can also inform fault detection algorithms that alert patients to problems in CGM or CSII. Losses in infusion set actuation (LISAs) can adversely affect clinical outcomes, resulting in hyperglycemia due to impaired insulin delivery. Prolonged hyperglycemia may lead to diabetic ketoacidosis-a serious metabolic complication in type 1 diabetes. Therefore, an algorithm for the detection of LISAs based on CGM and CSII signals was developed to improve patient safety. The LISA detection algorithm is trained retrospectively on data from 62 infusion set insertions from 20 patients. The algorithm collects glucose and insulin data, and computes relevant fault metrics over two different sliding windows; an alarm sounds when these fault metrics are exceeded. With the chosen algorithm parameters, the LISA detection strategy achieved a sensitivity of 71.8% and issued 0.28 false positives per day on the training data. Validation on two independent data sets confirmed that similar performance is seen on data that was not used for training. The developed algorithm is able to effectively alert patients to possible infusion set failures in open-loop scenarios, with limited evidence of its extension to closed-loop scenarios.

  8. Prediction of fatigue-related driver performance from EEG data by deep Riemannian model.

    PubMed

    Hajinoroozi, Mehdi; Jianqiu Zhang; Yufei Huang

    2017-07-01

    Prediction of the drivers' drowsy and alert states is important for safety purposes. The prediction of drivers' drowsy and alert states from electroencephalography (EEG) using shallow and deep Riemannian methods is presented. For shallow Riemannian methods, the minimum distance to Riemannian mean (mdm) and the Log-Euclidean metric are investigated, and it is shown that the Log-Euclidean metric outperforms the mdm algorithm. In addition, SPDNet, a deep Riemannian model that takes the EEG covariance matrix as input, is investigated. It is shown that SPDNet outperforms all tested shallow and deep classification methods. The performance of SPDNet is 6.02% and 2.86% higher than the best performance of the conventional Euclidean classifiers and shallow Riemannian models, respectively.

  9. Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.

    PubMed

    Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang

    2012-12-05

    Resting state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were then used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some of the regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function and the neural network algorithm exhibited the highest average accuracy (79.27 and 78.22%, respectively) with 28 features (P<0.05). Correlation analysis between feature importance and the statistical significance of metrics was investigated, and the results revealed a strong positive correlation between them. Overall, the current study demonstrated that major depressive disorder is associated with abnormal functional brain network topological metrics and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
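
    As an illustration of the pipeline (graph-theoretic nodal metrics fed to a classifier), the hedged sketch below computes three common nodal measures with NetworkX on random graphs that stand in for thresholded connectivity networks, then trains an RBF-kernel SVM. It is not the authors' 90-region pipeline, nor their permutation-test feature selection.

      import numpy as np
      import networkx as nx
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def nodal_features(graph):
          """Concatenate degree, betweenness and clustering coefficient for every node."""
          deg = nx.degree_centrality(graph)
          btw = nx.betweenness_centrality(graph)
          clu = nx.clustering(graph)
          nodes = sorted(graph.nodes())
          return np.array([deg[n] for n in nodes] +
                          [btw[n] for n in nodes] +
                          [clu[n] for n in nodes])

      # Placeholder "subjects": denser random graphs for one group, sparser for the other.
      rng = np.random.default_rng(0)
      X, y = [], []
      for label, p in [(0, 0.15), (1, 0.25)]:
          for _ in range(30):
              g = nx.gnp_random_graph(90, p, seed=int(rng.integers(1_000_000)))
              X.append(nodal_features(g))
              y.append(label)
      X, y = np.array(X), np.array(y)

      clf = SVC(kernel="rbf", gamma="scale")
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())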

  10. Core Hunter 3: flexible core subset selection.

    PubMed

    De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle

    2018-05-31

    Core collections provide genebank curators and plant breeders a way to reduce size of their collections and populations, while minimizing impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms, as compared to those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample equally representative cores as GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available on http://www.corehunter.org .

  11. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  12. Computer-Aided Design and Optimization of High-Performance Vacuum Electronic Devices

    DTIC Science & Technology

    2006-08-15

    approximations to the metric, and space mapping wherein low-accuracy (coarse mesh) solutions can potentially be used more effectively in an...interface and algorithm development. • Work on space-mapping or related methods for utilizing models of varying levels of approximation within an

  13. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  14. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers, is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
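
    A hedged sketch of one cycle as described: gradients are evaluated at p probe points, p rank-one corrections are applied to the metric, and a single univariate minimization is done along the Newton-like direction. The SR1-style update and the random choice of probe points are assumptions for illustration, not Straeter's exact formulation, and the p evaluations are written as a loop although they are the part that would run in parallel.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def f(x):                             # simple quadratic test function
          return 0.5 * x @ A @ x - b @ x

      def grad(x):
          return A @ x - b

      rng = np.random.default_rng(0)
      n, p = 6, 3
      A = np.diag(np.arange(1.0, n + 1))    # SPD Hessian of the test problem
      b = rng.standard_normal(n)

      x = np.zeros(n)
      H = np.eye(n)                         # current metric (inverse-Hessian approximation)
      for cycle in range(10):
          g = grad(x)
          # 1) gradient evaluations at p probe points (the parallelizable step)
          steps = [1e-2 * rng.standard_normal(n) for _ in range(p)]
          probes = [(s, grad(x + s) - g) for s in steps]
          # 2) p rank-one (SR1-style) corrections to the metric
          for s, y in probes:
              v = s - H @ y
              denom = v @ y
              if abs(denom) > 1e-10:
                  H = H + np.outer(v, v) / denom
          # 3) single univariate minimization along the Newton-like direction
          d = -H @ grad(x)
          alpha = minimize_scalar(lambda t: f(x + t * d)).x
          x = x + alpha * d

      print("distance to optimum:", np.linalg.norm(x - np.linalg.solve(A, b)))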

  15. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2011-05-01

    UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left panel) demonstrating that the panel of 18... UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as heat maps demonstrating the efficacy of using all 13

  16. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    PubMed

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a Luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100-400 ml). In general, for the small volume difference (100-150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
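
    The marker-based evaluation step can be illustrated independently of any particular DIR algorithm: sample the deformation vector field at the reference marker positions, map the markers, and compare against the marker positions found in the target scan. The sketch below uses a synthetic translation field and assumes the DVF is stored on a regular voxel grid, which is how most DIR packages export it; it is not tied to DIRART or VelocityAI.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Synthetic 3-D deformation vector field on a 50x50x50 voxel grid (pure translation).
      shape = (50, 50, 50)
      dvf = np.zeros(shape + (3,))
      dvf[..., 0] = 2.0      # displacement along the first grid axis (voxels)
      dvf[..., 2] = -1.5     # displacement along the third grid axis (voxels)

      axes = [np.arange(s) for s in shape]
      interp = RegularGridInterpolator(tuple(axes), dvf)   # vector-valued interpolation

      ref_markers = np.array([[10.0, 20.0, 30.0],
                              [25.0, 25.0, 25.0],
                              [40.0, 12.0, 18.0]])
      mapped = ref_markers + interp(ref_markers)           # apply the DVF to marker coords

      # Marker positions observed in the target scan (here: exactly the applied shift).
      target_markers = ref_markers + np.array([2.0, 0.0, -1.5])

      errors = np.linalg.norm(mapped - target_markers, axis=1)
      print("marker errors (voxels):", errors, "mean:", errors.mean())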

  17. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748

  18. A multifaceted independent performance analysis of facial subspace recognition algorithms.

    PubMed

    Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan

    2013-01-01

    Face recognition has emerged as the fastest growing biometric technology and has expanded a lot in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a base for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to have an independent comparative analysis considering both performance and computational complexity of six appearance based face recognition algorithms namely PCA, 2DPCA, A2DPCA, (2D)(2)PCA, LPP and 2DLPP under equal working conditions. This study was motivated due to the lack of unbiased comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, FERET, ORL and YALE databases have been used with evaluation criteria as of FERET evaluations which closely simulate real life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration.

  19. Differential correlation for sequencing data.

    PubMed

    Siska, Charlotte; Kechris, Katerina

    2017-01-19

    Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.

  20. Assessing the quality of restored images in optical long-baseline interferometry

    NASA Astrophysics Data System (ADS)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
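
    The recommended assessment step is simple to reproduce: convolve both the reconstructed and the true image with an effective point spread function before taking the l1-norm of their difference. The sketch below uses a Gaussian as a stand-in for the effective PSF; the actual PSF in the study depends on the array configuration and the reconstruction settings, so everything here is illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def l1_quality(reconstructed, truth, psf_sigma=1.5):
          """l1 distance between PSF-convolved reconstruction and PSF-convolved truth,
          normalized by the total flux of the convolved truth."""
          r = gaussian_filter(reconstructed, psf_sigma)
          t = gaussian_filter(truth, psf_sigma)
          return np.abs(r - t).sum() / t.sum()

      rng = np.random.default_rng(0)
      truth = np.zeros((128, 128))
      truth[60:68, 60:68] = 1.0                             # toy object
      reconstruction = truth + 0.02 * rng.standard_normal((128, 128))
      print("l1 quality metric:", l1_quality(reconstruction, truth))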

  1. Improving personalized link prediction by hybrid diffusion

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Hu; Zhu, Yu-Xiao; Zhou, Tao

    2016-04-01

    Inspired by traditional link prediction and to solve the problem of recommending friends in social networks, we introduce personalized link prediction in this paper, in which each individual receives an equal number of diverse predictions. The performances of many classical algorithms are not satisfactory under this framework; thus, new algorithms are urgently needed. Motivated by previous research in other fields, we generalize the heat conduction process to the framework of personalized link prediction and find that this method outperforms many classical similarity-based algorithms, especially in the performance of diversity. In addition, we demonstrate that adding one ground node that is supposed to connect all the nodes in the system will greatly benefit the performance of heat conduction. Finally, better hybrid algorithms composed of local random walk and heat conduction have been proposed. Numerical results show that the hybrid algorithms can outperform other algorithms simultaneously in all four adopted metrics: AUC, precision, recall and Hamming distance. In a word, this work may shed some light on the in-depth understanding of the effect of physical processes in personalized link prediction.

  2. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.

  3. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.

    PubMed

    Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad

    2016-12-01

    Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
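
    The prediction step can be illustrated with a constant-velocity Kalman filter that tracks the location of an optimum in decision space across environment changes. This is a generic KF sketch under that assumption, not the paper's MOEA hybrid or its scoring scheme, and the noise covariances are arbitrary placeholders.

      import numpy as np

      dt = 1.0                                    # one environment change per time step
      F = np.block([[np.eye(2), dt * np.eye(2)],  # state: [position(2), velocity(2)]
                    [np.zeros((2, 2)), np.eye(2)]])
      H = np.hstack([np.eye(2), np.zeros((2, 2))])
      Q = 0.01 * np.eye(4)                        # process noise (assumed)
      R = 0.05 * np.eye(2)                        # "measurement" noise: optimizer inaccuracy

      x = np.zeros(4)                             # initial state estimate
      P = np.eye(4)

      rng = np.random.default_rng(0)
      true_pos, true_vel = np.array([0.0, 0.0]), np.array([0.3, -0.2])
      for t in range(20):
          # predict where the optimum will be after the next change
          x = F @ x
          P = F @ P @ F.T + Q
          # the evolutionary run "measures" the new optimum (noisy)
          true_pos = true_pos + dt * true_vel
          z = true_pos + rng.multivariate_normal(np.zeros(2), R)
          # Kalman update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(4) - K @ H) @ P

      print("predicted next position:", (F @ x)[:2], "true next:", true_pos + dt * true_vel)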

  4. A Simple Algorithm for the Metric Traveling Salesman Problem

    NASA Technical Reports Server (NTRS)

    Grimm, M. J.

    1984-01-01

    An algorithm was designed for a wire list net sort problem. A branch and bound algorithm for the metric traveling salesman problem is presented for this purpose. The algorithm is a best bound first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N-squared operation. The algorithm also solves the related problem where the tour does not return to the starting city and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
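
    A compact sketch of the described strategy: each search-tree node fixes the relative order of the first K cities, the bound is the length of that closed K-city subtour (valid under the triangle inequality, since skipping cities can only shorten a closed tour), and branching inserts the next city into every position. This version uses plain depth-first recursion rather than best-bound-first ordering, random Euclidean points as input, and none of the original's initial-ordering speedups.

      import math, random

      random.seed(0)
      cities = [(random.random(), random.random()) for _ in range(10)]

      def dist(a, b):
          return math.hypot(cities[a][0] - cities[b][0], cities[a][1] - cities[b][1])

      def tour_length(order):
          return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

      best_len = math.inf
      best_tour = None

      def branch(subtour, next_city):
          """subtour: fixed cyclic order of cities 0..next_city-1."""
          global best_len, best_tour
          bound = tour_length(subtour)     # triangle inequality: completions are no shorter
          if bound >= best_len:
              return
          if next_city == len(cities):
              best_len, best_tour = bound, subtour
              return
          for pos in range(1, len(subtour) + 1):   # insert the next city in every position
              branch(subtour[:pos] + [next_city] + subtour[pos:], next_city + 1)

      # Fixing the orientation of the first three cities loses nothing for symmetric TSP.
      branch([0, 1, 2], 3)
      print(best_len, best_tour)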

  5. Enhancing the performance of MOEAs: an experimental presentation of a new fitness guided mutation operator

    NASA Astrophysics Data System (ADS)

    Liagkouras, K.; Metaxiotis, K.

    2017-01-01

    Multi-objective evolutionary algorithms (MOEAs) are currently a dynamic field of research that has attracted considerable attention. Mutation operators have been utilized by MOEAs as variation mechanisms. In particular, polynomial mutation (PLM) is one of the most popular variation mechanisms and has been utilized by many well-known MOEAs. In this paper, we revisit the PLM operator and we propose a fitness-guided version of the PLM. Experimental results obtained by non-dominated sorting genetic algorithm II and strength Pareto evolutionary algorithm 2 show that the proposed fitness-guided mutation operator outperforms the classical PLM operator, based on different performance metrics that evaluate both the proximity of the solutions to the Pareto front and their dispersion on it.
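
    For reference, the classical polynomial mutation operator that the paper builds on can be sketched as below; the fitness-guided variant proposed by the authors is not reproduced here, and the distribution index and mutation probability are just common defaults.

      import random

      def polynomial_mutation(x, lower, upper, eta_m=20.0, p_mut=None):
          """Deb's polynomial mutation (PLM) applied to a real-coded decision vector."""
          n = len(x)
          p_mut = 1.0 / n if p_mut is None else p_mut
          child = list(x)
          for i in range(n):
              if random.random() > p_mut:
                  continue
              xl, xu = lower[i], upper[i]
              d1 = (child[i] - xl) / (xu - xl)
              d2 = (xu - child[i]) / (xu - xl)
              u = random.random()
              if u <= 0.5:
                  dq = (2 * u + (1 - 2 * u) * (1 - d1) ** (eta_m + 1)) ** (1 / (eta_m + 1)) - 1
              else:
                  dq = 1 - (2 * (1 - u) + 2 * (u - 0.5) * (1 - d2) ** (eta_m + 1)) ** (1 / (eta_m + 1))
              child[i] = min(max(child[i] + dq * (xu - xl), xl), xu)
          return child

      random.seed(0)
      print(polynomial_mutation([0.2, 0.5, 0.8], lower=[0, 0, 0], upper=[1, 1, 1]))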

  6. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    The Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints comprises the equalities and the remaining constraints the inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.

  7. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rating of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological model was usually inadequate to well capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, which could represent the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than those of the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rating. However, the model performance of middle flow magnitude was not significantly improved because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to the uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably and the simulated hydrological processes by the multi-metric calibration became more reliable, because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for the integrated water resources management, and to improve the simulation performances of overall flow regimes.

  8. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.

  9. Hue-preserving and saturation-improved color histogram equalization algorithm.

    PubMed

    Song, Ki Sun; Kang, Hee; Kang, Moon Gi

    2016-06-01

    In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, although the local HE method sometimes produces undesirable results due to the block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method in the local HE method to avoid the artifacts, while global and local contrasts are enhanced. There are two ways to apply the proposed CE algorithm to color images: processing only the luminance channel, or processing each color channel independently. However, both approaches can incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving the hue and producing better performance than existing methods in terms of objective evaluation metrics.
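
    A minimal illustration of the hue-preserving idea, not the proposed channel-adaptive algorithm: equalize only the intensity channel, then rescale the R, G and B channels by the same per-pixel ratio so that their proportions, and hence the hue, are preserved.

      import numpy as np

      def hue_preserving_he(rgb):
          """Histogram-equalize V = max(R,G,B) and scale all channels by V_new / V_old."""
          img = rgb.astype(np.float64)
          v = img.max(axis=2)
          hist, _ = np.histogram(v.astype(np.uint8), bins=256, range=(0, 256))
          cdf = hist.cumsum() / hist.sum()
          v_eq = 255.0 * cdf[v.astype(np.uint8)]
          ratio = np.where(v > 0, v_eq / np.maximum(v, 1e-6), 0.0)
          out = img * ratio[..., None]       # same ratio on R, G, B keeps the hue
          return np.clip(out, 0, 255).astype(np.uint8)

      rng = np.random.default_rng(0)
      low_contrast = (rng.random((64, 64, 3)) * 60 + 80).astype(np.uint8)   # toy input
      enhanced = hue_preserving_he(low_contrast)
      print(low_contrast.max(axis=(0, 1)), "->", enhanced.max(axis=(0, 1)))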

  10. Ring-push metric learning for person reidentification

    NASA Astrophysics Data System (ADS)

    He, Botao; Yu, Shaohua

    2017-05-01

    Person reidentification (re-id) has been widely studied because of its extensive use in video surveillance and forensics applications. It aims to search a specific person among a nonoverlapping camera network, which is highly challenging due to large variations in the cluttered background, human pose, and camera viewpoint. We present a metric learning algorithm for learning a Mahalanobis distance for re-id. Generally speaking, there exist two forces in the conventional metric learning process, one pulling force that pulls points of the same class closer and the other pushing force that pushes points of different classes as far apart as possible. We argue that, when only a limited number of training data are given, forcing interclass distances to be as large as possible may drive the metric to overfit the uninformative part of the images, such as noises and backgrounds. To alleviate overfitting, we propose the ring-push metric learning algorithm. Different from other metric learning methods that only punish too small interclass distances, in the proposed method, both too small and too large inter-class distances are punished. By introducing the generalized logistic function as the loss, we formulate the ring-push metric learning as a convex optimization problem and utilize the projected gradient descent method to solve it. The experimental results on four public datasets demonstrate the effectiveness of the proposed algorithm.

  11. Performance prediction: A case study using a multi-ring KSR-1 machine

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1995-01-01

    While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real-time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen-space where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal error continuous terrain mesh. Our algorithm has been implemented and it operates at real-time rates.

  13. Effects of the bipartite structure of a network on performance of recommenders

    NASA Astrophysics Data System (ADS)

    Wang, Qing-Xian; Li, Jian; Luo, Xin; Xu, Jian-Jun; Shang, Ming-Sheng

    2018-02-01

    Recommender systems aim to predict people's preferences for online items by analyzing their historical behaviors. A recommender can be modeled as a high-dimensional and sparse bipartite network, where the key issue is to understand the relation between the network structure and a recommender's performance. To address this issue, we choose three network characteristics, clustering coefficient, network density and user-item ratio, as the analysis targets. For the clustering coefficient, we adopt a degree-preserving rewiring algorithm to obtain a series of bipartite networks with varying clustering coefficients, while the degrees of users and items remain unchanged. Furthermore, five state-of-the-art recommenders are applied on two real datasets. The performances of the recommenders are measured by both numerical and physical metrics. The results show that a recommender's performance is positively related to the clustering coefficient of a bipartite network. Meanwhile, higher density of a bipartite network can provide more accurate but less diverse or novel recommendations. Furthermore, the user-item ratio is positively correlated with the accuracy metrics but negatively correlated with the diversity and novelty metrics.
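
    The rewiring step can be sketched directly: repeatedly pick two user-item edges (u1,i1) and (u2,i2) and swap them to (u1,i2) and (u2,i1) when neither new edge already exists, which preserves every user and item degree while shuffling structure (and hence the clustering coefficient). This is a generic sketch under that assumption, not necessarily the exact variant used in the paper; the toy network is random.

      import random

      def degree_preserving_rewire(edges, n_swaps, seed=0):
          """edges: set of (user, item) pairs. Swap endpoints of random edge pairs,
          keeping all user and item degrees unchanged."""
          random.seed(seed)
          edges = set(edges)
          edge_list = list(edges)
          done = 0
          while done < n_swaps:
              (u1, i1), (u2, i2) = random.sample(edge_list, 2)
              if u1 == u2 or i1 == i2:
                  continue
              if (u1, i2) in edges or (u2, i1) in edges:
                  continue                              # swap would create a duplicate edge
              edges -= {(u1, i1), (u2, i2)}
              edges |= {(u1, i2), (u2, i1)}
              edge_list = list(edges)
              done += 1
          return edges

      # Toy user-item network: 30 users, 20 items, random interactions.
      random.seed(1)
      original = {(u, i) for u in range(30) for i in range(20) if random.random() < 0.2}
      rewired = degree_preserving_rewire(original, n_swaps=200)
      print(len(original), len(rewired))                # edge count (and degrees) preserved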

  14. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  15. Predicting missing links in complex networks based on common neighbors and distance

    PubMed Central

    Yang, Jinxuan; Zhang, Xiao-Dong

    2016-01-01

    Algorithms based on the common neighbors metric for predicting missing links in complex networks are very popular, but most of these algorithms do not account for missing links between nodes with no common neighbors. In some cases these methods are not accurate enough to reconstruct networks, especially when node pairs have few common neighbors. We propose in this paper a new algorithm based on common neighbors and distance to improve the accuracy of link prediction. Our proposed algorithm is remarkably effective in predicting missing links between nodes with no common neighbors and performs better than most currently used methods for a variety of real-world networks without increasing complexity. PMID:27905526
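
    The paper's exact scoring function is not given in the abstract; the sketch below illustrates the general idea with an assumed score that combines the common-neighbor count with an inverse shortest-path term, so that node pairs without common neighbors still receive a nonzero score. The weight alpha and the test graph are placeholders.

      import networkx as nx

      def cn_distance_score(graph, u, v, alpha=1.0):
          """Illustrative score: common neighbors plus alpha / shortest-path distance."""
          cn = len(list(nx.common_neighbors(graph, u, v)))
          try:
              d = nx.shortest_path_length(graph, u, v)
          except nx.NetworkXNoPath:
              return cn
          return cn + alpha / d

      G = nx.karate_club_graph()
      non_edges = list(nx.non_edges(G))
      scores = sorted(((cn_distance_score(G, u, v), u, v) for u, v in non_edges), reverse=True)
      print("top predicted links:", [(u, v) for _, u, v in scores[:5]])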

  16. WE-E-217A-02: Methodologies for Evaluation of Standalone CAD System Performance.

    PubMed

    Sahiner, B

    2012-06-01

    Standalone performance evaluation of a CAD system provides information about the abnormality detection or classification performance of the computerized system alone. Although the performance of the reader with CAD is the final step in CAD system assessment, standalone performance evaluation is an important component for several reasons: First, standalone evaluation informs the reader about the performance level of the CAD system and may have an impact on how the reader uses the system. Second, it provides essential information to the system designer for algorithm optimization during system development. Third, standalone evaluation can provide a detailed description of algorithm performance (e.g., on subgroups of the population) because a larger data set with more samples from different subgroups can be included in standalone studies compared to reader studies. Proper standalone evaluation of a CAD system involves a number of key components, some of which are shared with the assessment of reader performance with CAD. These include (1) selection of a test data set that allows performance assessment with little or no bias and acceptable uncertainty; (2) a reference standard that indicates disease status as well as the location and extent of disease; (3) a clearly defined method for labeling each CAD mark as a true-positive or false-positive; and (4) a properly selected set of metrics to summarize the accuracy of the computer marks and their corresponding scores. In this lecture, we will discuss various approaches for the key components of standalone CAD performance evaluation listed above, and present some of the recommendations and opinions from the AAPM CAD subcommittee on these issues. Learning Objectives 1. Identify basic components and metrics in the assessment of standalone CAD systems 2. Understand how each component may affect the assessed performance 3. Learn about AAPM CAD subcommittee's opinions and recommendations on factors and metrics related to the evaluation of standalone CAD system performance. © 2012 American Association of Physicists in Medicine.

  17. Information theoretic analysis of canny edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting the initial conditions of the visual communication system as constant, different edge detection algorithms could be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.

  18. The 4A Metric Algorithm: A Unique E-Learning Engineering Solution Designed via Neuroscience to Counter Cheating and Reduce Its Recidivism by Measuring Student Growth through Systemic Sequential Online Learning

    ERIC Educational Resources Information Center

    Osler, James Edward

    2016-01-01

    This paper provides a novel instructional methodology that is a unique E-Learning engineered "4A Metric Algorithm" designed to conceptually address the four main challenges faced by 21st century students, who are tempted to cheat in a myriad of higher education settings (face to face, hybrid, and online). The algorithmic online…

  19. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  20. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
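
    A performance criterion defined directly on the factor matrices has to account for the column permutation and scaling ambiguity inherent in CP decompositions. The sketch below is a generic greedy matching of normalized columns, not the exact index proposed in the paper.

    ```python
    import numpy as np

    def factor_match_error(A_true, A_est):
        """Compare an estimated CP factor matrix with the true one, up to the
        usual column permutation and scaling ambiguity (greedy matching sketch)."""
        Tn = A_true / np.linalg.norm(A_true, axis=0, keepdims=True)
        En = A_est / np.linalg.norm(A_est, axis=0, keepdims=True)
        corr = np.abs(Tn.T @ En)              # column-to-column similarity
        used, errors = set(), []
        for i in range(Tn.shape[1]):
            j = max((c for c in range(En.shape[1]) if c not in used),
                    key=lambda c: corr[i, c])
            used.add(j)
            errors.append(1.0 - corr[i, j])   # 0 means a perfect (collinear) match
        return float(np.mean(errors))

    # Toy check: a permuted, rescaled copy of the true factors scores ~0.
    rng = np.random.default_rng(0)
    A = rng.random((8, 3))
    print(factor_match_error(A, 2.5 * A[:, [2, 0, 1]]))
    ```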

  1. Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.

    PubMed

    Luo, Ping; Lin, Liang; Liu, Xiaobai

    2016-07-01

    This paper presents a novel compositional contour-based shape model by incorporating multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary to each other to capture various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely, an ϵ-ball, in the sense that it represents local shape variance under a given distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several public challenging data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.

  2. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.

  3. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery, and recycling centers. In this supply chain, centers are multiple levels, a price increase factor is considered for operational costs at centers, inventory and shortage (including lost sales and backlog) are allowed at production centers, arrival time of vehicles of each plant to its dedicated distribution centers and also departure from them are considered, in such a way that the sum of system costs and the sum of maximum time at each level should be minimized. The aforementioned problem is formulated in the form of a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely, non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used in large sizes. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely, the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
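
    Two of the four comparison metrics, the spacing metric and the mean ideal distance, can be written down compactly. The sketch below follows common textbook definitions (Schott's spacing with Manhattan nearest-neighbor distances; an ideal point taken as the componentwise minimum), which may differ in detail from the authors' implementation.

    ```python
    import numpy as np

    def spacing_metric(front):
        """Schott's spacing metric: std of nearest-neighbor distances on a Pareto front."""
        front = np.asarray(front, dtype=float)
        d = np.full(len(front), np.inf)
        for i in range(len(front)):
            for j in range(len(front)):
                if i != j:
                    d[i] = min(d[i], np.sum(np.abs(front[i] - front[j])))
        return float(np.std(d))

    def mean_ideal_distance(front, ideal=None):
        """Average Euclidean distance of Pareto solutions from the ideal point."""
        front = np.asarray(front, dtype=float)
        if ideal is None:
            ideal = front.min(axis=0)          # assumed ideal: componentwise best
        return float(np.mean(np.linalg.norm(front - ideal, axis=1)))

    front = [[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [7.0, 2.0]]
    print(spacing_metric(front), mean_ideal_distance(front))
    ```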

  4. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they are different regarding the interface, convenience, speed of computation, and their characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
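
    A minimal sketch of the kind of analysis described: compute an objective metric (here PSNR) per reconstructed image and then the Spearman rank correlation between the two programs' scores. The per-image scores below are invented placeholders, and scipy is assumed to be available.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def psnr(reference, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
        mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    print(psnr([[255, 0], [128, 64]], [[250, 5], [120, 70]]))   # toy 2x2 example

    # Hypothetical per-image metric scores from two codecs at matched compression ratios.
    scores_codec_a = [41.2, 38.7, 35.1, 31.9, 28.4]
    scores_codec_b = [41.0, 38.9, 35.3, 31.5, 28.6]
    rho, p = spearmanr(scores_codec_a, scores_codec_b)
    print(f"Spearman r = {rho:.3f}, p = {p:.3g}")
    ```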

  5. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  6. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    NASA Technical Reports Server (NTRS)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  7. Multi-Robot Assembly Strategies and Metrics.

    PubMed

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  8. Multiuser signal detection using sequential decoding

    NASA Astrophysics Data System (ADS)

    Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.

    1990-05-01

    The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.

  9. Multi-Robot Assembly Strategies and Metrics

    PubMed Central

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  10. Clinically relevant hypoglycemia prediction metrics for event mitigation.

    PubMed

    Harvey, Rebecca A; Dassau, Eyal; Zisser, Howard C; Bevier, Wendy; Seborg, Dale E; Jovanovič, Lois; Doyle, Francis J

    2012-08-01

    The purpose of this study was to develop a method to compare hypoglycemia prediction algorithms and choose parameter settings for different applications, such as triggering insulin pump suspension or alerting for rescue carbohydrate treatment. Hypoglycemia prediction algorithms with different parameter settings were implemented on an ambulatory dataset containing 490 days from 30 subjects with type 1 diabetes mellitus using the Dexcom™ (San Diego, CA) SEVEN™ continuous glucose monitoring system. The performance was evaluated using a proposed set of metrics representing the true-positive ratio, false-positive rate, and distribution of warning times. A prospective, in silico study was performed to show the effect of using different parameter settings to prevent or rescue from hypoglycemia. The retrospective study results suggest the parameter settings for different methods of hypoglycemia mitigation. When rescue carbohydrates are used, a high true-positive ratio, a minimal false-positive rate, and alarms with short warning time are desired. These objectives were met with a 30-min prediction horizon and two successive flags required to alarm: 78% of events were detected with 3.0 false alarms/day and 66% probability of alarms occurring within 30 min of the event. This parameter setting selection was confirmed in silico: treating with rescue carbohydrates reduced the duration of hypoglycemia from 14.9% to 0.5%. However, for a different method, such as pump suspension, this parameter setting only reduced hypoglycemia to 8.7%, as can be expected by the low probability of alarming more than 30 min ahead. The proposed metrics allow direct comparison of hypoglycemia prediction algorithms and selection of parameter settings for different types of hypoglycemia mitigation, as shown in the prospective in silico study in which hypoglycemia was alerted or treated with rescue carbohydrates.
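
    The alarm logic described (a 30-minute prediction horizon with two successive flags required to alarm) can be sketched as follows. The linear-extrapolation predictor, 5-minute sampling, and 70 mg/dL threshold are assumptions made for illustration, not the exact algorithm evaluated in the study.

    ```python
    def hypoglycemia_alarms(glucose, threshold=70.0, horizon_steps=6,
                            flags_to_alarm=2, sample_minutes=5):
        """Flag CGM samples whose linear extrapolation over the prediction horizon
        falls below the hypoglycemia threshold; alarm after consecutive flags.

        glucose: list of CGM readings in mg/dL at a fixed sample interval.
        horizon_steps: 6 samples * 5 min = assumed 30-minute prediction horizon.
        """
        alarms, consecutive = [], 0
        for k in range(1, len(glucose)):
            rate = glucose[k] - glucose[k - 1]                 # mg/dL per sample
            predicted = glucose[k] + rate * horizon_steps      # linear extrapolation
            consecutive = consecutive + 1 if predicted < threshold else 0
            if consecutive >= flags_to_alarm:
                alarms.append(k * sample_minutes)              # alarm time in minutes
        return alarms

    print(hypoglycemia_alarms([120, 112, 104, 96, 88, 80, 74, 70]))
    ```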

  11. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources regardless of how each rover decides how to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.

    In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performances with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases, performing classification without imputation sometimes yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.

  13. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A.

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100–400 ml). In general, for the small volume difference (100–150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Conclusions: Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
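
    The marker-based part of the evaluation amounts to warping the marker coordinates with the deformation vector field and measuring the residual distances to the reference markers. A minimal sketch, assuming the vector field has already been sampled at the marker positions and maps moving to reference space:

    ```python
    import numpy as np

    def marker_registration_error(ref_markers, moving_markers, dvf_at_markers):
        """Residual marker error after DIR: apply the deformation vectors sampled at
        the moving-image marker positions and compare with the reference markers.

        All inputs are (N, 3) arrays in mm; the moving-to-reference convention of
        dvf_at_markers is an assumption of this sketch.
        """
        ref = np.asarray(ref_markers, float)
        warped = np.asarray(moving_markers, float) + np.asarray(dvf_at_markers, float)
        errors = np.linalg.norm(warped - ref, axis=1)
        return errors.mean(), errors.max()

    ref = [[10, 20, 30], [40, 50, 60]]
    mov = [[12, 19, 31], [43, 52, 58]]
    dvf = [[-1.8, 1.1, -0.9], [-2.9, -2.1, 2.2]]
    print(marker_registration_error(ref, mov, dvf))
    ```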

  14. A Methodology to Analyze Photovoltaic Tracker Uptime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew T; Ruth, Dan

    A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented, so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days that are not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on 2 different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
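
    A minimal sketch of the core idea: correlate one day of measured power against a library of simulated plane-of-array irradiance profiles and keep the best match if it clears a threshold. The profile shapes, threshold, and function name are illustrative assumptions rather than the paper's exact thresholds and logic sequence.

    ```python
    import numpy as np

    def classify_tracker_day(power, simulated_poa_profiles, r_threshold=0.95):
        """Correlate one day of measured PV power with simulated plane-of-array
        irradiance profiles (e.g., tracking vs. stuck at a fixed tilt) and return
        the best-matching profile, or 'unclassified' below the threshold."""
        power = np.asarray(power, float)
        best_name, best_r = None, -np.inf
        for name, poa in simulated_poa_profiles.items():
            r = np.corrcoef(power, np.asarray(poa, float))[0, 1]
            if r > best_r:
                best_name, best_r = name, r
        return (best_name, best_r) if best_r >= r_threshold else ("unclassified", best_r)

    t = np.linspace(0, np.pi, 48)
    profiles = {"tracking": np.sin(t) ** 0.5,        # flat-topped tracking-day shape
                "stuck_fixed_tilt": np.sin(t)}       # bell-shaped fixed-tilt shape
    print(classify_tracker_day(np.sin(t) ** 0.5 + 0.01, profiles))
    ```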

  15. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS and sparse VSS-NLMS algorithms is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
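
    For orientation, the sketch below shows a common sparse NLMS variant (a zero-attracting penalty added to the normalized LMS update) identifying a sparse channel. It uses an invariable step size; a VSS scheme would additionally adapt mu over time. This is a generic illustration, not the specific VSS-NLMS algorithms proposed in the paper.

    ```python
    import numpy as np

    def zero_attracting_nlms(x, d, taps=16, mu=0.5, rho=1e-4, eps=1e-8):
        """Sparse NLMS sketch: standard normalized LMS update plus a small
        zero-attracting (l1) term that pulls inactive taps toward zero.
        mu is a fixed step size here; adapting it would give a VSS scheme."""
        w = np.zeros(taps)
        for n in range(taps, len(x)):
            u = x[n - taps + 1:n + 1][::-1]           # most recent input vector
            e = d[n] - w @ u                          # a-priori estimation error
            w += mu * e * u / (u @ u + eps)           # normalized LMS step
            w -= rho * np.sign(w)                     # sparsity-promoting attraction
        return w

    # Toy identification of a sparse 16-tap channel.
    rng = np.random.default_rng(1)
    h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]
    x = rng.standard_normal(4000)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    print(np.round(zero_attracting_nlms(x, d), 2))
    ```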

  16. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS and sparse VSS-NLMS algorithms is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  17. Cost Metric Algorithms for Internetwork Applications

    DTIC Science & Technology

    1989-04-01

    Naval Ocean Systems Center, Code 854. Technical report NOSC TR 1284; released under authority of M. B. Vineberg, Head, System Design and Battle Force and Theater Architecture Branch. Approved for public release; distribution unlimited.

  18. Quantum Adiabatic Brachistochrone

    NASA Astrophysics Data System (ADS)

    Rezakhani, A. T.; Kuo, W.-J.; Hamma, A.; Lidar, D. A.; Zanardi, P.

    2009-08-01

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  19. Quantum adiabatic brachistochrone.

    PubMed

    Rezakhani, A T; Kuo, W-J; Hamma, A; Lidar, D A; Zanardi, P

    2009-08-21

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  20. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  1. Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living

    PubMed Central

    Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.

    2015-01-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612

  2. Assessing upper extremity motor function in practice of virtual activities of daily living.

    PubMed

    Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T

    2015-03-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.

  3. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    PubMed

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
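
    The retrieval distance described is a weighted Hamming distance over a learned binary representation. A minimal sketch of that final distance computation, with arbitrary illustrative bit weights standing in for the boosted weights:

    ```python
    import numpy as np

    def weighted_hamming(code_a, code_b, weights):
        """Weighted Hamming distance between two binary codes: sum the per-bit
        weights over the positions where the codes disagree."""
        a, b, w = (np.asarray(v) for v in (code_a, code_b, weights))
        return float(np.sum(w * (a != b)))

    # Toy example with hypothetical (not learned) bit weights.
    print(weighted_hamming([1, 0, 1, 1, 0], [1, 1, 1, 0, 0], [0.9, 0.4, 0.7, 1.3, 0.2]))
    ```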

  4. Symmetry-based detection and diagnosis of DCIS in breast MRI

    NASA Astrophysics Data System (ADS)

    Srikantha, Abhilash; Harz, Markus T.; Newstead, Gillian; Wang, Lei; Platel, Bram; Hegenscheid, Katrin; Mann, Ritse M.; Hahn, Horst K.; Peitgen, Heinz-Otto

    2013-02-01

    The delineation and diagnosis of non-mass-like lesions, most notably DCIS (ductal carcinoma in situ), is among the most challenging tasks in breast MRI reading. Even for human observers, DCIS is not always easy to differentiate from patterns of active parenchymal enhancement or from benign alterations of breast tissue. In this light, it is no surprise that CADe/CADx approaches often completely fail to classify DCIS. Of the several approaches that have tried to devise such computer aid, none achieve performances similar to mass detection and classification in terms of sensitivity and specificity. In our contribution, we show a novel approach to combine a newly proposed metric of anatomical breast symmetry calculated on subtraction images of dynamic contrast-enhanced (DCE) breast MRI, descriptive kinetic parameters, and lesion candidate morphology to achieve performances comparable to computer-aided methods used for masses. We have based the development of the method on DCE MRI data of 18 DCIS cases with hand-annotated lesions, complemented by DCE-MRI data of nine normal cases. We propose a novel metric to quantify the symmetry of contralateral breasts and derive a strong indicator for potentially malignant changes from this metric. Also, we propose a novel metric for the orientation of a finding towards a fixed point (the nipple). Our combined scheme then achieves a sensitivity of 89% with a specificity of 78%, matching CAD results for breast MRI on masses. The processing pipeline is intended to run on a CAD server, hence we designed all processing to be automated and free of per-case parameters. We expect that the detection results of our proposed non-mass aimed algorithm will complement other CAD algorithms, or ideally be joined with them in a voting scheme.

  5. Not seeing the forest for the trees: size of the minimum spanning trees (MSTs) forest and branch significance in MST-based phylogenetic analysis.

    PubMed

    Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P

    2015-01-01

    Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing data (MLST) and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented, results from criteria implemented in the algorithm that must be based in biologically plausible models.
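
    For intuition, the fraction of spanning trees containing a given edge can be computed exactly with Kirchhoff's matrix tree theorem: count the spanning trees of the graph and of the graph with that edge deleted. The sketch below handles the unweighted case, where every spanning tree is minimum; the paper's metric restricts the count to the equivalent MSTs of a weighted graph.

    ```python
    import numpy as np

    def num_spanning_trees(adj):
        """Kirchhoff's matrix tree theorem: any cofactor of the graph Laplacian."""
        adj = np.asarray(adj, float)
        lap = np.diag(adj.sum(axis=1)) - adj
        return round(np.linalg.det(lap[1:, 1:]))

    def spanning_edge_betweenness(adj, u, v):
        """Fraction of spanning trees that contain edge (u, v) in an unweighted graph."""
        total = num_spanning_trees(adj)
        without = [list(row) for row in adj]
        without[u][v] = without[v][u] = 0          # delete the edge
        return (total - num_spanning_trees(without)) / total

    # 4-cycle: every edge lies in 3 of the 4 spanning trees.
    C4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
    print(spanning_edge_betweenness(C4, 0, 1))     # -> 0.75
    ```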

  6. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
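
    The cosine versus Euclidean comparison hinges on the fact that the cosine distance depends only on the direction of the supervectors, which is the point of the directional-scattering argument. A minimal sketch of the two distances (generic definitions, not tied to the paper's GMM supervector pipeline):

    ```python
    import numpy as np

    def cosine_distance(u, v):
        """1 - cosine similarity; depends only on the vectors' directions."""
        u, v = np.asarray(u, float), np.asarray(v, float)
        return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def euclidean_distance(u, v):
        return float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))

    a = np.array([1.0, 2.0, 3.0])
    b = 4.0 * a                      # same direction, different magnitude
    print(cosine_distance(a, b), euclidean_distance(a, b))   # ~0.0 vs. large
    ```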

  7. Routing and Scheduling Algorithms for WirelessHART Networks: A Survey

    PubMed Central

    Nobre, Marcelo; Silva, Ivanovitch; Guedes, Luiz Affonso

    2015-01-01

    Wireless communication is a trend nowadays for the industrial environment. A number of different technologies have emerged as solutions satisfying strict industrial requirements (e.g., WirelessHART, ISA100.11a, WIA-PA). As the industrial environment presents a vast range of applications, adopting an adequate solution for each case is vital to obtain good performance of the system. In this context, the routing and scheduling schemes associated with these technologies have a direct impact on important features, like latency and energy consumption. This situation has led to the development of a vast number of routing and scheduling schemes. In the present paper, we focus on the WirelessHART technology, emphasizing its most important routing and scheduling aspects in order to guide both end users and the developers of new algorithms. Furthermore, we provide a detailed literature review of the newest routing and scheduling techniques for WirelessHART, discussing each of their features. These routing algorithms have been evaluated in terms of their objectives, metrics, the usage of the WirelessHART structures and validation method. In addition, the scheduling algorithms were also evaluated by metrics, validation, objectives and, in addition, by multiple superframe support, as well as by the redundancy method used. Moreover, this paper briefly presents some insights into the main WirelessHART simulation modules available, in order to provide viable test platforms for the routing and scheduling algorithms. Finally, some open issues in WirelessHART routing and scheduling algorithms are discussed. PMID:25919371

  8. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
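
    The denoising ingredient of the proposed reconstruction is wavelet-domain shrinkage. The sketch below soft-thresholds the detail coefficients of a standard 2-D wavelet decomposition using PyWavelets; the paper uses wavelet packets inside a gradient-based Douglas-Rachford iteration, so this is only the shrinkage step under simplifying assumptions (wavelet choice, level, and threshold are arbitrary).

    ```python
    import numpy as np
    import pywt

    def wavelet_shrinkage(image, wavelet="db4", level=3, thresh=0.04):
        """Soft-threshold the detail coefficients of a 2-D wavelet decomposition
        and reconstruct -- the denoising step only, not the full algorithm."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(band, thresh * np.abs(band).max(), mode="soft")
                  for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(shrunk, wavelet)

    noisy = np.random.default_rng(0).standard_normal((64, 64))
    print(wavelet_shrinkage(noisy).shape)
    ```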

  9. Reproducibility of graph metrics of human brain structural networks.

    PubMed

    Duda, Jeffrey T; Cook, Philip A; Gee, James C

    2014-01-01

    Recent interest in human brain connectivity has led to the application of graph theoretical analysis to human brain structural networks, in particular white matter connectivity inferred from diffusion imaging and fiber tractography. While these methods have been used to study a variety of patient populations, there has been less examination of the reproducibility of these methods. A number of tractography algorithms exist and many of these are known to be sensitive to user-selected parameters. The methods used to derive a connectivity matrix from fiber tractography output may also influence the resulting graph metrics. Here we examine how these algorithm and parameter choices influence the reproducibility of proposed graph metrics on a publicly available test-retest dataset consisting of 21 healthy adults. The dice coefficient is used to examine topological similarity of constant density subgraphs both within and between subjects. Seven graph metrics are examined here: mean clustering coefficient, characteristic path length, largest connected component size, assortativity, global efficiency, local efficiency, and rich club coefficient. The reproducibility of these network summary measures is examined using the intraclass correlation coefficient (ICC). Graph curves are created by treating the graph metrics as functions of a parameter such as graph density. Functional data analysis techniques are used to examine differences in graph measures that result from the choice of fiber tracking algorithm. The graph metrics consistently showed good levels of reproducibility as measured with ICC, with the exception of some instability at low graph density levels. The global and local efficiency measures were the most robust to the choice of fiber tracking algorithm.
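
    The topological-similarity part of the analysis uses the Dice coefficient between the edge sets of constant-density subgraphs. A minimal sketch for undirected edge lists:

    ```python
    def edge_set_dice(edges_a, edges_b):
        """Dice coefficient between two graphs' edge sets (undirected edges
        stored as unordered node pairs), as used to compare subgraphs between
        scan sessions."""
        a = {frozenset(e) for e in edges_a}
        b = {frozenset(e) for e in edges_b}
        if not a and not b:
            return 1.0
        return 2.0 * len(a & b) / (len(a) + len(b))

    print(edge_set_dice([(1, 2), (2, 3), (3, 4)], [(2, 1), (3, 4), (4, 5)]))  # ~0.667
    ```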

  10. Predicting surface fuel models and fuel metrics using lidar and CIR imagery in a dense mixed conifer forest

    Treesearch

    Marek K. Jakubowksi; Qinghua Guo; Brandon Collins; Scott Stephens; Maggi Kelly

    2013-01-01

    We compared the ability of several classification and regression algorithms to predict forest stand structure metrics and standard surface fuel models. Our study area spans a dense, topographically complex Sierra Nevada mixed-conifer forest. We used clustering, regression trees, and support vector machine algorithms to analyze high density (average 9 pulses/m

  11. Kleinberg Complex Networks

    DTIC Science & Technology

    2014-10-21

    linear combinations of paths. This project featured research on two classes of routing problems, namely traveling salesman problems and multicommodity...flows. One highlight of this research was our discovery of a polynomial-time algorithm for the metric traveling salesman s-t path problem whose...metric TSP would resolve one of the most venerable open problems in the theory of approximation algorithms. Our research on traveling salesman

  12. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can be applied also to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improve the performance of GAs.

  13. A novel rotational invariants target recognition method for rotating motion blurred images

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    The image formed by the sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of first restoring the image and then identifying the target can improve the recognition rate, it is slow. To solve this problem, a model that extracts rotation-invariant features from the blurred image was constructed, so that the target can be recognized directly. The model includes three metric layers whose descriptive power ranges from low to high: a gray-value statistics algorithm, an improved circular projection transformation algorithm, and rotation-convolution moment invariants. The layer with the lowest descriptive ability serves as the input stage and gradually eliminates non-target pixels from the degraded image. Experimental results show that the proposed model improves the correct recognition rate for blurred images and achieves a good trade-off between computational complexity and descriptive power.

  14. Evaluation of a deformable registration algorithm for subsequent lung computed tomography imaging during radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stützer, Kristin; Haase, Robert; Exner, Florian

    2016-09-15

    Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.

  15. Important LiDAR metrics for discriminating forest tree species in Central Europe

    NASA Astrophysics Data System (ADS)

    Shi, Yifang; Wang, Tiejun; Skidmore, Andrew K.; Heurich, Marco

    2018-03-01

    Numerous airborne LiDAR-derived metrics have been proposed for classifying tree species. Yet an in-depth ecological and biological understanding of the significance of these metrics for tree species mapping remains largely unexplored. In this paper, we evaluated the performance of 37 frequently used LiDAR metrics derived under leaf-on and leaf-off conditions, respectively, for discriminating six different tree species in a natural forest in Germany. We firstly assessed the correlation between these metrics. Then we applied a Random Forest algorithm to classify the tree species and evaluated the importance of the LiDAR metrics. Finally, we identified the most important LiDAR metrics and tested their robustness and transferability. Our results indicated that about 60% of LiDAR metrics were highly correlated to each other (|r| > 0.7). There was no statistically significant difference in tree species mapping accuracy between the use of leaf-on and leaf-off LiDAR metrics. However, combining leaf-on and leaf-off LiDAR metrics significantly increased the overall accuracy from 58.2% (leaf-on) and 62.0% (leaf-off) to 66.5% as well as the kappa coefficient from 0.47 (leaf-on) and 0.51 (leaf-off) to 0.58. Radiometric features, especially intensity related metrics, provided more consistent and significant contributions than geometric features for tree species discrimination. Specifically, the mean intensity of first-or-single returns as well as the mean value of echo width were identified as the most robust LiDAR metrics for tree species discrimination. These results indicate that metrics derived from airborne LiDAR data, especially radiometric metrics, can aid in discriminating tree species in a mixed temperate forest, and represent candidate metrics for tree species classification and monitoring in Central Europe.

  16. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms to improve the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied the effect of the algorithm on noise reduction and image registration accuracy. To study the effect of noise reduction, we have calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as a similarity metric. The PSNR, MI and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement of PSNR of 10 dB with respect to the noisy image was calculated. The improvement of MI and PCC was 9% and 2% respectively. Conclusion: Total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm had shown promising results in reducing the noise from low-dose CBCT images and improving the similarity metric in terms of MI and PCC.

  17. Fault-tolerant bandwidth reservation strategies for data transfers in high-performance networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Liudong; Zhu, Michelle M.; Wu, Chase Q.

    2016-11-22

    Many next-generation e-science applications need fast and reliable transfer of large volumes of data with guaranteed performance, which is typically enabled by the bandwidth reservation service in high-performance networks. One prominent issue in such network environments with large footprints is that node and link failures are inevitable, hence potentially degrading the quality of data transfer. We consider two generic types of bandwidth reservation requests (BRRs) concerning data transfer reliability: (i) to achieve the highest data transfer reliability under a given data transfer deadline, and (ii) to achieve the earliest data transfer completion time while satisfying a given data transfer reliability requirement. We propose two periodic bandwidth reservation algorithms with rigorous optimality proofs to optimize the scheduling of individual BRRs within BRR batches. The efficacy of the proposed algorithms is illustrated through extensive simulations in comparison with scheduling algorithms widely adopted in production networks in terms of various performance metrics.

  18. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  19. Woofer-tweeter adaptive optics scanning laser ophthalmoscopic imaging based on Lagrange-multiplier damped least-squares algorithm.

    PubMed

    Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A

    2011-07-01

    We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both a single DM controller and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics, including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.

  20. An experimental comparison of online object-tracking algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan

    2011-09-01

    This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods including the IVT [1], VRT [2], FragT [3], BoostT [4], SemiT [5], BeSemiT [6], L1T [7], MILT [8], VTD [9] and TLD [10] algorithms on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strength and weakness of these algorithms.

  1. Geographically Modified PageRank Algorithms: Identifying the Spatial Concentration of Human Movement in a Geospatial Network.

    PubMed

    Chin, Wei-Chien-Benny; Wen, Tzai-Hung

    2015-01-01

    A network approach, which simplifies geographic settings as a form of nodes and links, emphasizes the connectivity and relationships of spatial features. Topological networks of spatial features are used to explore geographical connectivity and structures. The PageRank algorithm, a network metric, is often used to help identify important locations where people or automobiles concentrate in the geographical literature. However, geographic considerations, including proximity and location attractiveness, are ignored in most network metrics. The objective of the present study is to propose two geographically modified PageRank algorithms-Distance-Decay PageRank (DDPR) and Geographical PageRank (GPR)-that incorporate geographic considerations into PageRank algorithms to identify the spatial concentration of human movement in a geospatial network. Our findings indicate that in both intercity and within-city settings the proposed algorithms more effectively capture the spatial locations where people reside than traditional commonly-used network metrics. In comparing location attractiveness and distance decay, we conclude that the concentration of human movement is largely determined by the distance decay. This implies that geographic proximity remains a key factor in human mobility.
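    The sketch below illustrates, under stated assumptions rather than the paper's exact formulation, how a distance-decay weight (here 1/d**beta) might be folded into the PageRank transition matrix before the usual power iteration; the DDPR and GPR definitions themselves are not reproduced, and the toy adjacency and distance matrices are invented.

    import numpy as np

    def distance_decay_pagerank(adj, dist, beta=1.0, damping=0.85, iters=100):
        decayed = np.power(dist, beta, where=dist > 0, out=np.ones_like(dist))
        w = adj / decayed                                  # edge weight scaled by 1/d**beta
        col_sums = w.sum(axis=0)
        col_sums[col_sums == 0] = 1.0
        P = w / col_sums                                   # column-stochastic transition matrix
        n = adj.shape[0]
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = damping * P @ r + (1.0 - damping) / n
        return r / r.sum()

    adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # toy flow network
    dist = np.array([[1, 2, 8], [2, 1, 3], [8, 3, 1]], float)  # pairwise distances
    print(distance_decay_pagerank(adj, dist))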

  2. A framework for evaluating mixture analysis algorithms

    NASA Astrophysics Data System (ADS)

    Dasaratha, Sridhar; Vignesh, T. S.; Shanmukh, Sarat; Yarra, Malathi; Botonjic-Sehic, Edita; Grassi, James; Boudries, Hacene; Freeman, Ivan; Lee, Young K.; Sutherland, Scott

    2010-04-01

    In recent years, several sensing devices capable of identifying unknown chemical and biological substances have been commercialized. The success of these devices in analyzing real world samples is dependent on the ability of the on-board identification algorithm to de-convolve spectra of substances that are mixtures. To develop effective de-convolution algorithms, it is critical to characterize the relationship between the spectral features of a substance and its probability of detection within a mixture, as these features may be similar to or overlap with those of other substances in the mixture and in the library. While it has been recognized that these aspects pose challenges to mixture analysis, a systematic effort to quantify spectral characteristics and their impact is generally lacking. In this paper, we propose metrics that can be used to quantify these spectral features. Some of these metrics, such as a modification of the variance inflation factor, are derived from classical statistical measures used in regression diagnostics. We demonstrate that these metrics can be correlated to the accuracy of the substance's identification in a mixture. We also develop a framework for characterizing mixture analysis algorithms, using these metrics. Experimental results are then provided to show the application of this framework to the evaluation of various algorithms, including one that has been developed for a commercial device. The illustration is based on synthetic mixtures that are created from pure component Raman spectra measured on a portable device.
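    For the variance inflation factor that the paper modifies, a sketch of the classical quantity is shown below: each pure-component spectrum is regressed on the others and VIF_j = 1 / (1 - R^2_j). The spectra here are synthetic, and the paper's modified metric is not reproduced.

    import numpy as np

    def vif(spectra):
        """spectra: (n_points, n_components) matrix of pure-component spectra."""
        n, k = spectra.shape
        out = np.empty(k)
        for j in range(k):
            y = spectra[:, j]
            X = np.column_stack([np.ones(n), np.delete(spectra, j, axis=1)])  # add intercept
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ coef
            r2 = 1.0 - resid.var() / y.var()
            out[j] = 1.0 / max(1.0 - r2, 1e-12)
        return out

    rng = np.random.default_rng(0)
    base = rng.random((200, 3))
    library = np.column_stack([base, base[:, 0] + 0.05 * rng.random(200)])  # overlapping component
    print(vif(library))            # the near-duplicate component shows a large VIF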

  3. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  4. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  5. Research on cardiovascular disease prediction based on distance metric learning

    NASA Astrophysics Data System (ADS)

    Ni, Zhuang; Liu, Kui; Kang, Guixia

    2018-04-01

    Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited their strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method which treats each feature equally. The large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm adopting a cosine metric and LMNN, named COS-SUBLMNN, which pays more attention to local features of the data to overcome the shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, which demonstrates the effectiveness of the CVD risk prediction model based on COS-SUBLMNN.

  6. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    PubMed Central

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or are not available for large scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n^2) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673
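    The core ordering idea, placing high-betweenness nodes first, can be sketched as below with networkx's built-in (sequential) betweenness routines; the paper's parallel betweenness algorithm and the actual layout step are not reproduced, and the graph is a synthetic stand-in.

    import networkx as nx

    G = nx.barabasi_albert_graph(200, 2, seed=1)           # stand-in for a gene network
    node_bc = nx.betweenness_centrality(G)
    edge_bc = nx.edge_betweenness_centrality(G)

    placement_order = sorted(G.nodes, key=node_bc.get, reverse=True)
    print(placement_order[:10])                            # nodes to position first
    print(max(edge_bc, key=edge_bc.get))                   # highest-flow edge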

  7. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  8. Distributed consensus for metamorphic systems using a gossip algorithm for CAT(0) metric spaces

    NASA Astrophysics Data System (ADS)

    Bellachehab, Anass; Jakubowicz, Jérémie

    2015-01-01

    We present an application of distributed consensus algorithms to metamorphic systems. A metamorphic system is a set of identical units that can self-assemble to form a rigid structure. For instance, one can think of a robotic arm composed of multiple links connected by joints. The system can change its shape in order to adapt to different environments via reconfiguration of its constituting units. We assume in this work that several metamorphic systems form a network: two systems are connected whenever they are able to communicate with each other. The aim of this paper is to propose a distributed algorithm that synchronizes all the systems in the network. Synchronizing means that all the systems should end up having the same configuration. This aim is achieved in two steps: (i) we cast the problem as a consensus problem on a metric space and (ii) we use a recent distributed consensus algorithm that only makes use of metric notions.

  9. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.

  10. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research studies have shown that the peak signal-to-noise ratio (PSNR) is the most frequent watermarked image quality metric that is used for determining the levels of strength and weakness of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the most common metric used after attacks were applied to a watermarked image to verify the strength of the algorithm used. Many researchers have used these approaches to evaluate their algorithms. Although these strategies have been used for a long time, their value in reflecting the strength and weakness of watermarking algorithms remains limited. This paper considers this issue to determine the threshold values of these two parameters in reflecting the amount of strength and weakness of the watermarking algorithms. We used our novel watermarking technique for embedding four watermarks in intermediate significant bits (ISB) of six image files one by one by replacing the image pixels with new pixels and, at the same time, keeping the new pixels very close to the original pixels. This approach gains improved robustness based on the PSNR and NCC values that were gathered. A neural network model was built that uses the image quality metric (PSNR and NCC) values obtained from the ISB watermarking of the six grey-scale images as the desired output and is trained on each watermarked image's PSNR and NCC. The neural network predicts the watermarked image's PSNR together with NCC after the attacks when a portion of the output of the same or different types of image quality metrics (PSNR and NCC) is obtained. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
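    Standard definitions of the two image quality metrics used above can be sketched as follows for 8-bit grey-scale images. Note that NCC conventions vary (this one does not subtract means), and this is not the authors' implementation; the images are synthetic.

    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def ncc(original, watermarked):
        a = original.astype(float).ravel()
        b = watermarked.astype(float).ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))
    marked = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
    print(psnr(img, marked), ncc(img, marked))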

  11. Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Science Goals and Preliminary Analysis

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Rutz, J. J.; Wehner, M. F.; Ralph, F. M.; Leung, L. R.

    2017-12-01

    The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is a community effort whose purpose is to quantify uncertainties in atmospheric river (AR) research solely due to different identification and tracking techniques. Atmospheric rivers transport significant amounts of moisture in long, narrow filamentary bands, typically travelling from the subtropics to the mid-latitudes. They are an important source of regional precipitation impacting local hydroclimate, and in extreme cases, cause severe flooding and infrastructure damage in local communities. Our understanding of ARs, from forecast skill to future climate projections, hinges on how we define ARs. By comparing a diverse set of detection algorithms, the uncertainty in our definition of ARs (including statistics and climatology), and the implications of those uncertainties, can be analyzed and quantified. ARTMIP is divided into two broad phases that aim to answer science questions impacted by the choice of detection algorithm. How robust are AR metrics such as climatology, storm duration, and relationship to extreme precipitation? How are the AR metrics in future climate projections impacted by the choice of algorithm? Some algorithms rely on threshold values for water vapor. In a warmer world, the background state, by definition, is moister due to the Clausius-Clapeyron relationship, and could potentially skew results. Can uncertainty bounds be accurately placed on each metric? Tier 1 participants will apply their algorithms to a high resolution common dataset (MERRA2) and provide the broader group with AR metrics (frequency, location, duration, etc.). Tier 2 research will encompass sensitivity studies regarding resolution, reanalysis choice, and future climate change scenarios. ARTMIP is currently in the Tier 1 phase and will begin Tier 2 in 2018. Preliminary metrics and analysis from Tier 1 will be presented.

  12. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    PubMed

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  13. Virtual Network Embedding via Monte Carlo Tree Search.

    PubMed

    Haeri, Soroush; Trajkovic, Ljiljana

    2018-02-01

    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue to cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve embedding solutions.

  14. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
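    A toy sketch of the general idea follows: a per-polygon complexity score built from the vertex count and the raster pixel count of the bounding rectangle, and a greedy allocation that balances total complexity across processes. The exact weighting used in DMPC is assumed, not reproduced, and the example polygons are invented.

    import heapq

    def complexity(polygon, cell=1.0):
        """Vertex count plus raster pixel count of the bounding rectangle."""
        xs = [p[0] for p in polygon]
        ys = [p[1] for p in polygon]
        pixels = ((max(xs) - min(xs)) / cell) * ((max(ys) - min(ys)) / cell)
        return len(polygon) + pixels

    def allocate(polygons, n_procs):
        """Greedy longest-processing-time allocation by descending complexity."""
        heap = [(0.0, p) for p in range(n_procs)]          # (current load, process id)
        buckets = {p: [] for p in range(n_procs)}
        for idx, poly in sorted(enumerate(polygons),
                                key=lambda kv: complexity(kv[1]), reverse=True):
            load, proc = heapq.heappop(heap)               # least-loaded process
            buckets[proc].append(idx)
            heapq.heappush(heap, (load + complexity(poly), proc))
        return buckets

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    sliver = [(0, 0), (40, 0), (40, 1), (0, 1), (0, 0.5)]
    print(allocate([square, sliver, square, sliver], n_procs=2))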

  15. An Unsupervised Online Spike-Sorting Framework.

    PubMed

    Knieling, Simeon; Sridharan, Kousik S; Belardinelli, Paolo; Naros, Georgios; Weiss, Daniel; Mormann, Florian; Gharabaghi, Alireza

    2016-08-01

    Extracellular neuronal microelectrode recordings can include action potentials from multiple neurons. To separate spikes from different neurons, they can be sorted according to their shape, a procedure referred to as spike-sorting. Several algorithms have been reported to solve this task. However, when clustering outcomes are unsatisfactory, most of them are difficult to adjust to achieve the desired results. We present an online spike-sorting framework that uses feature normalization and weighting to maximize the distinctiveness between different spike shapes. Furthermore, multiple criteria are applied to either facilitate or prevent cluster fusion, thereby enabling experimenters to fine-tune the sorting process. We compare our method to established unsupervised offline (Wave_Clus (WC)) and online (OSort (OS)) algorithms by examining their performance in sorting various test datasets using two different scoring systems (AMI and the Adamos metric). Furthermore, we evaluate sorting capabilities on intra-operative recordings using established quality metrics. Compared to WC and OS, our algorithm achieved comparable or higher scores on average and produced more convincing sorting results for intra-operative datasets. Thus, the presented framework is suitable for both online and offline analysis and could substantially improve the quality of microelectrode-based data evaluation for research and clinical application.

  16. Performance of biometric quality measures.

    PubMed

    Grother, Patrick; Tabassi, Elham

    2007-04-01

    We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
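    An error-versus-reject characteristic of the kind advanced above can be sketched as follows: samples are rejected in order of increasing quality and the error rate is recomputed on the remainder. The quality values and match outcomes below are synthetic, not real biometric data.

    import numpy as np

    rng = np.random.default_rng(0)
    quality = rng.random(1000)                             # scalar quality per sample
    # errors are made more likely for low-quality samples
    error = rng.random(1000) < 0.3 * (1.0 - quality)

    order = np.argsort(quality)                            # worst quality first
    for frac in np.linspace(0.0, 0.5, 6):
        kept = order[int(frac * len(order)):]
        print(f"reject {frac:4.0%}  error rate {error[kept].mean():.3f}")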

  17. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atanassov, E.; Dimitrov, D., E-mail: d.slavov@bas.bg, E-mail: emanouil@parallel.bas.bg, E-mail: gurov@bas.bg; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
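    The kind of experiment described can be sketched as below: a European call priced by Monte Carlo with pseudorandom draws and with a scrambled Sobol sequence (scipy.stats.qmc), compared against the closed-form price. Energy and space measurements, and the authors' specific implementations, are outside the scope of this sketch.

    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 2 ** 14

    def price_from_uniforms(u):
        z = norm.ppf(u)                                    # uniforms -> standard normals
        st = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(st - K, 0.0).mean()

    u_prng = np.random.default_rng(0).random(n)
    u_sobol = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()

    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    print("pseudorandom:", price_from_uniforms(u_prng))
    print("Sobol:       ", price_from_uniforms(u_sobol))
    print("Black-Scholes:", S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2))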

  18. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  19. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    NASA Astrophysics Data System (ADS)

    Ahsan, Muhammad

    The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. The detailed investigation of performance variation with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling the quantum circuit on a realistic quantum hardware architecture with physical resource constraints, (2) evaluating the performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify system components which crucially define the overall performance. Using this versatile tool, we explore a vast design space for modular quantum computer architecture based on trapped ions. We find that while success probability is uniformly determined by the fidelity of physical quantum operations, the execution time is a function of system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become major performance limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit comprising the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with the size of the problem describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology, we find optimal designs for different types of quantum adders. Finally, we show that 2,048-bit Shor's algorithm can be reliably executed within the resource budget of 1.5 million qubits.

  20. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared to using mutual information or other entropy-based metrics. The MI metric failed at around 2/3 of the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity based images, which would allow applications into multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
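    A sketch of the Jensen-Rényi divergence between discrete intensity histograms follows: the Rényi entropy of the weighted mixture minus the weighted Rényi entropies of the parts, with alpha in (0, 1). The level-set active contour machinery is not shown, and the histograms are synthetic.

    import numpy as np

    def renyi_entropy(p, alpha):
        p = p[p > 0]
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    def jensen_renyi(distributions, weights, alpha=0.5):
        distributions = [np.asarray(p, float) / np.sum(p) for p in distributions]
        weights = np.asarray(weights, float) / np.sum(weights)
        mixture = sum(w * p for w, p in zip(weights, distributions))
        return renyi_entropy(mixture, alpha) - sum(
            w * renyi_entropy(p, alpha) for w, p in zip(weights, distributions))

    bins = np.linspace(-4.0, 5.0, 33)                      # shared intensity bins
    p1 = np.histogram(np.random.default_rng(0).normal(0.0, 1.0, 5000), bins=bins)[0]
    p2 = np.histogram(np.random.default_rng(1).normal(1.0, 1.0, 5000), bins=bins)[0]
    print(jensen_renyi([p1, p2], weights=[0.5, 0.5]))      # larger for more dissimilar histograms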

  1. Geometrizing adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Rezakhani, Ali; Kuo, Wan-Jung; Hamma, Alioscia; Lidar, Daniel; Zanardi, Paolo

    2010-03-01

    A time-optimal approach to adiabatic quantum computation (AQC) is formulated. The corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. We demonstrate this geometrization through some examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance. The underlying connection with quantum phase transitions is also explored.

  2. Review, evaluation, and discussion of the challenges of missing value imputation for mass spectrometry-based label-free global proteomics

    DOE PAGES

    Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.; ...

    2015-04-09

    In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performances with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases, performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.

  3. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging owing to low quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal to noise ratio of the echo images. The proposed algorithm weights the image information regarding an integration feature between all the overlapping images, by using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.

  4. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  5. Automation of image data processing. (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities of automated processing of image data on the example of using PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of such a situation, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software dividing the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of the application of fully automatic generation of orthomosaics for both images obtained by a metric Vexell camera and a block of images acquired by a non-metric UAV system.

  6. How do we choose the best model? The impact of cross-validation design on model evaluation for buried threat detection in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.

    2018-04-01

    A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally upon the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others, however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
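    The spatial CV variants discussed above differ mainly in which label is used to group samples; a sketch with scikit-learn's GroupKFold is shown below. The features, labels, and site/lane groupings are hypothetical stand-ins, not the study's GPR data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import GroupKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 8))                          # stand-in GPR-derived features
    y = rng.integers(0, 2, size=600)                       # threat / non-threat labels
    site = rng.integers(0, 4, size=600)                    # hypothetical spatial labels
    lane = rng.integers(0, 12, size=600)

    for name, groups in [("site-based", site), ("lane-based", lane)]:
        scores = []
        for tr, te in GroupKFold(n_splits=4).split(X, y, groups=groups):
            clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[tr], y[tr])
            scores.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
        print(name, round(float(np.mean(scores)), 3))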

  7. Relative risk reduction is useful metric to standardize effect size for public health interventions for translational research.

    PubMed

    Mirzazadeh, Ali; Malekinejad, Mohsen; Kahn, James G

    2015-03-01

    Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of human immunodeficiency virus (HIV) testing and counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, eg, consistent condom use) and undesirable (70.5%, eg, inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR and provide the conceptual basis for advantages of RRR over other metrics. Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underused by original and review studies. Copyright © 2015 Elsevier Inc. All rights reserved.
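    Two common conversions to RRR can be sketched as below: directly from a risk ratio, and from an odds ratio via a baseline risk using the Zhang-Yu approximation, plus a helper for re-expressing a desirable outcome as its undesirable complement. The paper's full algorithm (for example, its handling of pre-post proportions and mean differences) is not reproduced, and the numbers are illustrative.

    def rrr_from_risk_ratio(rr):
        """RRR = 1 - RR for an undesirable outcome."""
        return 1.0 - rr

    def rrr_from_odds_ratio(odds_ratio, baseline_risk):
        """Convert OR to RR with the Zhang-Yu approximation, then to RRR."""
        rr = odds_ratio / (1.0 - baseline_risk + baseline_risk * odds_ratio)
        return 1.0 - rr

    def undesirable_risk_ratio(rr_desirable, baseline_desirable):
        """Re-express a risk ratio for a desirable outcome (e.g. consistent
        condom use) as the risk ratio for its undesirable complement."""
        return (1.0 - rr_desirable * baseline_desirable) / (1.0 - baseline_desirable)

    print(rrr_from_risk_ratio(0.8))                                # 20% reduction
    print(round(rrr_from_odds_ratio(0.6, baseline_risk=0.25), 3))
    print(round(rrr_from_risk_ratio(undesirable_risk_ratio(1.3, 0.4)), 3))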

  8. Quantifying loopy network architectures.

    PubMed

    Katifori, Eleni; Magnasco, Marcelo O

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  9. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields including medicine and meteorology. These include uses such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many available anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we must apply signal preprocessing techniques such as de-noising methods to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result will show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.

  10. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications.

    PubMed

    Achakulvisut, Titipat; Acuna, Daniel E; Ruangrong, Tulakan; Kording, Konrad

    2016-01-01

    Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they do for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society for Neuroscience Conference 2015. Human-curated topics are used to cross-validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.

  11. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications

    PubMed Central

    Achakulvisut, Titipat; Acuna, Daniel E.; Ruangrong, Tulakan; Kording, Konrad

    2016-01-01

    Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they do for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society for Neuroscience Conference 2015. Human-curated topics are used to cross-validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate. PMID:27383424
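    A minimal content-based recommender of the kind described in the two records above can be sketched with TF-IDF vectors and cosine similarity, as below. The library's actual pipeline and its human-curated validation are not reproduced, and the abstracts and vote are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [
        "spiking neural networks for motor cortex decoding",
        "deep learning models of visual cortex responses",
        "hippocampal place cells and spatial navigation",
        "convolutional networks predict neural responses in visual areas",
    ]
    liked = [1]                                            # items the user voted for

    vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    scores = cosine_similarity(vectors[liked], vectors).mean(axis=0)
    scores[liked] = -1.0                                   # do not re-recommend seen items
    ranking = scores.argsort()[::-1]
    print([abstracts[i] for i in ranking[:2]])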

  12. Analysis of precipitation data in Bangladesh through hierarchical clustering and multidimensional scaling

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Habibur; Matin, M. A.; Salma, Umma

    2017-12-01

    The precipitation patterns of seventeen locations in Bangladesh from 1961 to 2014 were studied using a cluster analysis and metric multidimensional scaling. In doing so, the current research applies four major hierarchical clustering methods to precipitation in conjunction with different dissimilarity measures and metric multidimensional scaling. A variety of clustering algorithms were used to provide multiple clustering dendrograms for a mixture of distance measures. The dendrogram of pre-monsoon rainfall for the seventeen locations formed five clusters. The pre-monsoon precipitation data for the areas of Srimangal and Sylhet were located in two clusters across the combination of five dissimilarity measures and four hierarchical clustering algorithms. The single linkage algorithm with Euclidean and Manhattan distances, the average linkage algorithm with the Minkowski distance, and Ward's linkage algorithm provided similar results with regard to monsoon precipitation. The results of the post-monsoon and winter precipitation data are shown in different types of dendrograms with disparate combinations of sub-clusters. The schematic geometrical representations of the precipitation data using metric multidimensional scaling showed that the post-monsoon rainfall of Cox's Bazar was located far from those of the other locations. The results of a box-and-whisker plot, different clustering techniques, and metric multidimensional scaling indicated that the precipitation behaviour of Srimangal and Sylhet during the pre-monsoon season, Cox's Bazar and Sylhet during the monsoon season, Maijdi Court and Cox's Bazar during the post-monsoon season, and Cox's Bazar and Khulna during the winter differed from those at other locations in Bangladesh.
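    The analysis pattern described above can be sketched as follows: hierarchical clustering of station rainfall series under several linkage methods, plus a metric MDS configuration of the same dissimilarities. The rainfall series are synthetic stand-ins for the seventeen stations.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    rainfall = rng.gamma(shape=2.0, scale=50.0, size=(17, 54))   # 17 stations x 54 years

    d = pdist(rainfall, metric="euclidean")   # "cityblock" or "minkowski" can be swapped in for non-Ward methods
    for method in ("single", "average", "complete", "ward"):
        labels = fcluster(linkage(d, method=method), t=5, criterion="maxclust")
        print(method, labels)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(squareform(d))
    print(coords[:3])                          # 2-D configuration of the first three stations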

  13. Technical Note: Using k-means clustering to determine the number and position of isocenters in MLC-based multiple target intracranial radiosurgery.

    PubMed

    Yock, Adam D; Kim, Gwe-Ya

    2017-09-01

    To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used this information, along with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (−0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
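    A sketch of the weighted variant described above follows: target centroids are clustered with k-means, with each target weighted (here by its volume) when isocenter positions are computed. The centroids, radii, and the volume weighting are assumptions for illustration, not the study's data or exact weighting scheme.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    centroids = rng.normal(scale=4.0, size=(7, 3))         # per-target centroids (cm)
    radii = rng.uniform(0.3, 1.2, size=7)                  # target radii (cm)
    weights = (4.0 / 3.0) * np.pi * radii ** 3             # weight each target by volume

    k = 2                                                  # number of isocenters
    unweighted = KMeans(n_clusters=k, n_init=10, random_state=0).fit(centroids)
    weighted = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
        centroids, sample_weight=weights)

    print("unweighted isocenters:\n", unweighted.cluster_centers_)
    print("weighted isocenters:\n", weighted.cluster_centers_)
    print("within-cluster sums of squares:", unweighted.inertia_, weighted.inertia_)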

  14. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Dose optimization was accomplished with 6-MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 to (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9% to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo-based dose predictions in accuracy and to have relatively faster computational times. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may very well be warranted.

  15. Is it worth changing pattern recognition methods for structural health monitoring?

    NASA Astrophysics Data System (ADS)

    Bull, L. A.; Worden, K.; Cross, E. J.; Dervilis, N.

    2017-05-01

    The key element of this work is to demonstrate alternative strategies for using pattern recognition algorithms whilst investigating structural health monitoring. This paper seeks to determine whether it makes any difference which of a range of established classification techniques is chosen: from decision trees and support vector machines to Gaussian processes. Classification algorithms are tested on adjustable synthetic data to establish performance metrics, then all techniques are applied to real SHM data. To aid the selection of training data, an informative chain of artificial intelligence tools is used to explore an active learning interaction between meaningful clusters of data.

  16. Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan; Jung, Yoon; Lee, Hanbong; Schier, Sebastian; Okuniek, Nikolai; Gerdes, Ingrid

    2016-01-01

    In this work, fast-time simulations were conducted by NASA using SARDA tools at Hamburg airport, and real-time simulations were conducted by DLR using CADEO and TRACC with the NLR ATM Research Simulator (NARSIM). The outputs are analyzed using a set of common metrics developed collaboratively by DLR and NASA. The proposed metrics are derived from the International Civil Aviation Organization's (ICAO) Key Performance Areas (KPAs) of capacity, efficiency, predictability, and environment, adapted to simulation studies. The results are examined to explore and compare the merits and shortcomings of the two approaches using the common performance metrics. Particular attention is paid to the concept of closed-loop, trajectory-based taxiing as well as the application of the US concept to a European airport. Both teams consider the trajectory-based surface operation concept a critical technology advance, not only in addressing current surface traffic management problems but also in its potential application to unmanned vehicle maneuvering on the airport surface, such as autonomous towing or TaxiBot [6][7], and even Remotely Piloted Aircraft (RPA). Based on this work, a future integration of TRACC and SOSS is described, aiming to bring the conflict-free, trajectory-based operation concept to US airports.

  17. Goal-oriented evaluation of binarization algorithms for historical document images

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady

    2013-01-01

    Binarization is of significant importance in document analysis systems. It is an essential first step, prior to further stages such as Optical Character Recognition (OCR), document segmentation, or enhancement of readability of the document after some restoration stages. Hence, proper evaluation of binarization methods to verify their effectiveness is of great value to the document analysis community. In this work, we perform a detailed goal-oriented evaluation of image quality assessment of the 18 binarization methods that participated in the DIBCO 2011 competition using the 16 historical document test images used in the contest. We are interested in the image quality assessment of the outputs generated by the different binarization algorithms as well as the OCR performance, where possible. We compare our evaluation of the algorithms based on human perception of quality to the DIBCO evaluation metrics. The results obtained provide an insight into the effectiveness of these methods with respect to human perception of image quality as well as OCR performance.

  18. Iterative Assessment of Statistically-Oriented and Standard Algorithms for Determining Muscle Onset with Intramuscular Electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-12-01

    The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by three blinded reviewers. Since the reliability of visual onset was high (ICC(1,1): 0.92), the mean of the six visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
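    For contrast with the Bayesian approach favoured above, the following is a minimal sketch of a conventional linear-envelope, threshold-style onset detector applied to a synthetic burst; the sampling rate, threshold factor, and run length are illustrative assumptions.

        import numpy as np

        def emg_onset(signal, fs, baseline_s=0.5, k=3.0, min_run_ms=25.0, smooth_ms=10.0):
            # Linear envelope: remove the baseline mean, rectify, then smooth with a moving average.
            rectified = np.abs(signal - signal[: int(baseline_s * fs)].mean())
            win = max(int(smooth_ms * fs / 1000.0), 1)
            envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
            baseline = envelope[: int(baseline_s * fs)]
            threshold = baseline.mean() + k * baseline.std()
            run = int(min_run_ms * fs / 1000.0)
            above = envelope > threshold
            for i in range(len(above) - run):
                if above[i : i + run].all():
                    return i / fs                  # onset time in seconds
            return None

        fs = 2000.0
        t = np.arange(0.0, 2.0, 1.0 / fs)
        sig = 0.02 * np.random.default_rng(9).standard_normal(t.size)
        sig[t >= 1.2] += 0.3 * np.sin(2.0 * np.pi * 120.0 * t[t >= 1.2])   # burst starting at 1.2 s
        print(emg_onset(sig, fs))                  # expected close to 1.2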

  19. Fuzzy Logic-based Intelligent Scheme for Enhancing QoS of Vertical Handover Decision in Vehicular Ad-hoc Networks

    NASA Astrophysics Data System (ADS)

    Azzali, F.; Ghazali, O.; Omar, M. H.

    2017-08-01

    The design of next generation networks in various technologies under the “Anywhere, Anytime” paradigm offers seamless connectivity across different coverage. A conventional algorithm such as the RSSThreshold algorithm, which uses only the received signal strength (RSS) as a metric, degrades handover performance in terms of handover latency, delay, packet loss, and handover failure probability. Moreover, an RSS-based algorithm is suitable only for horizontal handover decisions when examining quality of service (QoS), rather than for the vertical handover decisions required by advanced technologies. In the next generation network, vertical handover can be initiated based on the user’s convenience or choice rather than for connectivity reasons. This study proposes a vertical handover decision algorithm that uses a Fuzzy Logic (FL) algorithm to increase QoS performance in heterogeneous vehicular ad-hoc networks (VANET). The study uses network simulator 2.29 (NS 2.29) along with a mobility traffic network and generator to implement simulation scenarios and topologies. This helps the simulation achieve a realistic VANET mobility scenario. The required analysis of QoS performance in the vertical handover can thus be conducted. The proposed Fuzzy Logic algorithm shows improvement over the conventional algorithm (RSSThreshold) in the average percentage of handover QoS, achieving 20%, 21%, and 13% improvements in handover latency, delay, and packet loss, respectively. This is achieved by triggering a process in layers two and three that enhances handover performance.

  20. Normalized distance aggregation of discriminative features for person reidentification

    NASA Astrophysics Data System (ADS)

    Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan

    2018-03-01

    We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).

  1. Evaluating which plan quality metrics are appropriate for use in lung SBRT.

    PubMed

    Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A

    2018-02-01

    Several dose metrics in the categories of homogeneity, coverage, conformity, and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review on published plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient was performed. For each patient, using dose-volume histogram data, plan quality metric values were quantified and analysed. For the study, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07); and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001). Gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for the prescribed dose advocated by the RTOG lung SBRT protocol were met in ≥94% of cases in all categories. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were, on average, 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI Paddick), and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
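    A minimal sketch of two of the recommended metrics, computed from hypothetical volumes read off a dose-volume histogram (all volumes in cm³).

        def paddick_ci(target_volume, piv, target_in_piv):
            # Paddick conformity index: (TV_PIV)^2 / (TV * PIV); 1.0 is ideal.
            return target_in_piv ** 2 / (target_volume * piv)

        def gradient_r50(volume_50pct_isodose, target_volume):
            # R50%: ratio of the 50% prescription isodose volume to the target volume.
            return volume_50pct_isodose / target_volume

        print(paddick_ci(target_volume=12.0, piv=13.5, target_in_piv=11.6))
        print(gradient_r50(volume_50pct_isodose=52.0, target_volume=12.0))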

  2. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher-order polynomial NUC methods.
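    A minimal sketch of per-pixel polynomial non-uniformity correction, assuming a hypothetical calibration stack captured at uniform irradiance levels; the simulated gain and offset stand in for real SWIR camera data.

        import numpy as np

        rng = np.random.default_rng(2)
        n_levels, H, W = 8, 64, 64
        truth = np.linspace(100.0, 4000.0, n_levels)            # reference irradiance levels
        gain = 1.0 + 0.05 * rng.standard_normal((H, W))         # simulated per-pixel gain mismatch
        offset = 20.0 * rng.standard_normal((H, W))             # simulated per-pixel offset
        frames = gain * truth[:, None, None] + offset           # raw detector output

        # Fit a third-order polynomial per pixel mapping raw counts back to the reference levels.
        raw = frames.reshape(n_levels, -1)
        coeffs = np.array([np.polyfit(raw[:, i], truth, 3) for i in range(raw.shape[1])])

        def correct(frame):
            v = frame.ravel()
            c3, c2, c1, c0 = coeffs[:, 0], coeffs[:, 1], coeffs[:, 2], coeffs[:, 3]
            return (((c3 * v + c2) * v + c1) * v + c0).reshape(frame.shape)   # Horner evaluation

        before, after = frames[3], correct(frames[3])
        print(before.std() / before.mean(), after.std() / after.mean())       # residual non-uniformity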

  3. Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H

    2012-09-26

    Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.

  4. Pure field theories and MACSYMA algorithms

    NASA Technical Reports Server (NTRS)

    Ament, W. S.

    1977-01-01

    A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as a model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric one. Algebraic properties are exploited in top-level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found and simplifications of that theory are discussed.

  5. Exploring s-CIELAB as a scanner metric for print uniformity

    NASA Astrophysics Data System (ADS)

    Hertel, Dirk W.

    2005-01-01

    The s-CIELAB color difference metric combines the standard CIELAB metric for perceived color difference with spatial contrast sensitivity filtering. When studying the performance of digital image processing algorithms, maps of spatial color difference between 'before' and 'after' images are a measure of perceived image difference. A general image quality metric can be obtained by modeling the perceived difference from an ideal image. This paper explores the s-CIELAB concept for evaluating the quality of digital prints. Prints present the challenge that the 'ideal print', which should serve as the reference when calculating the delta E* error map, is unknown and must therefore be estimated from the scanned print. A reasonable estimate of what the ideal print 'should have been' is possible at least for images of known content such as flat fields or continuous wedges, where the error map can be calculated against a global or local mean. While such maps showing the perceived error at each pixel are extremely useful when analyzing print defects, it is desirable to statistically reduce them to a more manageable dataset. Examples of digital print uniformity are given, and the effect of specific print defects on the s-CIELAB delta E* metric is discussed.

  6. Voxel-based statistical analysis of uncertainties associated with deformable image registration

    NASA Astrophysics Data System (ADS)

    Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang

    2013-09-01

    Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM-generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI model. The results show that the Pearson correlation coefficient between UE and DIR error is r = 0.50. This is higher than r = 0.29 for IC and DIR error and r = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is r = 0.62 for the six patients and r = 0.73 for the POPI model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.

  7. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data.

    PubMed

    Schultz, Elise V; Schultz, Christopher J; Carey, Lawrence D; Cecil, Daniel J; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
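    A minimal sketch of the two verification statistics named above; the counts of hits, misses, and false alarms are hypothetical placeholders.

        def pod(hits, misses):
            # Probability of detection: fraction of observed events that were warned on.
            return hits / (hits + misses)

        def far(hits, false_alarms):
            # False alarm ratio: fraction of warnings not matched by an observed event.
            return false_alarms / (hits + false_alarms)

        print(pod(hits=42, misses=8), far(hits=42, false_alarms=14))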

  8. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data

    NASA Technical Reports Server (NTRS)

    Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.

  9. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data

    PubMed Central

    SCHULTZ, ELISE V.; SCHULTZ, CHRISTOPHER J.; CAREY, LAWRENCE D.; CECIL, DANIEL J.; BATEMAN, MONTE

    2017-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system’s performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system’s performance is evaluated with adjustments to parameter sensitivity. The system’s performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system’s performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system. PMID:29303164

  10. PONS2train: tool for testing the MLP architecture and local training methods for runoff forecast

    NASA Astrophysics Data System (ADS)

    Maca, P.; Pavlasek, J.; Pech, P.

    2012-04-01

    The purpose of the presented poster is to introduce PONS2train, a tool developed for runoff prediction via multilayer perceptron (MLP). The software application enables the implementation of 12 different MLP transfer functions, the comparison of 9 local training algorithms, and the evaluation of MLP performance via 17 selected model evaluation metrics. The PONS2train software is written in the C++ programming language. Its implementation consists of 4 classes. The NEURAL_NET and NEURON classes implement the MLP, and the CRITERIA class estimates the model evaluation metrics used for model performance evaluation on the testing and validation datasets. The DATA_PATTERN class prepares the validation, testing, and calibration datasets. The software application uses the LAPACK, BLAS and ARMADILLO C++ linear algebra libraries. PONS2train implements the following first-order local optimization algorithms: standard on-line and batch back-propagation with learning rate combined with momentum and its variants with a regularization term, Rprop, and standard batch back-propagation with variable momentum and learning rate. The second-order local training algorithms are the Levenberg-Marquardt algorithm with and without regularization and four variants of scaled conjugate gradients. Other important PONS2train features are multi-run training, weight saturation control, early stopping of training, and analysis of the MLP weights. Weight initialization is done via two different methods: random sampling from a uniform distribution on an open interval, or the Nguyen-Widrow method. Data patterns can be transformed via linear and nonlinear transformations. The runoff forecast case study focuses on the PONS2train implementation and shows different aspects of MLP training, MLP architecture estimation, neural network weight analysis, and model uncertainty estimation.
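    A minimal Python sketch, unrelated to the C++ tool itself, of the kind of experiment the poster describes: train MLPs with different local optimizers and compare them on a held-out set with several evaluation metrics, using synthetic rainfall-runoff style data.

        import numpy as np
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(8)
        X = rng.random((600, 4))                                   # e.g. lagged rainfall/runoff inputs
        y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.1 * rng.standard_normal(600)
        X_cal, X_val, y_cal, y_val = X[:400], X[400:], y[:400], y[400:]

        for solver in ("lbfgs", "sgd", "adam"):                    # stand-ins for the 9 local methods
            mlp = MLPRegressor(hidden_layer_sizes=(10,), solver=solver,
                               max_iter=5000, random_state=0).fit(X_cal, y_cal)
            pred = mlp.predict(X_val)
            print(solver, mean_squared_error(y_val, pred), r2_score(y_val, pred))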

  11. Clustered-dot halftoning with direct binary search.

    PubMed

    Goyal, Puneet; Gupta, Madhur; Staelin, Carl; Fischer, Mani; Shacham, Omri; Allebach, Jan P

    2013-02-01

    In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.

  12. Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering

    NASA Astrophysics Data System (ADS)

    Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier

    2012-01-01

    We have modified the Fuzzy C-Means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance for computing sample membership to each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more convenient similarity measure for reflectance information. The SSV distance metric considers both magnitude difference (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
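    A minimal sketch of a spectral-similarity style distance that combines a magnitude term (Euclidean distance) with a shape term (Pearson correlation), as described above; the exact normalization and weighting used by the authors may differ.

        import numpy as np

        def ssv_distance(a, b):
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            d_mag = np.linalg.norm(a - b) / np.sqrt(a.size)    # magnitude difference term
            r = np.corrcoef(a, b)[0, 1]                        # spectral shape agreement
            d_shape = np.sqrt(1.0 - r ** 2)                    # zero when the shapes match exactly
            return np.sqrt(d_mag ** 2 + d_shape ** 2)

        s1 = np.array([0.10, 0.22, 0.35, 0.41, 0.38])          # hypothetical reflectance spectra
        s2 = np.array([0.12, 0.25, 0.37, 0.44, 0.40])
        print(ssv_distance(s1, s2))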

  13. A fast parallel clustering algorithm for molecular simulation trajectories.

    PubMed

    Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui

    2013-01-15

    We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. © 2012 Wiley Periodicals, Inc.
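    A minimal CPU sketch, not the authors' GPU code, of the k-centers idea with triangle-inequality pruning: a newly added center c can only be closer to point i than its current center a if d(c, a) < 2 d(i, a), so other points skip the distance evaluation.

        import numpy as np

        def k_centers(X, k, seed=0):
            n = X.shape[0]
            centers = [int(np.random.default_rng(seed).integers(n))]
            dist = np.linalg.norm(X - X[centers[0]], axis=1)    # distance to the nearest chosen center
            assign = np.zeros(n, dtype=int)                     # index of that center in `centers`
            pruned = 0
            for _ in range(1, k):
                c = int(np.argmax(dist))                        # farthest point becomes the next center
                d_old_to_new = np.linalg.norm(X[np.array(centers)] - X[c], axis=1)
                centers.append(c)
                candidate = d_old_to_new[assign] < 2.0 * dist   # triangle-inequality test
                pruned += int(np.count_nonzero(~candidate))
                idx = np.flatnonzero(candidate)
                d_new = np.linalg.norm(X[idx] - X[c], axis=1)
                closer = d_new < dist[idx]
                dist[idx[closer]] = d_new[closer]
                assign[idx[closer]] = len(centers) - 1
            return np.array(centers), assign, pruned

        X = np.random.default_rng(3).normal(size=(20000, 10))
        centers, assign, pruned = k_centers(X, k=100)
        print(len(centers), "centers;", pruned, "distance evaluations avoided")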

  14. Methods to Calculate the Heat Index as an Exposure Metric in Environmental Health Research

    PubMed Central

    Bell, Michelle L.; Peng, Roger D.

    2013-01-01

    Background: Environmental health research employs a variety of metrics to measure heat exposure, both to directly study the health effects of outdoor temperature and to control for temperature in studies of other environmental exposures, including air pollution. To measure heat exposure, environmental health studies often use heat index, which incorporates both air temperature and moisture. However, the method of calculating heat index varies across environmental studies, which could mean that studies using different algorithms to calculate heat index may not be comparable. Objective and Methods: We investigated 21 separate heat index algorithms found in the literature to determine a) whether different algorithms generate heat index values that are consistent with the theoretical concepts of apparent temperature and b) whether different algorithms generate similar heat index values. Results: Although environmental studies differ in how they calculate heat index values, most studies’ heat index algorithms generate values consistent with apparent temperature. Additionally, most different algorithms generate closely correlated heat index values. However, a few algorithms are potentially problematic, especially in certain weather conditions (e.g., very low relative humidity, cold weather). To aid environmental health researchers, we have created open-source software in R to calculate the heat index using the U.S. National Weather Service’s algorithm. Conclusion: We identified 21 separate heat index algorithms used in environmental research. Our analysis demonstrated that methods to calculate heat index are inconsistent across studies. Careful choice of a heat index algorithm can help ensure reproducible and consistent environmental health research. Citation: Anderson GB, Bell ML, Peng RD. 2013. Methods to calculate the heat index as an exposure metric in environmental health research. Environ Health Perspect 121:1111–1119; http://dx.doi.org/10.1289/ehp.1206273 PMID:23934704
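    A minimal sketch of one commonly quoted heat index algorithm, the Rothfusz regression used by the U.S. National Weather Service (temperature in degrees Fahrenheit, relative humidity in percent). The coefficients are reproduced here from memory of NWS documentation and the low- and high-humidity adjustment terms are omitted, so the authors' R package should be preferred for research use.

        def heat_index_f(t, rh):
            return (-42.379 + 2.04901523 * t + 10.14333127 * rh
                    - 0.22475541 * t * rh - 6.83783e-3 * t * t
                    - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
                    + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

        print(round(heat_index_f(t=90.0, rh=70.0), 1))   # approximately 106 F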

  15. Energy-Based Metrics for Arthroscopic Skills Assessment.

    PubMed

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
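    A minimal sketch of leave-one-subject-out classification of normalized metrics with an SVM; the feature and label arrays are hypothetical placeholders rather than the study's energy-based features.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        n_subjects, trials_per_subject, n_metrics = 26, 6, 5
        X = rng.normal(size=(n_subjects * trials_per_subject, n_metrics))  # normalized metrics per trial
        y = np.repeat([0, 1], X.shape[0] // 2)                             # novice / expert labels
        groups = np.repeat(np.arange(n_subjects), trials_per_subject)      # subject identifiers

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
        print("mean held-out-subject accuracy:", scores.mean())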

  16. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    SUMMARY The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
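    A minimal sketch of sweeping a decision threshold and scoring a probabilistic segmentation against a reference with the Dice similarity coefficient; the synthetic maps below stand in for the tumour data and composite gold standard.

        import numpy as np

        def dice(binary_seg, reference):
            inter = np.logical_and(binary_seg, reference).sum()
            return 2.0 * inter / (binary_seg.sum() + reference.sum())

        rng = np.random.default_rng(5)
        reference = rng.random((128, 128)) < 0.2                              # placeholder gold standard
        prob_map = np.clip(reference + 0.3 * rng.standard_normal((128, 128)), 0.0, 1.0)

        thresholds = np.linspace(0.05, 0.95, 19)
        scores = [dice(prob_map >= t, reference) for t in thresholds]
        best = thresholds[int(np.argmax(scores))]                             # threshold maximizing DSC
        print(best, max(scores))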

  17. Citizen science: A new perspective to evaluate spatial patterns in hydrology.

    NASA Astrophysics Data System (ADS)

    Koch, J.; Stisen, S.

    2016-12-01

    Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. However, in practice, science still largely depends on computer-based solutions, which is inevitable given benefits such as speed and the possibility of automating processes. This study highlights the integration of this generally underused human resource into hydrology. We established a citizen science project on the zooniverse platform entitled Pattern Perception. The aim is to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, the turnout comprised more than 2,800 users who provided over 46,000 classifications of 1,095 individual subjects within 64 days of the launch. Each subject displays simulated spatial patterns of land-surface variables of a baseline model and six modelling scenarios. The citizen science data disclose a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human perception in distinguishing between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating the human perception, but clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric.

  18. Systematic study of source mask optimization and verification flows

    NASA Astrophysics Data System (ADS)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

    Source mask optimization (SMO) emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique. A systematic study of different flows and their possible unification is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, understand their commonality and differences, and provide a generic guideline for RET selection via SMO. The paper discusses three different types of variation that commonly arise in SMO, namely pattern preparation and selection, the availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.

  19. Relative Risk Reduction as a Metric to Standardize Effect Size for Public Health Interventions for Translational Research: Methods and Applications

    PubMed Central

    Mirzazadeh, A; Malekinejad, M; Kahn, JG

    2018-01-01

    Objective Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). Study Design and Setting This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of HIV Testing and Counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Results Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, e.g., consistent condom use) and undesirable (70.5%, e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR, and provide the conceptual basis for the advantages of RRR over other metrics. Conclusion Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underutilized by original and review studies. PMID:25726522
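    A minimal sketch of converting a few common effect measures to an RRR for an undesirable outcome; the odds-ratio conversion uses a widely cited approximation (attributed to Zhang and Yu) that requires an assumed baseline (control-group) risk, and the numbers are illustrative.

        def rrr_from_rr(rr):
            return 1.0 - rr

        def rrr_from_proportions(p_before, p_after):
            return (p_before - p_after) / p_before

        def rrr_from_or(odds_ratio, baseline_risk):
            rr = odds_ratio / (1.0 - baseline_risk + baseline_risk * odds_ratio)
            return 1.0 - rr

        print(rrr_from_rr(0.75))                        # risk ratio 0.75 -> RRR = 0.25
        print(rrr_from_proportions(0.40, 0.28))         # 40% -> 28%      -> RRR = 0.30
        print(rrr_from_or(0.60, baseline_risk=0.30))    # odds ratio with 30% baseline risk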

  20. Comparison of Algorithm-based Estimates of Occupational Diesel Exhaust Exposure to Those of Multiple Independent Raters in a Population-based Case–Control Study

    PubMed Central

    Friesen, Melissa C.

    2013-01-01

    Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the algorithm and the individual raters (κw = 0.58–0.81). For these metrics, the algorithm estimates had consistently higher agreement with the aggregate rating (κw = 0.82) than with the individual raters. For all metrics, the agreement between the algorithm and the aggregate ratings was highest for the unexposed category (90–93%) and was poor to moderate for the exposed categories (9–64%). Lower agreement was observed for jobs with a start year <1965 versus ≥1965. For the confidence metrics, the agreement was poor to moderate among raters (κw = 0.17–0.45) and between the algorithm and the individual raters (κw = 0.24–0.61). CART models identified patterns in the questionnaire responses that predicted a fair-to-moderate (33–89%) proportion of the disagreements between the raters’ and the algorithm estimates. Discussion: The agreement between any two raters was similar to the agreement between an algorithm-based approach and individual raters, providing additional support for using the more efficient and transparent algorithm-based approach. CART models identified some patterns in disagreements between the first rater and the algorithm. Given the absence of a gold standard for estimating exposure, these patterns can be reviewed by a team of exposure assessors to determine whether the algorithm should be revised for future studies. PMID:23184256
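    A minimal sketch of the agreement statistics used above, computed for hypothetical ordinal exposure categories from one rater and the algorithm.

        from sklearn.metrics import cohen_kappa_score

        rater = [0, 0, 1, 2, 2, 1, 0, 3, 2, 1]       # e.g. probability category per job
        algorithm = [0, 1, 1, 2, 3, 1, 0, 2, 2, 1]

        print(cohen_kappa_score(rater, algorithm))                      # unweighted kappa
        print(cohen_kappa_score(rater, algorithm, weights="linear"))    # weighted kappa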

  1. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    PubMed

    Juneja, Prabhjot; Evans, Philp M; Harris, Emma J

    2013-08-01

    Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combine the information from multiple expert outlines, to give a single metric for validation, is unclear. None consider a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric which uses experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pair-wise Dice similarity coefficient (DSC) and found to be more sensitive than the pair-wise DSC to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.

  2. Whole brain diffeomorphic metric mapping via integration of sulcal and gyral curves, cortical surfaces, and images

    PubMed Central

    Du, Jia; Younes, Laurent; Qiu, Anqi

    2011-01-01

    This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722

  3. Development of a multimetric index for integrated assessment of salt marsh ecosystem condition

    USGS Publications Warehouse

    Nagel, Jessica L.; Neckles, Hilary A.; Guntenspergen, Glenn R.; Rocks, Erika N.; Schoolmaster, Donald; Grace, James B.; Skidds, Dennis; Stevens, Sara

    2018-01-01

    Tools for assessing and communicating salt marsh condition are essential to guide decisions aimed at maintaining or restoring ecosystem integrity and services. Multimetric indices (MMIs) are increasingly used to provide integrated assessments of ecosystem condition. We employed a theory-based approach that considers the multivariate relationship of metrics with human disturbance to construct a salt marsh MMI for five National Parks in the northeastern USA. We quantified the degree of human disturbance for each marsh using the first principal component score from a principal components analysis of physical, chemical, and land use stressors. We then applied a metric selection algorithm to different combinations of about 45 vegetation and nekton metrics (e.g., species abundance, species richness, and ecological and functional classifications) derived from multi-year monitoring data. While MMIs derived from nekton or vegetation metrics alone were strongly correlated with human disturbance (r values from −0.80 to −0.93), an MMI derived from both vegetation and nekton metrics yielded an exceptionally strong correlation with disturbance (r = −0.96). Individual MMIs included from one to five metrics. The metric-assembly algorithm yielded parsimonious MMIs that exhibit the greatest possible correlations with disturbance in a way that is objective, efficient, and reproducible.

  4. Assessment of economic status in trauma registries: A new algorithm for generating population-specific clustering-based models of economic status for time-constrained low-resource settings.

    PubMed

    Eyler, Lauren; Hubbard, Alan; Juillard, Catherine

    2016-10-01

    Low and middle-income countries (LMICs) and the world's poor bear a disproportionate share of the global burden of injury. Data regarding disparities in injury are vital to inform injury prevention and trauma systems strengthening interventions targeted towards vulnerable populations, but are limited in LMICs. We aim to facilitate injury disparities research by generating a standardized methodology for assessing economic status in resource-limited country trauma registries where complex metrics such as income, expenditures, and wealth index are infeasible to assess. To address this need, we developed a cluster analysis-based algorithm for generating simple population-specific metrics of economic status using nationally representative Demographic and Health Surveys (DHS) household assets data. For a limited number of variables, g, our algorithm performs weighted k-medoids clustering of the population using all combinations of g asset variables and selects the combination of variables and number of clusters that maximize the average silhouette width (ASW). In simulated datasets containing both randomly distributed variables and "true" population clusters defined by correlated categorical variables, the algorithm selected the correct variable combination and appropriate cluster numbers unless variable correlation was very weak. When used with 2011 Cameroonian DHS data, our algorithm identified twenty economic clusters with an ASW of 0.80, indicating well-defined population clusters. This economic model for assessing health disparities will be used in the new Cameroonian six-hospital centralized trauma registry. By describing our standardized methodology and algorithm for generating economic clustering models, we aim to facilitate measurement of health disparities in other trauma registries in resource-limited countries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
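    A minimal sketch of the selection idea described above: cluster the households on every g-variable combination and candidate number of clusters, and keep the configuration with the highest average silhouette width. K-means is used here as a stand-in for the paper's weighted k-medoids, and the binary asset data are simulated.

        import itertools
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(6)
        assets = rng.integers(0, 2, size=(500, 6)).astype(float)   # hypothetical household asset indicators
        g = 3                                                      # number of variables to use

        best = None
        for cols in itertools.combinations(range(assets.shape[1]), g):
            X = assets[:, list(cols)]
            for k in range(2, 8):
                labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
                if len(set(labels)) < 2:
                    continue
                asw = silhouette_score(X, labels)                  # average silhouette width
                if best is None or asw > best[0]:
                    best = (asw, cols, k)

        print("best ASW %.2f with variables %s and %d clusters" % best)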

  5. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    NASA Astrophysics Data System (ADS)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1) with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees, and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particles' positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.

  6. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment with a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extents, which can be identified in the unscaled point cloud. Extents of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extents of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.

  7. Expanding the use of administrative claims databases in conducting clinical real-world evidence studies in multiple sclerosis.

    PubMed

    Capkun, Gorana; Lahoz, Raquel; Verdun, Elisabetta; Song, Xue; Chen, Weston; Korn, Jonathan R; Dahlke, Frank; Freitas, Rita; Fraeman, Kathy; Simeone, Jason; Johnson, Barbara H; Nordstrom, Beth

    2015-05-01

    Administrative claims databases provide a wealth of data for assessing the effect of treatments in clinical practice. Our aim was to propose methodology for real-world studies in multiple sclerosis (MS) using these databases. In three large US administrative claims databases: MarketScan, PharMetrics Plus and Department of Defense (DoD), patients with MS were selected using an algorithm identified in the published literature and refined for accuracy. Algorithms for detecting newly diagnosed ('incident') MS cases were also refined and tested. Methodology based on resource and treatment use was developed to differentiate between relapses with and without hospitalization. When various patient selection criteria were applied to the MarketScan database, an algorithm requiring two MS diagnoses at least 30 days apart was identified as the preferred method of selecting patient cohorts. Attempts to detect incident MS cases were confounded by the limited continuous enrollment of patients in these databases. Relapse detection algorithms identified similar proportions of patients in the MarketScan and PharMetrics Plus databases experiencing relapses with (2% in both databases) and without (15-20%) hospitalization in the 1 year follow-up period, providing findings in the range of those in the published literature. Additional validation of the algorithms proposed here would increase their credibility. The methods suggested in this study offer a good foundation for performing real-world research in MS using administrative claims databases, potentially allowing evidence from different studies to be compared and combined more systematically than in current research practice.
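
    A minimal sketch of the preferred cohort-selection rule (two MS diagnoses at least 30 days apart), assuming a hypothetical claims table with columns patient_id, dx_code, and service_date; the actual databases use richer enrollment and diagnosis-code logic.

```python
import pandas as pd

def select_ms_cohort(claims: pd.DataFrame) -> pd.Index:
    """Patients with >= 2 MS diagnosis claims at least 30 days apart.

    `claims` is assumed to have columns 'patient_id', 'dx_code', 'service_date'
    (hypothetical names); ICD-9 code 340 is used here as the MS diagnosis code."""
    ms = claims[claims["dx_code"] == "340"].copy()
    ms["service_date"] = pd.to_datetime(ms["service_date"])

    # First and last MS diagnosis per patient; a >= 30-day span implies two
    # qualifying diagnoses at least 30 days apart.
    span = ms.groupby("patient_id")["service_date"].agg(["min", "max"])
    eligible = span[(span["max"] - span["min"]).dt.days >= 30]
    return eligible.index
```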

  8. Network Sampling and Classification: An Investigation of Network Model Representations

    PubMed Central

    Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.

    2011-01-01

    Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that the alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on the limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773

  9. Detection of periodicity based on independence tests - III. Phase distance correlation periodogram

    NASA Astrophysics Data System (ADS)

    Zucker, Shay

    2018-02-01

    I present the Phase Distance Correlation (PDC) periodogram - a new periodicity metric, based on the Distance Correlation concept of Gábor Székely. For each trial period, PDC calculates the distance correlation between the data samples and their phases. PDC requires an adaptation of Székely's distance correlation to circular variables (phases). The resulting periodicity metric is best suited to sparse data sets, and it performs better than other methods for sawtooth-like periodicities. These include Cepheid and RR-Lyrae light curves, as well as radial velocity curves of eccentric spectroscopic binaries. The performance of the PDC periodogram in other contexts is almost as good as that of the Generalized Lomb-Scargle periodogram. The concept of phase distance correlation can also be adapted to astrometric data, and it has the potential to be suitable for large evenly spaced data sets as well, after some algorithmic refinement.
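
    A simplified sketch of the idea, assuming phases folded to [0, 1) and a wrap-around phase distance; the paper's exact circular adaptation of Székely's distance correlation may differ.

```python
import numpy as np

def distance_correlation(dx, dy):
    """Distance correlation computed from two pairwise-distance matrices."""
    def center(d):
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = center(dx), center(dy)
    dcov2 = max((A * B).mean(), 0.0)
    dvarx, dvary = (A * A).mean(), (B * B).mean()
    if dvarx <= 0 or dvary <= 0:
        return 0.0
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))

def pdc_periodogram(t, y, periods):
    """Phase distance correlation for each trial period (simplified sketch)."""
    dy = np.abs(y[:, None] - y[None, :])          # distances between data values
    scores = []
    for P in periods:
        phi = (t / P) % 1.0
        dphi = np.abs(phi[:, None] - phi[None, :])
        dphi = np.minimum(dphi, 1.0 - dphi)       # wrap-around (circular) phase distance
        scores.append(distance_correlation(dy, dphi))
    return np.array(scores)
```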

  10. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

    Representations of fronts detected by altimetry are used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily alongtrack altimetry. Then, modeled sea surface height is interpolated to the locations of the alongtrack observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained from determining the number of "extra" fronts found in the model but not in the altimetry, and from assessing the horizontal and vertical dimensions of the front in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy alongtrack altimetry observations, and to the minimum size of fronts being analyzed, is assessed.
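
    A toy version of the replication score, assuming fronts are given as boolean flags over along-track samples and that a model front within a small index tolerance of an observed front counts as replicated (the matching rule is an assumption):

```python
import numpy as np

def front_replication_score(obs_front_mask, model_front_mask, tolerance=2):
    """Percent of altimetry-detected fronts replicated by the model along one track,
    plus the count of 'extra' model-only fronts."""
    obs_idx = np.flatnonzero(obs_front_mask)
    mod_idx = np.flatnonzero(model_front_mask)
    if obs_idx.size == 0:
        return np.nan, mod_idx.size

    # Observed fronts with a model front within `tolerance` samples.
    hits = sum(np.any(np.abs(mod_idx - i) <= tolerance) for i in obs_idx)
    # Model fronts with no observed counterpart nearby.
    extra = sum(np.all(np.abs(obs_idx - j) > tolerance) for j in mod_idx)

    score = 100.0 * hits / obs_idx.size      # 0-100, higher is better
    return score, extra
```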

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  12. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.

  13. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

    The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of Patented AtheroEdge™ systems (Global Biomedical Technologies, Inc, CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided and the problem of the different distance metrics one can adopt is confronted. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and showed final segmentation results for both techniques. The overall rationale is to provide user-independent high-performance techniques suitable for screening and remote monitoring.

  14. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2013-05-01

    Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile...clustering was performed by the UPGMA method using cosine correlation as the similarity metric. For comparative purposes, clustered heat maps included...non-mTBI cases were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity

  15. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  16. A novel fair active queue management algorithm based on traffic delay jitter

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quality of data traffic delivered in the network, congestion control strategies are adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel active queue management algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and adopting a reasonable calculation of packet drop probability based on buffer queue length and traffic jitter, and it supports bursty packet traffic through the packet delay jitter, so that it can better carry media data traffic flows. JFED imposes effective punishment on non-responsive flows with a fully stateless method. To verify the performance of JFED, it is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
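
    JFED's exact drop-probability formula is not given in the abstract; the sketch below only illustrates the general idea of combining queue-length deviation with delay jitter, with illustrative weights and a linear form that are assumptions, not the paper's rule.

```python
def drop_probability(queue_len, target_len, jitter, max_jitter,
                     w_queue=0.7, w_jitter=0.3, p_max=0.1):
    """Toy drop-probability rule in the spirit of queue-length + jitter based AQM."""
    # Relative queue-length excess above the target operating point.
    q_err = max(queue_len - target_len, 0) / max(target_len, 1)
    # Normalized delay jitter of the flow.
    j_err = min(jitter / max_jitter, 1.0) if max_jitter > 0 else 0.0
    p = p_max * min(w_queue * q_err + w_jitter * j_err, 1.0)
    return min(max(p, 0.0), 1.0)

# A flow seeing a long queue and high jitter is dropped more aggressively.
print(drop_probability(queue_len=180, target_len=100, jitter=8.0, max_jitter=10.0))
```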

  17. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
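
    A generic sketch of the symmetric Hausdorff distance between two pre-aligned range images, back-projected to 3D points with assumed pinhole intrinsics; it does not reproduce the paper's O(N) 2D-range formulation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def range_image_to_points(range_img, fx=1.0, fy=1.0):
    """Back-project a 2D range image into a 3D point cloud (assumed intrinsics)."""
    h, w = range_img.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = np.isfinite(range_img) & (range_img > 0)
    z = range_img[valid]
    return np.column_stack([(u[valid] - w / 2) * z / fx,
                            (v[valid] - h / 2) * z / fy,
                            z])

def hausdorff_face_distance(probe_range, template_range):
    """Symmetric Hausdorff distance between two (pre-aligned) range images."""
    p = range_image_to_points(probe_range)
    t = range_image_to_points(template_range)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```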

  18. A fast hybrid algorithm combining regularized motion tracking and predictive search for reducing the occurrence of large displacement errors.

    PubMed

    Jiang, Jingfeng; Hall, Timothy J

    2011-04-01

    A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands. © 2011 IEEE

  19. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task because of low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information regarding an integration feature between all the overlapping images, by using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965

  20. Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection

    PubMed Central

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-01-01

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505

  1. Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.

    PubMed

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-10-27

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.
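
    The abstract casts antenna allocation as a minimum-makespan scheduling problem. The sketch below is the classical greedy longest-processing-time (LPT) list-scheduling heuristic, a standard factor-2-style approximation, not the paper's branch-and-bound or enhanced factor 2 algorithm; the per-antenna illumination costs are assumed to be given (e.g., derived from relative-entropy requirements).

```python
import heapq

def lpt_allocate(antenna_loads, num_targets):
    """Greedy LPT list scheduling: assign each antenna (a 'job' with an illumination
    cost) to the currently least-loaded target ('machine')."""
    assignment = {t: [] for t in range(num_targets)}
    heap = [(0.0, t) for t in range(num_targets)]          # (current load, target)
    heapq.heapify(heap)
    # Process antennas in decreasing order of cost.
    for ant, cost in sorted(antenna_loads.items(), key=lambda kv: -kv[1]):
        load, target = heapq.heappop(heap)
        assignment[target].append(ant)
        heapq.heappush(heap, (load + cost, target))
    return assignment

print(lpt_allocate({"a1": 3.0, "a2": 2.5, "a3": 2.0, "a4": 1.0}, num_targets=2))
```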

  2. Observability-Based Guidance and Sensor Placement

    NASA Astrophysics Data System (ADS)

    Hinson, Brian T.

    Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.

  3. Mental Health Risk Adjustment with Clinical Categories and Machine Learning.

    PubMed

    Shrestha, Akritee; Bergquist, Savannah; Montz, Ellen; Rose, Sherri

    2017-12-15

    To propose nonparametric ensemble machine learning for mental health and substance use disorders (MHSUD) spending risk adjustment formulas, including considering Clinical Classification Software (CCS) categories as diagnostic covariates over the commonly used Hierarchical Condition Category (HCC) system. 2012-2013 Truven MarketScan database. We implement 21 algorithms to predict MHSUD spending, as well as a weighted combination of these algorithms called super learning. The algorithm collection included seven unique algorithms that were supplied with three differing sets of MHSUD-related predictors alongside demographic covariates: HCC, CCS, and HCC + CCS diagnostic variables. Performance was evaluated based on cross-validated R² and predictive ratios. Results show that super learning had the best performance based on both metrics. The top single algorithm was random forests, which improved on ordinary least squares regression by 10 percent with respect to relative efficiency. CCS categories-based formulas were generally more predictive of MHSUD spending compared to HCC-based formulas. Literature supports the potential benefit of implementing a separate MHSUD spending risk adjustment formula. Our results suggest there is an incentive to explore machine learning for MHSUD-specific risk adjustment, as well as considering CCS categories over HCCs. © Health Research and Educational Trust.
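
    A minimal super-learning sketch: stack cross-validated predictions of a few base learners and fit non-negative weights to them. The two base learners stand in for the paper's 21 algorithms, and the column layout of X (demographics plus HCC/CCS indicators) is hypothetical.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def super_learner_weights(X, y, learners, cv=2):
    """Weight base learners by a non-negative least-squares fit of their
    cross-validated predictions to the observed spending (a minimal sketch)."""
    # Column j holds out-of-fold predictions of learner j.
    Z = np.column_stack([cross_val_predict(est, X, y, cv=cv) for est in learners])
    w, _ = nnls(Z, y)
    w = w / w.sum() if w.sum() > 0 else np.full(len(learners), 1.0 / len(learners))
    return w

# Hypothetical usage:
# X, y = ...  # spending data with demographic and HCC/CCS indicator columns
# w = super_learner_weights(X, y, [LinearRegression(),
#                                  RandomForestRegressor(n_estimators=100)])
```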

  4. Passive autonomous infrared sensor technology

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz

    1987-10-01

    This study was conducted in response to the DoD's need to establish an understanding of algorithm modules for passive infrared sensors and seekers and to establish a standardized, systematic procedure for applying this understanding to DoD applications. We quantified the performance of Honeywell's Background Adaptive Convexity Operator Region Extractor (BACORE) detection and segmentation modules as functions of a set of image metrics for both single-frame and multiframe processing. We established an understanding of the behavior of BACORE's internal parameters. We characterized several sets of stationary and sequential imagery and extracted TIR squared, TBIR squared, ESR, and range for each target. We generated a set of performance models for multi-frame BACORE processing that could be used to predict the behavior of BACORE in image metric space. A similar study was conducted for another of Honeywell's segmentors, namely the Texture Boundary Locator (TBL), and its performance was quantified. Finally, a comparison of TBL and BACORE on the same database and the same number of frames was made.

  5. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
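
    Two commonly used computational-performance measures in this spirit are simulated years per wall-clock day (SYPD) and core-hours per simulated year (CHSY); the paper's full metric set is broader. Their arithmetic is trivial, as sketched below.

```python
def sypd(simulated_years, wallclock_days):
    """Simulated years per wall-clock day."""
    return simulated_years / wallclock_days

def chsy(core_count, wallclock_hours, simulated_years):
    """Core-hours consumed per simulated year."""
    return core_count * wallclock_hours / simulated_years

# Example: 10 simulated years completed in 2 days on 1,152 cores.
print(sypd(10, 2.0))            # 5.0 SYPD
print(chsy(1152, 48.0, 10))     # 5529.6 CHSY
```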

  6. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    PubMed Central

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external‐beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation‐based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two “Genetic” and “Ranker” searching procedures. The performance of these algorithms has been evaluated using four‐dimensional extended cardiac‐torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro‐fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F‐test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation‐based feature selection algorithm, in combination with a genetic search algorithm, proved to yield best performance accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358

  7. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    PubMed

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two "Genetic" and "Ranker" searching procedures. The performance of these algorithms has been evaluated using four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F-test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in combination with a genetic search algorithm, proved to yield best performance accuracy for selecting optimum markers.

  8. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  9. Evaluation of a hyperspectral image database for demosaicking purposes

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Süsstrunk, Sabine

    2011-01-01

    We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluate the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluate the performance of demosaicking algorithms applied on the final sRGB color rendered image. The second scenario is the most frequently used one in the literature because CFA design and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, is indeed significant depending on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.

  10. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    PubMed

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
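
    A schematic version of such a parameter search, using variance of a discrete Laplacian as a stand-in focal quality metric; `reconstruct(eps)` is a hypothetical beamformer that returns a 2D image for a candidate average permittivity (the paper's actual metrics and imaging chain are not reproduced here).

```python
import numpy as np

def variance_of_laplacian(img):
    """A simple focal-quality metric: variance of a discrete Laplacian (stand-in choice)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def estimate_permittivity(reconstruct, candidates):
    """Pick the candidate average permittivity whose reconstruction is most 'in focus'."""
    scores = [variance_of_laplacian(np.abs(reconstruct(eps))) for eps in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```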

  11. First International Diagnosis Competition - DXC'09

    NASA Technical Reports Server (NTRS)

    Kurtoglu, tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC 09. The goal of this competition was to evaluate and compare DAs in a common platform and to determine a winner based on diagnosis results. 12 DAs (model-based and otherwise) competed in this first year of the competition in 3 tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC 09, description of faults and data sets, a listing of participating DAs, the metrics and results computed from running the DAs, and a superficial analysis of the results.

  12. EEG amplitude modulation analysis for semi-automated diagnosis of Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Falk, Tiago H.; Fraga, Francisco J.; Trambaiolli, Lucas; Anghinah, Renato

    2012-12-01

    Recent experimental evidence has suggested a neuromodulatory deficit in Alzheimer's disease (AD). In this paper, we present a new electroencephalogram (EEG) based metric to quantitatively characterize neuromodulatory activity. More specifically, the short-term EEG amplitude modulation rate-of-change (i.e., modulation frequency) is computed for five EEG subband signals. To test the performance of the proposed metric, a classification task was performed on a database of 32 participants partitioned into three groups of approximately equal size: healthy controls, patients diagnosed with mild AD, and those with moderate-to-severe AD. To gauge the benefits of the proposed metric, performance results were compared with those obtained using EEG spectral peak parameters which were recently shown to outperform other conventional EEG measures. Using a simple feature selection algorithm based on area-under-the-curve maximization and a support vector machine classifier, the proposed parameters resulted in accuracy gains, relative to spectral peak parameters, of 21.3% when discriminating between the three groups and by 50% when mild and moderate-to-severe groups were merged into one. The preliminary findings reported herein provide promising insights that automated tools may be developed to assist physicians in very early diagnosis of AD as well as provide researchers with a tool to automatically characterize cross-frequency interactions and their changes with disease.
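
    A generic way to compute the amplitude-modulation content of one EEG subband (band-pass filter, Hilbert envelope, spectrum of the envelope); the paper's exact subbands and modulation-rate features may differ from this sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

def modulation_spectrum(eeg, fs, band=(8.0, 13.0)):
    """Amplitude-modulation spectrum of one EEG subband (generic sketch).

    The subband is extracted with a Butterworth band-pass filter, its envelope
    taken via the Hilbert transform, and the envelope's spectrum (the
    modulation-frequency content) estimated with Welch's method."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    sub = filtfilt(b, a, eeg)
    env = np.abs(hilbert(sub))
    f_mod, p_mod = welch(env - env.mean(), fs=fs,
                         nperseg=min(len(env), 4 * int(fs)))
    return f_mod, p_mod
```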

  13. A support vector machine for predicting defibrillation outcomes from waveform metrics.

    PubMed

    Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J

    2014-03-01

    Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. To investigate robust methods of predicting defibrillation success in VF cardiac arrest patients, by using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was conducted and used AUC>0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. AMSA, slope and RMS waveform metrics performed test validation with AUC>0.6 for predicting termination of VF and return-to-organised rhythm. Predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were respectively 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD); sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD) respectively. AMSA, slope and RMS were the best VF waveform frequency- and time-domain predictors of termination of VF according to test validity assessment. This a priori knowledge can be used for a simplified SVM optimised design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance without requiring the definition of any threshold value on waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
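
    The three waveform metrics have widely used generic definitions; the sketch below follows those forms (the analysis band and windowing are assumptions, and the SVM stage is omitted).

```python
import numpy as np

def vf_waveform_metrics(ecg, fs, band=(2.0, 48.0)):
    """AMSA, slope and RMS amplitude of a pre-shock VF segment.

    AMSA  = sum of frequency * spectral amplitude over `band`
    slope = mean absolute first derivative of the signal
    RMS   = root-mean-square amplitude"""
    x = np.asarray(ecg, dtype=float)
    x = x - x.mean()

    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amps = np.abs(np.fft.rfft(x)) / len(x)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    amsa = np.sum(freqs[in_band] * amps[in_band])

    slope = np.mean(np.abs(np.diff(x))) * fs
    rms = np.sqrt(np.mean(x ** 2))
    return amsa, slope, rms
```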

  14. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effective and efficient than LHS, for example the simulation time required to generate 1000 behavioral parameter sets is shorter by 9 times; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, also their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, also parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced a lot with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  15. Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam

    2016-10-01

    In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time-series and is not restricted to a particular class of models unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify SOZ from the causal connectivity inferred using model-based and data-driven DI estimators respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.

  16. Scheduler Design Criteria: Requirements and Considerations

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    This presentation covers fundamental requirements and considerations for developing schedulers in airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among various optimization problems in airport operations, we focus on airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling such as node-link network model and scheduling algorithms previously developed. Next, we explain how to design a mathematical formulation in more details, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.

  17. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  18. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  19. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  20. Geographically Modified PageRank Algorithms: Identifying the Spatial Concentration of Human Movement in a Geospatial Network

    PubMed Central

    2015-01-01

    A network approach, which simplifies geographic settings as a form of nodes and links, emphasizes the connectivity and relationships of spatial features. Topological networks of spatial features are used to explore geographical connectivity and structures. The PageRank algorithm, a network metric, is often used to help identify important locations where people or automobiles concentrate in the geographical literature. However, geographic considerations, including proximity and location attractiveness, are ignored in most network metrics. The objective of the present study is to propose two geographically modified PageRank algorithms—Distance-Decay PageRank (DDPR) and Geographical PageRank (GPR)—that incorporate geographic considerations into PageRank algorithms to identify the spatial concentration of human movement in a geospatial network. Our findings indicate that in both intercity and within-city settings the proposed algorithms more effectively capture the spatial locations where people reside than traditional commonly-used network metrics. In comparing location attractiveness and distance decay, we conclude that the concentration of human movement is largely determined by the distance decay. This implies that geographic proximity remains a key factor in human mobility. PMID:26437000
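
    One way to realise a distance-decay PageRank is to attenuate each link weight by a power law of the internode distance before row-normalising and running the usual power iteration; the exact decay function and attractiveness terms used in the paper may differ from this sketch.

```python
import numpy as np

def distance_decay_pagerank(adj, dist, beta=1.0, alpha=0.85, tol=1e-10, max_iter=200):
    """PageRank with transition weights attenuated by a power-law distance decay.

    adj  : (n, n) non-negative link weights
    dist : (n, n) pairwise distances between nodes (zeros on the diagonal)"""
    n = adj.shape[0]
    decay = np.where(dist > 0, dist ** (-beta), 0.0)
    W = adj * decay

    # Row-normalise; dangling nodes fall back to a uniform distribution.
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.full_like(W, 1.0 / n), where=row_sums > 0)

    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * r @ P + (1.0 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r / r.sum()
```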

  1. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.

  2. Developing and evaluating a target-background similarity metric for camouflage detection.

    PubMed

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures and could potentially serve as a camouflage assessment tool. In this study, we quantify the relationship between the camouflage similarity index and the psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrates the effectiveness of the approach, and the correlation coefficient result of the UIQI was higher than those of other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
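
    The UIQI itself has a closed form; a single-window sketch is given below (in practice it is usually averaged over sliding windows), interpreting a value near 1 as high target-background similarity, i.e. effective camouflage.

```python
import numpy as np

def uiqi(target, background):
    """Universal Image Quality Index between a target patch and a background patch
    (global, single-window form)."""
    x = np.asarray(target, dtype=float).ravel()
    y = np.asarray(background, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    denom = (vx + vy) * (mx ** 2 + my ** 2)
    # Identical constant patches are treated as perfectly similar.
    return 4.0 * cov * mx * my / denom if denom > 0 else 1.0
```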

  3. Trajectory Guidance for Mars Robotic Precursors: Aerocapture, Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Sostaric, Ronald R.; Zumwalt, Carlie; Garcia-Llama, Eduardo; Powell, Richard; Shidner, Jeremy

    2011-01-01

    Future crewed missions to Mars require improvements in landed mass capability beyond that which is possible using state-of-the-art Mars Entry, Descent, and Landing (EDL) systems. Current systems are capable of an estimated maximum landed mass of 1-1.5 metric tons (MT), while human Mars studies require 20-40 MT. A set of technologies were investigated by the EDL Systems Analysis (SA) project to assess the performance of candidate EDL architectures. A single architecture was selected for the design of a robotic precursor mission, entitled Exploration Feed Forward (EFF), whose objective is to demonstrate these technologies. In particular, inflatable aerodynamic decelerators (IADs) and supersonic retro-propulsion (SRP) have been shown to have the greatest mass benefit and extensibility to future exploration missions. In order to evaluate these technologies and develop the mission, candidate guidance algorithms have been coded into the simulation for the purposes of studying system performance. These guidance algorithms include aerocapture, entry, and powered descent. The performance of the algorithms for each of these phases in the presence of dispersions has been assessed using a Monte Carlo technique.

  4. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    PubMed Central

    Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges

    2017-01-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of a trainee's manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm where residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate a trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed some hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test that was designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate the level of manual dexterity of each volunteer. PMID:28468268

  5. Thermodynamic cost of computation, algorithmic complexity and the information metric

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.

  6. Task-based detectability comparison of exponential transformation of free-response operating characteristic (EFROC) curve and channelized Hotelling observer (CHO)

    NASA Astrophysics Data System (ADS)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly

    2016-03-01

    This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on the application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU), and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
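
    A minimal sketch of a signal-known-exactly CHO study is shown below, assuming synthetic Gaussian-noise images and simple concentric-ring channels as a stand-in for the Gabor or Laguerre-Gauss channels typically used; the Hotelling template is built from channelized training data and the AUC is estimated from the resulting test statistics (training and testing on the same realizations, which is acceptable only for a sketch).

    ```python
    import numpy as np

    def ring_channels(size, n_channels=4):
        """Simple concentric-ring channels (a stand-in for Laguerre-Gauss/Gabor)."""
        y, x = np.mgrid[:size, :size]
        r = np.hypot(x - size / 2, y - size / 2)
        edges = np.linspace(0, size / 2, n_channels + 1)
        U = np.stack([((r >= lo) & (r < hi)).astype(float).ravel()
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
        return U / U.sum(axis=0)                       # normalize each channel

    def cho_auc(signal_imgs, noise_imgs, U):
        """Channelized Hotelling observer: template from channelized data,
        AUC estimated by pairwise comparison of the test statistics."""
        vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
        vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))   # Hotelling template
        ts, tn = vs @ w, vn @ w
        return (ts[:, None] > tn[None, :]).mean()         # P(ts > tn) ~ AUC

    rng = np.random.default_rng(1)
    size, n = 32, 200
    signal = np.zeros((size, size)); signal[12:20, 12:20] = 0.5   # known low-contrast object
    signal_imgs = rng.normal(0, 1, (n, size, size)) + signal
    noise_imgs = rng.normal(0, 1, (n, size, size))
    print(cho_auc(signal_imgs, noise_imgs, ring_channels(size)))
    ```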

  7. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
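
    The two evaluation metrics named here, AUC and the True Skill Statistic, can be computed directly from presence/absence labels and predicted suitability scores. The sketch below is a minimal illustration; the rank-based AUC estimate and the TSS-maximizing threshold are standard conventions, not details taken from this study.

    ```python
    import numpy as np

    def auc(labels, scores):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
        pos, neg = scores[labels == 1], scores[labels == 0]
        return ((pos[:, None] > neg[None, :]).mean()
                + 0.5 * (pos[:, None] == neg[None, :]).mean())

    def max_tss(labels, scores):
        """True Skill Statistic = sensitivity + specificity - 1, maximized over thresholds."""
        best = -1.0
        for thr in np.unique(scores):
            pred = scores >= thr
            sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
            spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
            best = max(best, sens + spec - 1)
        return best

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 200)                      # simulated presence/absence
    scores = labels * 0.4 + rng.random(200) * 0.6         # imperfect habitat suitability
    print(auc(labels, scores), max_tss(labels, scores))
    ```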

  8. Sweet-spot training for early esophageal cancer detection

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.

    2016-03-01

    Over the past decade, the imaging tools for endoscopists have improved drastically. This has enabled physicians to visually inspect the intestinal tissue for early signs of malignant lesions. Besides this, recent studies show the feasibility of supportive image analysis for endoscopists, but the analysis problem is typically approached as a segmentation task where binary ground truth is employed. In this study, we show that the detection of early cancerous tissue in the gastrointestinal tract cannot be approached as a binary segmentation problem and that it is crucial and clinically relevant to involve multiple experts for annotating early lesions. By employing the so-called sweet spot for training purposes as a metric, a much better detection performance can be achieved. Furthermore, a multi-expert-based ground truth, i.e. a golden standard, enables an improved validation of the resulting delineations. For this purpose, besides the sweet spot we also propose another novel metric, the Jaccard Golden Standard (JIGS), that can handle multiple ground-truth annotations. Our experiments involving these new metrics and based on the golden standard show that the performance of a detection algorithm for early neoplastic lesions in Barrett's esophagus can be increased significantly, demonstrating a 10-percentage-point increase in the resulting F1 detection score.

  9. Multiview marker-free registration of forest terrestrial laser scanner data with embedded confidence metrics

    DOE PAGES

    Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan; ...

    2016-10-18

    Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.

  10. A graph based algorithm for adaptable dynamic airspace configuration for NextGen

    NASA Astrophysics Data System (ADS)

    Savai, Mehernaz P.

    The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such a static configuration of sectors will not be able to handle the continued growth of air travel, which is projected to be more than double the current traffic by 2025. Under the initiative of the Next Generation of Air Transportation system (NextGen), the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that sectors should adapt to changing traffic so as to reduce the controller workload variance with time while increasing throughput. The change in resectorization should be such that there is a minimal increase in the exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost in order to justify a change. This leads to the analysis of the concept of transition workload, which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric which considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented for the Dynamic Airspace Configuration (DAC) Algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of the airspace under user-defined constraints and hence provides the user with more flexibility and control over the resulting partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major (busy) airports need to be given more importance and are hence treated separately. Thus the algorithm takes into account the air route structure while finding a balance between sector workloads. The performance of the proposed algorithms and performance metrics is validated with Enhanced Traffic Management System (ETMS) air traffic data.

  11. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach produced sometimes unstable HRFs. The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.

  12. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy.

    PubMed

    De Silva, Tharindu; Fenster, Aaron; Cool, Derek W; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D

    2013-02-01

    Three-dimensional (3D) transrectal ultrasound (TRUS)-guided systems have been developed to improve targeting accuracy during prostate biopsy. However, prostate motion during the procedure is a potential source of error that can cause target misalignments. The authors present an image-based registration technique to compensate for prostate motion by registering the live two-dimensional (2D) TRUS images acquired during the biopsy procedure to a preacquired 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. The authors implemented an intensity-based 2D-3D rigid registration algorithm optimizing the normalized cross-correlation (NCC) metric using Powell's method. The 2D TRUS images acquired during the procedure prior to biopsy gun firing were registered to the baseline 3D TRUS image acquired at the beginning of the procedure. The accuracy was measured by calculating the target registration error (TRE) using manually identified fiducials within the prostate; these fiducials were used for validation only and were not provided as inputs to the registration algorithm. They also evaluated the accuracy when the registrations were performed continuously throughout the biopsy by acquiring and registering live 2D TRUS images every second. This measured the improvement in accuracy resulting from performing the registration, continuously compensating for motion during the procedure. To further validate the method using a more challenging data set, registrations were performed using 3D TRUS images acquired by intentionally exerting different levels of ultrasound probe pressures in order to measure the performance of our algorithm when the prostate tissue was intentionally deformed. In this data set, biopsy scenarios were simulated by extracting 2D frames from the 3D TRUS images and registering them to the baseline 3D image. A graphics processing unit (GPU)-based implementation was used to improve the registration speed. They also studied the correlation between NCC and TREs. The root-mean-square (RMS) TRE of registrations performed prior to biopsy gun firing was found to be 1.87 ± 0.81 mm. This was an improvement over 4.75 ± 2.62 mm before registration. When the registrations were performed every second during the biopsy, the RMS TRE was reduced to 1.63 ± 0.51 mm. For 3D data sets acquired under different probe pressures, the RMS TRE was found to be 3.18 ± 1.6 mm. This was an improvement from 6.89 ± 4.1 mm before registration. With the GPU based implementation, the registrations were performed with a mean time of 1.1 s. The TRE showed a weak correlation with the similarity metric. However, the authors measured a generally convex shape of the metric around the ground truth, which may explain the rapid convergence of their algorithm to accurate results. Registration to compensate for prostate motion during 3D TRUS-guided biopsy can be performed with a measured accuracy of less than 2 mm and a speed of 1.1 s, which is an important step toward improving the targeting accuracy of a 3D TRUS-guided biopsy system.
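
    A toy version of the intensity-based step is sketched below: normalized cross-correlation between a "live" frame and a reference image, maximized over an in-plane translation with SciPy's Powell optimizer. The real registration is rigid 2D-3D with more degrees of freedom and resampled TRUS slices, so the two-parameter setup here is purely illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift
    from scipy.optimize import minimize

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return (a * b).mean()

    rng = np.random.default_rng(0)
    baseline = gaussian_filter(rng.normal(size=(64, 64)), 3)   # stands in for the 3D TRUS slice
    live = nd_shift(baseline, (2.5, -1.0))                     # live 2D frame displaced by motion

    def cost(t):
        """Negative NCC as a function of a 2-DOF in-plane translation."""
        return -ncc(nd_shift(live, (-t[0], -t[1])), baseline)

    result = minimize(cost, x0=[0.0, 0.0], method="Powell")
    print(result.x)                                            # recovered shift, roughly (2.5, -1.0)
    ```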

  13. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.

  14. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    PubMed Central

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-01-01

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474

  15. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    PubMed

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-02-23

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites' relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  16. Darwinian Spacecraft: Soft Computing Strategies Breeding Better, Faster Cheaper

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Baskaran, Subbiah

    1999-01-01

    Computers can generate vast lists of candidate combinations to try to solve a particular problem, a process called "soft computing." This process uses statistical comparisons, neural networks, genetic algorithms, fuzzy variables in uncertain environments, and flexible machine learning to create a system that increases spacecraft robustness and supports metric evaluation. These concepts enable the development of spacecraft that allow missions to be performed at lower cost.

  17. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    DTIC Science & Technology

    2014-09-01

    ... training phase and a fusion of the detector outputs. Training techniques include bagging, the basic idea of which is to train multiple classifiers ... can reduce more noise interesting points. Person detection and background subtraction methods were used to create hot regions. The hot regions were ... detection algorithms are incorporated with MHT to construct one integrated detector/tracker. The IRDS-CASIA team proposed a method to solve a ...

  18. Adaptive Voting Algorithms for the Reliable Dissemination of Data in Fault-Prone Distributed Environments

    DTIC Science & Technology

    2008-01-01

    ... the voting protocols for good performance while meeting the reliability requirements of data delivery in a high-assurance setting. Two metrics quantify the effectiveness of voting protocols: Data Transfer Efficiency (DTE) and Time-to-Complete (TTC) data delivery. DTE captures the network bandwidth ...

  19. Modeling the effects of contrast enhancement on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2008-04-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.

  20. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries called WATCHMAN, which is particularly well suited for data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
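
    The abstract states that admission and replacement use a profit metric combining the average rate of reference, the retrieved-set size, and the execution cost of the query, but not the exact formula. The sketch below assumes profit = references x execution cost / size and is an illustrative guess at the mechanism, not WATCHMAN's actual policy.

    ```python
    class ProfitCache:
        """Toy cache of retrieved query sets ranked by profit = refs * exec_cost / size.
        (The exact profit formula in WATCHMAN may differ; this is an illustrative guess.)"""

        def __init__(self, capacity):
            self.capacity = capacity                  # total size budget
            self.entries = {}                         # query -> {result, size, cost, refs}

        def _profit(self, e):
            return e["refs"] * e["cost"] / e["size"]

        def used(self):
            return sum(e["size"] for e in self.entries.values())

        def get(self, query):
            e = self.entries.get(query)
            if e:
                e["refs"] += 1                        # track the rate of reference
                return e["result"]
            return None

        def admit(self, query, result, size, cost):
            cand = {"result": result, "size": size, "cost": cost, "refs": 1}
            if size > self.capacity:
                return False
            while self.used() + size > self.capacity:
                victim = min(self.entries, key=lambda q: self._profit(self.entries[q]))
                if self._profit(self.entries[victim]) >= self._profit(cand):
                    return False                      # admission control: not worth caching
                del self.entries[victim]              # replacement: evict whole retrieved set
            self.entries[query] = cand
            return True

    cache = ProfitCache(capacity=100)
    cache.admit("Q1", "rows-1", size=60, cost=5.0)    # expensive query, large result set
    cache.admit("Q2", "rows-2", size=50, cost=0.2)    # cheap query: rejected by admission control
    print(list(cache.entries))                        # -> ['Q1']
    ```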

  1. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    PubMed

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
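
    A compact sketch of the two stages is shown below, assuming SciPy's k-means for the buildup stage; the search stage visits clusters nearest-first and prunes candidates with the bound d(q, x) >= |d(q, c) - d(x, c)| so that many exact distance computations are skipped (parameter choices and data are illustrative, not the paper's implementation).

    ```python
    import bisect
    import numpy as np
    from scipy.cluster.vq import kmeans2

    def kmknn_build(train, n_clusters=10, seed=0):
        """Buildup stage: k-means partition; store each point's distance to its
        centroid, sorted in ascending order within every cluster."""
        centroids, labels = kmeans2(train, n_clusters, minit="++", seed=seed)
        clusters = []
        for c in range(n_clusters):
            idx = np.where(labels == c)[0]
            d_xc = np.linalg.norm(train[idx] - centroids[c], axis=1)
            order = np.argsort(d_xc)
            clusters.append((idx[order], d_xc[order]))
        return centroids, clusters

    def kmknn_query(q, train, centroids, clusters, k=5):
        """Search stage: visit clusters nearest-first and use the triangle
        inequality d(q, x) >= |d(q, c) - d(x, c)| to skip distance computations."""
        d_qc = np.linalg.norm(centroids - q, axis=1)
        best = []                                         # sorted list of (distance, index)
        for c in np.argsort(d_qc):
            for i, dxc in zip(*clusters[c]):
                kth = best[-1][0] if len(best) == k else np.inf
                if dxc - d_qc[c] >= kth:
                    break                                 # later points are even farther from the centroid
                if abs(d_qc[c] - dxc) >= kth:
                    continue                              # pruned without an exact distance
                d = np.linalg.norm(train[int(i)] - q)
                if d < kth:
                    bisect.insort(best, (d, int(i)))
                    if len(best) > k:
                        best.pop()
        return best

    rng = np.random.default_rng(0)
    train = rng.normal(size=(2000, 32))
    centroids, clusters = kmknn_build(train, n_clusters=40)
    print(kmknn_query(rng.normal(size=32), train, centroids, clusters, k=3))
    ```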

  2. Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.

    PubMed

    Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N

    2017-05-01

    Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies inherent to traditional consultation models, novel productivity metrics are proposed. Further research is needed to determine optimal metrics for monitoring productivity within PPC teams. Innovative approaches should be studied with the goal of improving efficiency of care without compromising value. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  3. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a number of flights such that all the flights are covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  4. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    PubMed

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way for sensor deployment. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast but also showed a significant improvement in solution quality.
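
    The Monte Carlo evaluation function described here (random sample points, with the sample count growing over generations) can be sketched independently of the GA machinery; the field size, sensing radius, and sample schedule below are assumptions for illustration.

    ```python
    import numpy as np

    def coverage(sensors, radius, field=(100.0, 100.0), n_samples=2000, rng=None):
        """Monte Carlo estimate of the fraction of the field covered by at least one sensor."""
        rng = rng or np.random.default_rng(0)
        pts = rng.random((n_samples, 2)) * np.asarray(field)
        d2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
        return (d2.min(axis=1) <= radius ** 2).mean()

    def coverage_progressive(sensors, radius, generation, base=500, step=250):
        """Cheap-then-accurate evaluation: later GA generations get more samples,
        mirroring the progressive sampling idea described in the abstract."""
        return coverage(sensors, radius, n_samples=base + step * generation)

    rng = np.random.default_rng(1)
    deployment = rng.random((20, 2)) * 100.0        # 20 candidate sensor positions
    print(coverage_progressive(deployment, radius=12.0, generation=5))
    ```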

  5. Indexing Volumetric Shapes with Matching and Packing

    PubMed Central

    Koes, David Ryan; Camacho, Carlos J.

    2014-01-01

    We describe a novel algorithm for bulk-loading an index with high-dimensional data and apply it to the problem of volumetric shape matching. Our matching and packing algorithm is a general approach for packing data according to a similarity metric. First an approximate k-nearest neighbor graph is constructed using vantage-point initialization, an improvement to previous work that decreases construction time while improving the quality of approximation. Then graph matching is iteratively performed to pack related items closely together. The end result is a dense index with good performance. We define a new query specification for shape matching that uses minimum and maximum shape constraints to explicitly specify the spatial requirements of the desired shape. This specification provides a natural language for performing volumetric shape matching and is readily supported by the geometry-based similarity search (GSS) tree, an indexing structure that maintains explicit representations of volumetric shape. We describe our implementation of a GSS tree for volumetric shape matching and provide a comprehensive evaluation of parameter sensitivity, performance, and scalability. Compared to previous bulk-loading algorithms, we find that matching and packing can construct a GSS-tree index in the same amount of time that is denser, flatter, and better performing, with an observed average performance improvement of 2X. PMID:26085707

  6. Experimental evaluation of model predictive control and inverse dynamics control for spacecraft proximity and docking maneuvers

    NASA Astrophysics Data System (ADS)

    Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello

    2018-03-01

    An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.

  7. An improved real time image detection system for elephant intrusion along the forest border areas.

    PubMed

    Sugumar, S J; Jayaparvathy, R

    2014-01-01

    Human-elephant conflict is a major problem leading to crop damage, human death and injuries caused by elephants, and elephants being killed by humans. In this paper, we propose an automated unsupervised elephant image detection system (EIDS) as a solution to human-elephant conflict in the context of elephant conservation. The elephant's image is captured in the forest border areas and is sent to a base station via an RF network. The received image is decomposed using the Haar wavelet to obtain multilevel wavelet coefficients, with which we perform image feature extraction and similarity matching between the elephant query image and the database images using image vision algorithms. A GSM message is sent to the forest officials indicating that an elephant has been detected at the forest border and is approaching human habitat. We propose an optimized distance metric to improve the image retrieval time from the database. We compare the optimized distance metric with the popular Euclidean and Manhattan distance methods. The proposed optimized distance metric retrieves more images with a shorter retrieval time than the other distance metrics, which makes the optimized distance method more efficient and reliable.

  8. Learning a Novel Detection Metric for the Detection of O’Connell Effect Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Johnston, Kyle; Haber, Rana; Knote, Matthew; Caballero-Nieves, Saida Maria; Peter, Adrian; Petit, Véronique

    2018-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead the profession as a whole has begun to adopt more advanced computational means. Here we focus on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm for the targeted identification of eclipsing binaries which demonstrate a feature known as the O’Connell Effect. A methodology for the reduction of stellar variable observations (time-domain data) into Distribution Fields (DF) is presented. Push-Pull metric learning, a variant of LMNN learning, is used to generate a learned distance metric for the specific detection problem proposed. The metric will be trained on a set of labelled Kepler eclipsing binary data, in particular systems showing the O’Connell effect. Performance estimates will be presented, as well as the results of the detector applied to an unlabelled Kepler EB data set; this work is a crucial step in the upcoming era of big data from the next generation of big telescopes, such as LSST.

  9. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
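
    The centroid and membership updates borrowed from fuzzy C-means have a standard closed form; the sketch below shows that update step in isolation (the competitive leader stage of AFLC and its distance-ratio test are omitted, and the fuzzifier m = 2 is an assumption).

    ```python
    import numpy as np

    def fcm_update(X, centroids, m=2.0, eps=1e-9):
        """One fuzzy C-means iteration: recompute memberships, then relocate centroids.
        (AFLC applies these equations after its competitive first stage.)"""
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + eps   # (n, c)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)              # memberships, rows sum to 1
        Um = U ** m
        new_centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centroid relocation
        return U, new_centroids

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
    centroids = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(20):
        U, centroids = fcm_update(X, centroids)
    print(np.round(centroids, 2))        # two cluster prototypes near (0, 0) and (3, 3)
    ```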

  10. Mobile/android application for QRS detection using zero cross method

    NASA Astrophysics Data System (ADS)

    Rizqyawan, M. I.; Simbolon, A. I.; Suhendra, M. A.; Amri, M. F.; Kusumandari, D. E.

    2018-03-01

    In automatic ECG signal processing, one of the main topics of research is QRS complex detection. Detecting correct QRS complex or R peak is important since it is used to measure several other ECG metrics. One of the robust methods for QRS detection is Zero Cross method. This method uses an addition of high-frequency signal and zero crossing count to detect QRS complex which has a low-frequency oscillation. This paper presents an application of QRS detection using Zero Cross algorithm in the Android-based system. The performance of the algorithm in the mobile environment is measured. The result shows that this method is suitable for real-time QRS detection in a mobile application.
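
    A heavily simplified version of the zero-crossing idea is sketched below: band-pass the ECG, add an alternating high-frequency component scaled to the running signal amplitude, and flag windows where the zero-crossing count collapses because QRS energy dominates. The filter band, window length, gain, and synthetic ECG are all illustrative assumptions rather than the settings used in the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def zero_cross_qrs(ecg, fs, band=(10.0, 30.0), win=0.15, gain=4.0):
        """Simplified zero-crossing QRS detector: band-pass, add an alternating
        high-frequency component, mark windows whose zero-crossing count collapses."""
        b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        y = filtfilt(b, a, ecg)
        amp = np.convolve(np.abs(y), np.ones(fs) / fs, mode="same")     # running amplitude
        z = y + (-1.0) ** np.arange(len(y)) * gain * amp                # auxiliary oscillation
        crossings = (np.diff(np.signbit(z).astype(int)) != 0).astype(float)
        w = max(int(win * fs), 1)
        count = np.convolve(crossings, np.ones(w), mode="same")
        mask = count < 0.5 * np.median(count)                           # few crossings -> QRS candidate
        peaks, start = [], None
        for i, m in enumerate(mask):                                    # one peak per low-count region
            if m and start is None:
                start = i
            elif not m and start is not None:
                peaks.append(start + int(np.argmax(np.abs(y[start:i]))))
                start = None
        return peaks

    # Synthetic ECG: one sharp R-like spike per second on a slow baseline, fs = 250 Hz
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)
    ecg[::fs] += 1.5
    print(zero_cross_qrs(ecg, fs))       # roughly one detection near each multiple of 250 samples
    ```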

  11. Directory of Design Support Methods

    DTIC Science & Technology

    2005-08-01

    Directory entries include ATB3I, the Auditory Hazard Assessment Algorithm (AHAAH), Authoring Instructional Materials (AIM), and the Automated Neuropsychological Assessment Metrics (ANAM). ... How To Acquire: Point of Contact listed above. Title: Automated Neuropsychological Assessment Metrics (ANAM). Overall Category: Tool. ... General Overview: The Automated Neuropsychological Assessment Metrics (ANAM) is designed with emphasis on ...

  12. Was Euclid an Unnecessarily Sophisticated Psychologist?

    ERIC Educational Resources Information Center

    Arabie, Phipps

    1991-01-01

    The current state of multidimensional scaling using the city-block metric is reviewed, with attention to (1) substantive and theoretical issues; (2) recent algorithmic developments and their implications for analysis; (3) isometries with other metrics; (4) links to graph-theoretic models; and (5) prospects for future development. (SLD)

  13. Enhancement of breast periphery region in digital mammography

    NASA Astrophysics Data System (ADS)

    Menegatti Pavan, Ana Luiza; Vacavant, Antoine; Petean Trindade, Andre; Quini, Caio Cesar; Rodrigues de Pina, Diana

    2018-03-01

    Volumetric breast density has been shown to be one of the strongest risk factors for breast cancer diagnosis. This metric can be estimated using digital mammograms. During mammography acquisition, the breast is compressed and part of it loses contact with the paddle, resulting in an uncompressed peripheral region with thickness variation. Therefore, reliable density estimation in the breast periphery region is a problem, which affects the accuracy of volumetric breast density measurement. The aim of this study was to enhance the breast periphery to solve the problem of thickness variation. Herein, we present an automatic algorithm to correct breast periphery thickness without changing pixel values in the internal breast region. The correction of peripheral pixel values was based on mean values over iso-distance lines from the breast skin-line, using only adipose tissue information. The algorithm automatically detects the periphery region where thickness should be corrected. A correction factor was applied to the breast periphery image to enhance the region. We also compare our contribution with two other state-of-the-art algorithms and show its accuracy by means of different quality measures. Experienced radiologists subjectively evaluated the resulting images from the three methods in relation to the original mammogram. The mean pixel value, skewness, and kurtosis of the histograms from the three methods were used as comparison metrics. As a result, the methodology presented herein proved to be a good approach to apply before calculating volumetric breast density.

  14. Classification of complex information: inference of co-occurring affective states from their expressions in speech.

    PubMed

    Sobol-Shikler, Tal; Robinson, Peter

    2010-07-01

    We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.

  15. Riemannian Metric Optimization on Surfaces (RMOS) for Intrinsic Brain Mapping in the Laplace-Beltrami Embedding Space

    PubMed Central

    Gahm, Jin Kyu; Shi, Yonggang

    2018-01-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399

  16. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
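
    For a linear least-squares fit, the PRESS (leave-one-out) residuals used in this search metric can be obtained without refitting, as e_i / (1 - h_ii) with H the hat matrix; the sketch below compares two candidate math models by the standard deviation of these residuals (the candidate regressors are illustrative, not a balance-calibration model).

    ```python
    import numpy as np

    def press_residual_std(X, y):
        """Std. dev. of PRESS (leave-one-out) residuals for a linear model y ~ X.
        For least squares, e_loo_i = e_i / (1 - h_ii) with H = X (X'X)^-1 X'."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        H = X @ np.linalg.pinv(X.T @ X) @ X.T
        loo = e / (1.0 - np.diag(H))
        return loo.std(ddof=1)

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 60)
    y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, 60)

    # Compare two candidate math models by the search metric (smaller is better)
    X_linear = np.column_stack([np.ones_like(x), x])
    X_quad = np.column_stack([np.ones_like(x), x, x**2])
    print(press_residual_std(X_linear, y), press_residual_std(X_quad, y))
    ```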

  17. Common world model for unmanned systems: Phase 2

    NASA Astrophysics Data System (ADS)

    Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design API, data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.

  18. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2012-05-01

    ... UPGMA) based on cosine correlation of row-mean-centered log2 signal values; this was the top 50%-tile. 3) In the DA top 50%-tile, selected probe sets ... GeneMaths XT following row mean centering of log2-transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using ... hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map ...

  19. Validation of an automated seizure detection algorithm for term neonates

    PubMed Central

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event and epoch based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event based analysis, which was deemed to be acceptable in a clinical environment. Time based comparison of expert and SDA annotations using Cohen’s Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336
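
    The epoch-based agreement measure referred to here, Cohen's kappa, has a simple closed form; the sketch below computes it for two binary per-epoch annotations (the epoch length, seizure prevalence, and simulated detector errors are illustrative assumptions).

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa between two binary per-epoch annotations (1 = seizure)."""
        a, b = np.asarray(a), np.asarray(b)
        po = (a == b).mean()                                   # observed agreement
        pe = ((a == 1).mean() * (b == 1).mean()
              + (a == 0).mean() * (b == 0).mean())             # chance agreement
        return (po - pe) / (1 - pe)

    rng = np.random.default_rng(0)
    expert = (rng.random(3600) < 0.05).astype(int)             # 1-s epochs over one hour
    sda = expert.copy()
    flip = rng.random(3600) < 0.02                             # imperfect detector
    sda[flip] = 1 - sda[flip]
    print(round(cohens_kappa(expert, sda), 3))
    ```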

  20. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
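
    The graph Laplacian regularizer x^T L x added to a quadratic data-fidelity term yields the closed-form denoiser (I + lambda L) x = y; the sketch below applies it to a 1-D signal with a simple path-graph Laplacian (the paper builds the graph from self-similar pixel patches, which is not reproduced here).

    ```python
    import numpy as np

    def graph_laplacian_denoise(y, W, lam=2.0):
        """Minimize ||x - y||^2 + lam * x^T L x with L = D - W (solve (I + lam L) x = y)."""
        L = np.diag(W.sum(axis=1)) - W
        return np.linalg.solve(np.eye(len(y)) + lam * L, y)

    # Path graph over a 1-D "patch": each sample connected to its neighbours
    n = 64
    W = np.zeros((n, n))
    idx = np.arange(n - 1)
    W[idx, idx + 1] = W[idx + 1, idx] = 1.0

    rng = np.random.default_rng(0)
    clean = np.where(np.arange(n) < 32, 0.0, 1.0)       # piecewise-constant signal
    noisy = clean + rng.normal(0, 0.2, n)
    denoised = graph_laplacian_denoise(noisy, W, lam=2.0)
    print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
    ```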

  1. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    PubMed

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in the fuzzy set theory. In order to improve the time efficiency of re-ranking procedure, inverted index is successfully introduced to speed up the computation of generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
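
    The generalized Jaccard similarity used for comparing sparse contextual activation vectors is the sum of element-wise minima over the sum of element-wise maxima; the sketch below computes it and uses it to re-rank a small candidate list (the construction of the SCA vectors themselves and the inverted-index speedup are omitted).

    ```python
    import numpy as np

    def generalized_jaccard(a, b, eps=1e-12):
        """Generalized Jaccard similarity for non-negative vectors: sum(min) / sum(max)."""
        return np.minimum(a, b).sum() / (np.maximum(a, b).sum() + eps)

    def rerank(query_sca, candidate_scas, candidate_ids):
        """Re-rank an initial candidate list by SCA similarity to the query."""
        sims = [generalized_jaccard(query_sca, c) for c in candidate_scas]
        order = np.argsort(sims)[::-1]
        return [candidate_ids[i] for i in order]

    rng = np.random.default_rng(0)
    query = rng.random(50) * (rng.random(50) < 0.2)           # sparse activation vector
    cands = [query * 0.9 + 0.01,                              # near-duplicate of the query
             rng.random(50) * (rng.random(50) < 0.2),         # unrelated sparse vector
             np.zeros(50)]                                    # empty activation
    print(rerank(query, cands, ["imgA", "imgB", "imgC"]))     # imgA ranked first
    ```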

  2. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; such networks are known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while simultaneously achieving higher system throughput in stringently resource constrained WVSNs.

  3. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    NASA Astrophysics Data System (ADS)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparison of the performance of several SEB models at the same site is limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), Surface Energy Balance System (SEBS), Simplified Surface Energy Balance Index (S-SEBI), and operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes during the partitioning of available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, the seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than the seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and the seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than the seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the necessity of including a soil moisture or plant water stress component in SEB models to improve their performance, especially under very dry or very wet conditions.

  4. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity homogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. Totally 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms’ similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685

  5. Evaluation of motion artifact metrics for coronary CT angiography.

    PubMed

    Ma, Hongfeng; Gros, Eric; Szabo, Aniko; Baginski, Scott G; Laste, Zachary R; Kulkarni, Naveen M; Okerlund, Darin; Schmidt, Taly G

    2018-02-01

    This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for Coronary Computed Tomography Angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter and with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. The Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifact metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), 0.77 (LIRS), where higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (Transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. The metrics of FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion in Coronary Computed Tomography Angiography (CCTA) images. © 2017 American Association of Physicists in Medicine.
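
    The agreement statistic itself is standard; a short sketch using scipy.stats.kendalltau on hypothetical per-image metric values and reader scores is shown below.

      from scipy.stats import kendalltau

      # Hypothetical per-image motion-artifact metric values and reader rank scores
      metric_values = [0.12, 0.45, 0.33, 0.80, 0.56]
      reader_scores = [1, 3, 2, 5, 4]

      tau, p_value = kendalltau(metric_values, reader_scores)
      print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")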

  6. A hybrid clustering and classification approach for predicting crash injury severity on rural roads.

    PubMed

    Hasheminejad, Seyed Hessam-Allah; Zahedi, Mohsen; Hasheminejad, Seyed Mohammad Hossein

    2018-03-01

    As a threat to transportation systems, traffic crashes have a wide range of social consequences for governments. Traffic crashes are increasing in developing countries, and Iran, as a developing country, is not immune from this risk. There are several studies in the literature that predict traffic crash severity based on artificial neural networks (ANNs), support vector machines and decision trees. This paper attempts to investigate the crash injury severity of rural roads by using a hybrid clustering and classification approach to compare the performance of classification algorithms before and after applying the clustering. In this paper, a novel rule-based genetic algorithm (GA) is proposed to predict crash injury severity, which is evaluated by performance criteria in comparison with classification algorithms like ANN. The results obtained from analysis of 13,673 crashes (5600 property damage, 778 fatal crashes, 4690 slight injuries and 2605 severe injuries) on rural roads in Tehran Province of Iran during 2011-2013 revealed that the proposed GA method outperforms other classification algorithms based on classification metrics like precision (86%), recall (88%) and accuracy (87%). Moreover, the proposed GA method has the highest level of interpretation, is easy to understand and provides feedback to analysts.
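
    The reported classification metrics can be reproduced with off-the-shelf tools; the sketch below uses scikit-learn on hypothetical labels, simplified to a binary severity outcome rather than the four classes used in the study.

      from sklearn.metrics import accuracy_score, precision_score, recall_score

      # Hypothetical injury-severity labels (0 = property damage only, 1 = injury/fatal)
      y_true = [1, 0, 1, 1, 0, 0, 1, 0]
      y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

      print("precision:", precision_score(y_true, y_pred))
      print("recall:   ", recall_score(y_true, y_pred))
      print("accuracy: ", accuracy_score(y_true, y_pred))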

  7. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  8. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
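
    A minimal sketch of a generic frequency-domain minimum-variance (MVDR-style) weight computation for one frequency bin, w = R^{-1}d / (d^H R^{-1}d); this is the textbook formulation rather than the authors' exact FMV implementation, and the steering vector and covariance below are hypothetical.

      import numpy as np

      def fmv_weights(R, d, loading=1e-3):
          # Minimum-variance weights for one frequency bin: w = R^{-1} d / (d^H R^{-1} d).
          # R: (M, M) interference-plus-noise covariance; d: steering vector toward the
          # known target direction. Diagonal loading keeps the inverse well conditioned.
          M = R.shape[0]
          R_inv = np.linalg.inv(R + loading * np.trace(R).real / M * np.eye(M))
          num = R_inv @ d
          return num / (np.conj(d) @ num)

      # Hypothetical two-microphone (binaural) example at a single frequency bin
      d = np.array([1.0, np.exp(-1j * 0.4)])
      R = np.array([[1.0, 0.6 + 0.2j],
                    [0.6 - 0.2j, 1.2]])
      w = fmv_weights(R, d)
      print(np.conj(w) @ d)   # distortionless response toward the target (approximately 1)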

  9. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2018-01-29

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  10. Bridge health monitoring metrics : updating the bridge deficiency algorithm.

    DOT National Transportation Integrated Search

    2009-10-01

    As part of its bridge management system, the Alabama Department of Transportation (ALDOT) must decide how best to spend its bridge replacement funds. In making these decisions, ALDOT managers currently use a deficiency algorithm to rank bridges that ...

  11. Evaluation of SEBS, SEBAL, and METRIC models in estimation of the evaporation from the freshwater lakes (Case study: Amirkabir dam, Iran)

    NASA Astrophysics Data System (ADS)

    Zamani Losgedaragh, Saeideh; Rahimzadegan, Majid

    2018-06-01

    Evapotranspiration (ET) estimation is of great importance due to its key role in water resource management. Surface energy modeling tools such as the Surface Energy Balance Algorithm for Land (SEBAL), Mapping Evapotranspiration with Internalized Calibration (METRIC), and the Surface Energy Balance System (SEBS) can estimate the amount of evapotranspiration for every pixel of a satellite image. The main objective of this research is to investigate evaporation from freshwater bodies using SEBAL, METRIC, and SEBS. For this purpose, the Amirkabir dam reservoir and its nearby agricultural lands in a semi-arid climate were selected and studied from 2011 to 2017 as the study area. The analysis was carried out on 16 Landsat TM5 and OLI satellite images, on which SEBAL, METRIC, and SEBS were implemented. Moreover, the corresponding pan evaporation measurements on the reservoir bank were considered as the ground truth data. According to the results, SEBAL is not a reliable method for evaluating freshwater evaporation, with a coefficient of determination (R2) of 0.36 and a Root Mean Square Error (RMSE) of 5.1 mm. On the other hand, METRIC (R2 of 0.57 and RMSE of 2.02 mm) and SEBS (RMSE of 0.93 and R2 of 0.62) demonstrated relatively good performance.

  12. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.

  13. Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different Artificial Neural Network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation, as suggested by the United States Bureau of Mines (USBM), to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation in three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R2). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE=2.18, MAPE=1.73% and R2=0.97. The results show that ANN models trained using PSO are capable of predicting AOp with great accuracy.
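
    The three performance metrics are straightforward to compute; a sketch on hypothetical measured and predicted AOp values follows.

      import numpy as np

      def rmse(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
          return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

      def mape(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
          return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

      def r_squared(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
          ss_res = np.sum((y_true - y_pred) ** 2)
          ss_tot = np.sum((y_true - y_true.mean()) ** 2)
          return float(1.0 - ss_res / ss_tot)

      # Hypothetical measured vs. predicted air overpressure values (dB)
      measured  = [120.4, 115.2, 130.8, 125.1, 118.6]
      predicted = [121.0, 114.0, 129.5, 126.3, 119.2]
      print(rmse(measured, predicted), mape(measured, predicted), r_squared(measured, predicted))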

  14. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  15. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  16. An Elegant Sufficiency: Load-Aware Differentiated Scheduling of Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kettimuthu, Rajkumar; Vardoyan, Gayane; Agrawal, Gagan

    2015-11-15

    We investigate the file transfer scheduling problem, where transfers among different endpoints must be scheduled to maximize pertinent metrics. We propose two new algorithms that exploit the fact that the aggregate bandwidth obtained over a network or at a storage system tends to increase with the number of concurrent transfers—but only up to a certain limit. The first algorithm, SEAL, uses runtime information and data-driven models to approximate system load and adapt transfer schedules and concurrency so as to maximize performance while avoiding saturation. We implement this algorithm using GridFTP as the transfer protocol and evaluate it using real transfer logs in a production WAN environment. Results show that SEAL can improve average slowdowns and turnaround times by up to 25% and worst-case slowdown and turnaround times by up to 50%, compared with the best-performing baseline scheme. Our second algorithm, STEAL, further leverages user-supplied categorization of transfers as either “interactive” (requiring immediate processing) or “batch” (less time-critical). Results show that STEAL reduces the average slowdown of interactive transfers by 63% compared to the best-performing baseline and by 21% compared to SEAL. For batch transfers, compared to the best-performing baseline, STEAL improves by 18% the utilization of the bandwidth unused by interactive transfers. By elegantly ensuring a sufficient, but not excessive, allocation of concurrency to the right transfers, we significantly improve overall performance despite constraints.

  17. Model-based color halftoning using direct binary search.

    PubMed

    Agar, A Ufuk; Allebach, Jan P

    2005-12-01

    In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.

  18. All metrics are equal, but some metrics are more equal than others: A systematic search and review on the use of the term ‘metric’

    PubMed Central

    Prodinger, Birgit; Fellinghauer, Carolina Saskia; Tennant, Alan

    2018-01-01

    Objective To examine the use of the term ‘metric’ in health and social sciences’ literature, focusing on the interval scale implication of the term in Modern Test Theory (MTT). Materials and methods A systematic search and review on MTT studies including ‘metric’ or ‘interval scale’ was performed in the health and social sciences literature. The search was restricted to 2001–2005 and 2011–2015. A Text Mining algorithm was employed to operationalize the eligibility criteria and to explore the uses of ‘metric’. The paradigm of each included article (Rasch Measurement Theory (RMT), Item Response Theory (IRT) or both), as well as its type (Theoretical, Methodological, Teaching, Application, Miscellaneous) were determined. An inductive thematic analysis on the first three types was performed. Results 70.6% of the 1337 included articles were allocated to RMT, and 68.4% were application papers. Among its many uses, ‘metric’ was predominantly a synonym for ‘scale’; as an adjective, it referred to measurement or quantification. Three incompatible themes ‘only RMT/all MTT/no MTT models can provide interval measures’ were identified, but ‘interval scale’ was mentioned considerably more often in RMT than in IRT. Conclusion ‘Metric’ is used in many different ways, and there is no consensus on which MTT metric has interval scale properties. Nevertheless, when using the term ‘metric’, authors should specify the level of the metric being used (ordinal, ordered, interval, ratio), and justify why, in their view, the metric is at that level. PMID:29509813

  19. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram.

    PubMed

    Hossain, Mohammad E; Jassim, Wissam A; Zilany, Muhammad S A

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants.
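
    A minimal sketch of a direct FFT-based bispectrum estimate, B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], averaged over overlapping windowed segments; the segment length, window and random test signal are illustrative, and the paper applies such third-order statistics to neurogram features rather than a raw waveform.

      import numpy as np

      def bispectrum(x, nfft=64, seg_len=64, hop=32):
          # Direct bispectrum estimate averaged over overlapping Hann-windowed segments.
          segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, hop)]
          B = np.zeros((nfft, nfft), dtype=complex)
          for s in segs:
              X = np.fft.fft(s * np.hanning(seg_len), nfft)
              f = np.arange(nfft)
              idx = (f[:, None] + f[None, :]) % nfft      # wrapped frequency index f1 + f2
              B += X[:, None] * X[None, :] * np.conj(X[idx])
          return B / len(segs)

      rng = np.random.default_rng(0)
      x = rng.standard_normal(1024)
      B = bispectrum(x)
      print(B.shape, np.abs(B).max())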

  20. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram

    PubMed Central

    Hossain, Mohammad E.; Jassim, Wissam A.; Zilany, Muhammad S. A.

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants. PMID:26967160

  1. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.

  2. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation.

    PubMed

    Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D

    2018-02-22

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
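
    Of the listed metrics, power spectral entropy (PSE) and percent occupancy (PO) are easy to illustrate; the sketch below computes both from a periodogram, with the FFT length, threshold and test signal chosen arbitrarily.

      import numpy as np

      def power_spectral_density(x, nfft=256):
          X = np.fft.rfft(x, nfft)
          return np.abs(X) ** 2

      def power_spectral_entropy(psd):
          # Shannon entropy (bits) of the normalized power spectral density.
          p = psd / psd.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def percent_occupancy(psd, threshold):
          # Fraction of frequency bins whose power exceeds a detection threshold.
          return float((psd > threshold).mean())

      rng = np.random.default_rng(1)
      sig = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + 0.1 * rng.standard_normal(1024)
      psd = power_spectral_density(sig)
      print(power_spectral_entropy(psd), percent_occupancy(psd, threshold=1.0))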

  3. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation

    PubMed Central

    Pooler, Richard K.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.

    2018-01-01

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE). PMID:29470448

  4. Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection

    PubMed Central

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Background Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and that it could potentially serve as a camouflage assessment tool. Methodology In this study, we quantify the relationship between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. Significance The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. PMID:24498310
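
    A minimal sketch of the Universal Image Quality Index in its global form (the original index averages this quantity over sliding windows); the target/background pair below is synthetic.

      import numpy as np

      def uiqi(x, y):
          # Global Universal Image Quality Index between two equally sized gray-level images.
          x = np.asarray(x, float).ravel()
          y = np.asarray(y, float).ravel()
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

      target = np.random.rand(32, 32)
      background = target + 0.05 * np.random.randn(32, 32)   # hypothetical camouflaged pair
      print(uiqi(target, background))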

  5. Skin exposure to aliphatic polyisocyanates in the auto body repair and refinishing industry: III. A personal exposure algorithm.

    PubMed

    Liu, Youcheng; Stowe, Meredith H; Bello, Dhimiter; Sparer, Judy; Gore, Rebecca J; Cullen, Mark R; Redlich, Carrie A; Woskie, Susan R

    2009-01-01

    Isocyanate skin exposure may play an important role in sensitization and the development of isocyanate asthma, but such exposures are frequently intermittent and difficult to assess. Exposure metrics are needed to better estimate isocyanate skin exposures. The goal of this study was to develop a semiquantitative algorithm to estimate personal skin exposures in auto body shop workers using task-based skin exposure data and daily work diaries. The relationship between skin and respiratory exposure metrics was also evaluated. The development and results of respiratory exposure metrics were previously reported. Using the task-based data obtained with a colorimetric skin exposure indicator and a daily work diary, we developed a skin exposure algorithm to estimate a skin exposure index (SEI) for each worker. This algorithm considered the type of personal protective equipment (PPE) used, the percentage of skin area covered by PPE and skin exposures without and underneath the PPE. The SEI was summed across the day (daily SEI) and survey week (weekly average SEI) for each worker, compared among the job title categories and also compared with the respiratory exposure metrics. A total of 893 person-days was calculated for 232 workers (49 painters, 118 technicians and 65 office workers) from 33 auto body shops. The median (10th-90th percentile, maximum) daily SEI was 0 (0-0, 1.0), 0 (0-1.9, 4.8) and 1.6 (0-3.5, 6.1) and weekly average SEI was 0 (0-0.0, 0.7), 0.3 (0-1.6, 4.2) and 1.9 (0.4-3.0, 3.6) for office workers, technicians and painters, respectively, which were significantly different (P < 0.0001). The median (10th-90th percentile, maximum) daily SEI was 0 (0-2.4, 6.1) and weekly average SEI was 0.2 (0-2.3, 4.2) for all workers. A relatively weak positive Spearman correlation was found between daily SEI and time-weighted average (TWA) respiratory exposure metrics (microg NCO m(-3)) (r = 0.380, n = 893, P < 0.0001) and between weekly SEI and TWA respiratory exposure metrics (r = 0.482, n = 232, P < 0.0001). The skin exposure algorithm developed in this study provides task-based personal daily and weekly average skin exposure indices that are adjusted for the use of PPE. These skin exposure indices can be used to assess isocyanate exposure-response relationships.

  6. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    PubMed

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S-MUSIC and IES-MUSIC.
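
    For reference, a compact sketch of the classical MUSIC scan for a uniform linear array is shown below; RAP-MUSIC differs in that each located source is projected out before the scan is repeated, which is not implemented here. The array geometry, source directions and noise level are hypothetical.

      import numpy as np

      def music_spectrum(R, n_sources, angles_deg, spacing=0.5):
          # Classical MUSIC pseudospectrum for a uniform linear array.
          # R: (M, M) sample covariance; spacing: element spacing in wavelengths.
          M = R.shape[0]
          _, eigvecs = np.linalg.eigh(R)
          En = eigvecs[:, :M - n_sources]          # noise subspace (smallest eigenvalues)
          spectrum = []
          for theta in np.deg2rad(angles_deg):
              a = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(theta))
              spectrum.append(1.0 / np.real(np.conj(a) @ En @ En.conj().T @ a))
          return np.array(spectrum)

      M, n_src = 6, 2
      rng = np.random.default_rng(2)
      steer = lambda deg: np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(deg)))
      snaps = (steer(-20)[:, None] * rng.standard_normal(200)
               + steer(30)[:, None] * rng.standard_normal(200)
               + 0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))))
      R = snaps @ snaps.conj().T / 200
      angles = np.arange(-90, 91)
      P = music_spectrum(R, n_src, angles)
      print(angles[P.argmax()])   # the strongest peak should lie near one of the true directions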

  8. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  9. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) Decrease the time to compute comparison statistics and plots from minutes to seconds; (2) Allow for interactive exploration of time-series properties over seasons and years; (3) Decrease the time for satellite data ingestion into RCMES to hours; (4) Allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) Move RCMES into a near real time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.

  10. A dose error evaluation study for 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The used transformation metric value is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, so called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM. DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.

  11. A dose error evaluation study for 4D dose calculations.

    PubMed

    Milz, Stefan; Wilkens, Jan J; Ullrich, Wolfgang

    2014-11-07

    Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The used transformation metric value is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, so called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM. DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.

  12. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

    Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal/noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which is helpful for handling the speckle noise and preserving the geometric continuity of US images. In the experiment, a series of US images and several similarity measure metrics are utilized for evaluating the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, quickly and automatically, and is robust to noise.
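
    A minimal sketch of one iteration of a basic Thirion-style demons update on 2-D images, for orientation only; the paper's contribution is the additional "inertia force" term derived from the Moore-neighborhood motion trend, which is not implemented here. The image pair and smoothing parameter are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def demons_step(fixed, moving, displacement, smooth_sigma=1.0):
          # One optical-flow demons update; displacement has shape fixed.shape + (2,).
          gy, gx = np.gradient(fixed)
          diff = moving - fixed
          denom = gx ** 2 + gy ** 2 + diff ** 2
          denom[denom == 0] = 1.0
          displacement[..., 0] += gaussian_filter(diff * gx / denom, smooth_sigma)
          displacement[..., 1] += gaussian_filter(diff * gy / denom, smooth_sigma)
          return displacement

      fixed = np.random.rand(64, 64)
      moving = np.roll(fixed, 2, axis=1)            # hypothetical misaligned pair
      disp = demons_step(fixed, moving, np.zeros(fixed.shape + (2,)))
      print(disp[..., 0].mean(), disp[..., 1].mean())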

  13. Development and experimentation of LQR/APF guidance and control for autonomous proximity maneuvers of multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Bevilacqua, R.; Lehmann, T.; Romano, M.

    2011-04-01

    This work introduces a novel control algorithm for close proximity multiple spacecraft autonomous maneuvers, based on hybrid linear quadratic regulator/artificial potential function (LQR/APF), for applications including autonomous docking, on-orbit assembly and spacecraft servicing. Both theoretical developments and experimental validation of the proposed approach are presented. Fuel consumption is sub-optimized in real-time through re-computation of the LQR at each sample time, while performing collision avoidance through the APF and a high level decisional logic. The underlying LQR/APF controller is integrated with a customized wall-following technique and a decisional logic, overcoming problems such as local minima. The algorithm is experimentally tested on a four spacecraft simulators test bed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School. The metrics to evaluate the control algorithm are: autonomy of the system in making decisions, successful completion of the maneuver, required time, and propellant consumption.
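
    A minimal sketch of the two ingredients named above: an LQR gain obtained from the continuous-time algebraic Riccati equation (via scipy.linalg.solve_continuous_are) combined with a classical repulsive artificial-potential term. The double-integrator dynamics, weights, obstacle position and gains are illustrative assumptions; the paper's wall-following technique and decisional logic are not reproduced here.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          # Continuous-time LQR gain K = R^{-1} B^T P from the algebraic Riccati equation.
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)

      def apf_repulsion(pos, obstacle, k_rep=1.0, rho0=2.0):
          # Repulsive force of a classical artificial potential around an obstacle;
          # zero outside the influence radius rho0 (illustrative APF term).
          diff = pos - obstacle
          rho = np.linalg.norm(diff)
          if rho >= rho0 or rho == 0.0:
              return np.zeros_like(pos)
          return k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 3 * diff

      # Hypothetical double-integrator relative dynamics, state x = [position; velocity]
      A = np.block([[np.zeros((2, 2)), np.eye(2)],
                    [np.zeros((2, 2)), np.zeros((2, 2))]])
      B = np.vstack([np.zeros((2, 2)), np.eye(2)])
      K = lqr_gain(A, B, np.eye(4), 0.1 * np.eye(2))
      x = np.array([5.0, -3.0, 0.0, 0.0])
      u = -K @ x + apf_repulsion(x[:2], np.array([4.0, -2.0]))
      print(u)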

  14. A new mathematical modelling based shape extraction technique for Forensic Odontology.

    PubMed

    G, Jaffino; A, Banumathi; Gurunathan, Ulaganathan; B, Vijayakumari; J, Prabin Jose

    2017-04-01

    Forensic Odontology is a specific means of identifying a deceased person, particularly in fatality incidents. An algorithm is proposed to identify a person by comparing both postmortem (PM) and antemortem (AM) dental radiographs and photographs. This work aims to introduce a new mathematical algorithm for photographs in addition to radiographs. An isoperimetric graph partitioning method is used to extract the shape of dental images in forensic identification. Shape matching is done by comparing AM and PM dental images using both similarity and distance measures. Experimental results show that better matching performance is observed with distance metrics than with similarity measures. The results of this algorithm show that a high hit rate is observed for distance based performance measures and it is well suited for forensic odontologists to identify a person. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia.

    PubMed

    Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Dencker Johansen, Mette; Christensen, Toke Folke; Tarnow, Lise; Christiansen, Jens Sandahl; Hejlesen, Ole

    2014-10-01

    The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD) sample-based sensitivity for the hypoglycemic threshold of 70 mg/dL was 31% (33%) for the Guardian RT algorithm and 70% (33%) for the new algorithm. The mean (SD) sample-based specificity at the same hypoglycemic threshold was 95% (8%) for the Guardian RT algorithm and 90% (16%) for the new calibration algorithm. The sensitivity of the event-based hypoglycemia detection for the hypoglycemic threshold of 70 mg/dL was 61% for the Guardian RT calibration and 89% for the new calibration algorithm. Application of the new calibration caused one false-positive instance for the event-based hypoglycemia detection, whereas the Guardian RT caused no false-positive instances. The overestimation of plasma glucose by CGM was corrected from 33.2 mg/dL in the Guardian RT algorithm to 21.9 mg/dL in the new calibration algorithm. The results suggest that the new algorithm may reduce the inaccuracy of Guardian RT CGM system within the hypoglycemic range; however, data from a larger number of patients are required to compare the clinical reliability of the two algorithms.
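
    A minimal sketch of the accuracy metrics used above: per-sample absolute relative deviation and sample-based hypoglycemia sensitivity/specificity at a 70 mg/dL threshold, applied to hypothetical paired CGM and plasma glucose values.

      import numpy as np

      def absolute_relative_deviation(cgm, reference):
          # Per-sample absolute relative deviation (%) of CGM readings from plasma glucose.
          cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
          return 100.0 * np.abs(cgm - reference) / reference

      def hypo_sensitivity_specificity(cgm, reference, threshold=70.0):
          # Sample-based sensitivity/specificity for detecting glucose below the threshold.
          cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
          cgm_hypo, ref_hypo = cgm < threshold, reference < threshold
          sens = (cgm_hypo & ref_hypo).sum() / max(ref_hypo.sum(), 1)
          spec = (~cgm_hypo & ~ref_hypo).sum() / max((~ref_hypo).sum(), 1)
          return sens, spec

      ref = [65, 72, 90, 55, 110]      # hypothetical plasma glucose (mg/dL)
      cgm = [70, 75, 95, 60, 100]      # hypothetical paired CGM readings
      ard = absolute_relative_deviation(cgm, ref)
      print(np.median(ard), np.mean(ard))
      print(hypo_sensitivity_specificity(cgm, ref))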

  16. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied are able to outperform, with statistical significance, the statistical segmentation algorithm, although they perform reasonably well considering their simplicity.
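
    A typical similarity metric for this kind of automatic-versus-manual comparison is the Dice overlap; a short sketch on synthetic binary masks follows (the metric choice here is illustrative, not necessarily the one used in the study).

      import numpy as np

      def dice_coefficient(auto_mask, manual_mask):
          # Dice overlap between an automatic and a manual binary segmentation.
          a = np.asarray(auto_mask, bool)
          m = np.asarray(manual_mask, bool)
          denom = a.sum() + m.sum()
          return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0

      auto = np.zeros((64, 64), bool)
      auto[20:40, 20:40] = True
      manual = np.zeros((64, 64), bool)
      manual[22:42, 18:38] = True
      print(dice_coefficient(auto, manual))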

  17. Accurate and diverse recommendations via eliminating redundant correlations

    NASA Astrophysics Data System (ADS)

    Zhou, Tao; Su, Ri-Qi; Liu, Run-Ran; Jiang, Luo-Luo; Wang, Bing-Hong; Zhang, Yi-Cheng

    2009-12-01

    In this paper, based on a weighted projection of a bipartite user-object network, we introduce a personalized recommendation algorithm, called network-based inference (NBI), which has higher accuracy than the classical algorithm, namely collaborative filtering. In NBI, the correlation resulting from a specific attribute may be repeatedly counted in the cumulative recommendations from different objects. By considering the higher order correlations, we design an improved algorithm that can, to some extent, eliminate the redundant correlations. We test our algorithm on two benchmark data sets, MovieLens and Netflix. Compared with NBI, the algorithmic accuracy, measured by the ranking score, can be further improved by 23 per cent for MovieLens and 22 per cent for Netflix. The present algorithm can even outperform the Latent Dirichlet Allocation algorithm, which requires much longer computational time. Furthermore, most previous studies considered the algorithmic accuracy only; in this paper, we argue that the diversity and popularity, as two significant criteria of algorithmic performance, should also be taken into account. With more or less the same accuracy, an algorithm giving higher diversity and lower popularity is more favorable. Numerical results show that the present algorithm can outperform the standard one simultaneously in all five adopted metrics: lower ranking score and higher precision for accuracy, larger Hamming distance and lower intra-similarity for diversity, as well as smaller average degree for popularity.
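
    For readers unfamiliar with NBI, its core is a two-step resource-spreading projection of the user-object bipartite network. The sketch below is a minimal NumPy rendition of a ProbS/NBI-style projection; the exact normalization used in the paper may differ, and zero-degree users or objects would need guarding in practice.

    ```python
    import numpy as np

    def nbi_scores(A):
        """Network-based inference scores for a binary user-object interaction matrix.

        A : (m users, n objects) 0/1 matrix; all user and object degrees assumed > 0.
        Returns an (m, n) score matrix for ranking objects not yet collected.
        """
        A = np.asarray(A, dtype=float)
        k_user = A.sum(axis=1)            # user degrees, shape (m,)
        k_obj = A.sum(axis=0)             # object degrees, shape (n,)
        # Step 1: each object spreads its resource equally to the users who collected it.
        user_res = (A / k_obj) @ A.T      # (m, m)
        # Step 2: each user spreads the received resource equally back to its objects.
        scores = (user_res / k_user) @ A  # (m, n)
        scores[A > 0] = -np.inf           # mask objects the user already collected
        return scores
    ```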

  18. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
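
    A hedged sketch of the kind of "off the shelf", non-dictionary pipeline described above, using scikit-learn: n-gram features are extracted directly from the plaintext reports and a simple classifier is scored with the same performance metrics. The vectorizer settings, classifier, and split are illustrative assumptions, not the authors' exact configuration; labels are assumed to be 0/1 with 1 = cancer.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    def evaluate_plaintext_classifier(reports, labels):
        """Train and score a bag-of-words cancer/no-cancer classifier on free-text reports."""
        X_train, X_test, y_train, y_test = train_test_split(
            reports, labels, test_size=0.3, stratify=labels, random_state=0)
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # features from the reports themselves
            LogisticRegression(max_iter=1000))
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        prob = model.predict_proba(X_test)[:, 1]
        return {
            "sensitivity": recall_score(y_test, pred),              # recall of the positive class
            "specificity": recall_score(y_test, pred, pos_label=0), # recall of the negative class
            "ppv": precision_score(y_test, pred),
            "accuracy": accuracy_score(y_test, pred),
            "auc": roc_auc_score(y_test, prob),
        }
    ```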

  19. Robust estimation of adaptive tensors of curvature by tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  20. Quantum load balancing in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Hasanpour, M.; Shariat, S.; Barnaghi, P.; Hoseinitabatabaei, S. A.; Vahid, S.; Tafazolli, R.

    2017-06-01

    This paper presents a novel approach to load balancing in ad hoc networks that utilizes the properties of quantum game theory. This approach benefits from the instantaneous and information-less capability of entangled particles to synchronize the load balancing strategies in ad hoc networks. The quantum load balancing (QLB) algorithm proposed by this work is implemented on top of OLSR as the baseline routing protocol; its performance is analyzed against the baseline OLSR, and considerable gain is reported for some of the main QoS metrics such as delay and jitter. Furthermore, it is shown that the QLB algorithm supports a solid stability gain in terms of throughput, which stands as a proof of concept for the load balancing properties of the proposed theory.

  1. The Optimizer Topology Characteristics in Seismic Hazards

    NASA Astrophysics Data System (ADS)

    Sengor, T.

    2015-12-01

    The characteristic data of natural phenomena are examined in a topological space approach to illuminate whether there is an algorithm behind them that brings the physics of the phenomena to optimized states even if they are hazards. The optimized code designing the hazard on a topological structure mashes the metric of the phenomena. The deviations in the metric of different phenomena push and/or pull the folds of the other suitable phenomena. For example, the metric of a specific phenomenon A may come to fit the metric of another specific phenomenon B after variation processes generated by the deviation of the metric of phenomenon A. Defining manifold processes covering the metric characteristics of each phenomenon is possible for all physical events, i.e., natural hazards. There are suitable folds in those manifold groups such that each subfold fits the metric characteristics of at least one natural hazard category. Some variation algorithms on those metric structures prepare a gauge effect that brings the long-time stability of the Earth over large time scales. The realization of that stability depends on some specific conditions. These specific conditions are called optimized codes. The analytical basics of processes in topological structures are developed in [1]. The codes are generated according to the structures in [2]. Some optimized codes are derived related to the seismicity of the NAF beginning from the quakes of the year 1999. References: 1. Taner SENGOR, "Topological theory and analytical configuration for a universal community model," Procedia - Social and Behavioral Sciences, Vol. 81, pp. 188-194, 28 June 2013; 2. Taner SENGOR, "Seismic-Climatic-Hazardous Events Estimation Processes via the Coupling Structures in Conserving Energy Topologies of the Earth," The 2014 AGU Fall Meeting, Abstract no. 31374, ABD.

  2. Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation.

    PubMed

    Berthon, Beatrice; Spezi, Emiliano; Galavis, Paulina; Shepherd, Tony; Apte, Aditya; Hatt, Mathieu; Fayad, Hadi; De Bernardi, Elisabetta; Soffientini, Chiara D; Ross Schmidtlein, C; El Naqa, Issam; Jeraj, Robert; Lu, Wei; Das, Shiva; Zaidi, Habib; Mawlawi, Osama R; Visvikis, Dimitris; Lee, John A; Kirov, Assen S

    2017-08-01

    The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for evaluation and validation of PET-auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches and the description of available metrics. The benchmark was designed in a way that it could be extendable by inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example of implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset for the purpose of testing and demonstrating the capabilities of the software as a benchmark platform. A selection of clinical, physical, and simulated phantom data, including "best estimates" reference contours from macroscopic specimens, simulation template, and CT scans was built into the PETASset application database. Specific metrics such as Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S), were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the evaluation of the performance of PET-AS algorithms against the reference contours was built. The variation of the metric agreement values with the reference contours across the PET-AS methods evaluated for demonstration were between 0.51 and 0.83, 0.44 and 0.86, and 0.61 and 1.00 for DSC, PPV, and the S metric, respectively. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state-of-the art. PETASset provides a platform that allows standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users willing to evaluate their PET-AS methods and contribute with more evaluation datasets. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  3. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight-tested by NASA.

  4. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  5. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
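
    The reward/cost framing above lends itself to a simple greedy selection loop. The sketch below ranks targets by completeness gained per unit of integration-plus-overhead time and fills a fixed mission budget; it is a schematic stand-in for the AYO-based optimizer and the EXOSIMS machinery, with hypothetical field names.

    ```python
    def greedy_schedule(targets, mission_time):
        """Greedily pick targets by completeness gained per unit of time spent.

        targets      : list of dicts with 'name', 'completeness',
                       'integration_time', and 'overhead_time' (hypothetical keys)
        mission_time : total time budget in the same units as the costs
        """
        remaining = mission_time
        schedule = []
        # Rank by expected completeness per unit cost (integration + overhead).
        ranked = sorted(
            targets,
            key=lambda t: t["completeness"] / (t["integration_time"] + t["overhead_time"]),
            reverse=True)
        for t in ranked:
            cost = t["integration_time"] + t["overhead_time"]
            if cost <= remaining:
                schedule.append(t["name"])
                remaining -= cost
        return schedule
    ```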

  6. SU-F-T-312: Identifying Distinct Radiation Therapy Plan Classes Through Multi-Dimensional Analysis of Plan Complexity Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, V; Labby, Z; Culberson, W

    Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar modulation characteristics. Measurements and simulations will better elucidate potential plan-class specific dosimetry correction factors.

  7. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
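
    As a concrete example of one of the ranking metrics discussed, the absolute signal-to-noise ratio can be computed per gene as the absolute difference of group means divided by the sum of the group standard deviations. A minimal sketch follows; GSEA implementations typically also floor very small standard deviations, which is omitted here.

    ```python
    import numpy as np

    def signal_to_noise_ranking(expr, group):
        """Rank genes by |signal-to-noise ratio| between two phenotype groups.

        expr  : (n_genes, n_samples) expression matrix
        group : boolean array of length n_samples (True = class A, False = class B)
        Returns gene indices sorted from highest to lowest |S2N|.
        """
        a, b = expr[:, group], expr[:, ~group]
        s2n = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1))
        return np.argsort(-np.abs(s2n))
    ```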

  8. An Efficient Statistical Computation Technique for Health Care Big Data using R

    NASA Astrophysics Data System (ADS)

    Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.

    2017-08-01

    Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosing a problem at an earlier stage increases the chances of survival and fast recovery, reducing both the recovery time and the cost of treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death; if detected at an early stage, it can be cured. Once a patient is found to have a breast tumor, it should be classified as cancerous or non-cancerous. The paper therefore uses the k-nearest neighbors (KNN) algorithm, one of the simplest instance-based machine learning algorithms, to classify the data. New records are added day to day, which leads to an increase in the data to be classified and tends toward a big data problem. The algorithm is implemented in R, which is a popular platform for machine learning algorithms and statistical computing. Experimentation is conducted using various classification evaluation metrics over various values of k. The results show that the KNN algorithm outperforms existing models.
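
    A minimal sketch of the experiment's structure, evaluating KNN over several values of k on the Wisconsin breast cancer data. The study itself was implemented in R; Python/scikit-learn is used here only for illustration, and the chosen k values and cross-validation settings are assumptions.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Evaluate KNN for several values of k on the Wisconsin breast cancer data.
    X, y = load_breast_cancer(return_X_y=True)
    for k in (3, 5, 7, 9, 11):
        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"k={k}: mean accuracy = {scores.mean():.3f}")
    ```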

  9. Mutual Information in Frequency and Its Application to Measure Cross-Frequency Coupling in Epilepsy

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Johnson, Don H.; Kalamangalam, Giridhar P.; Tandon, Nitin; Aazhang, Behnaam

    2018-06-01

    We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in the data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify the cross-frequency coupling in neuroscience cannot detect if two frequency components in non-Gaussian brain recordings are statistically independent or not. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramer's representation of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency: one based on kernel density estimation and the other based on the nearest neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate the cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical stimulation-based treatments of epilepsy.

  10. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.

  11. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    PubMed

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Wireless sensor placement for structural monitoring using information-fusing firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Guang-Dong; Yi, Ting-Hua; Xie, Mei-Xi; Li, Hong-Nan

    2017-10-01

    Wireless sensor networks (WSNs) are promising technology in structural health monitoring (SHM) applications for their low cost and high efficiency. The limited wireless sensors and restricted power resources in WSNs highlight the significance of optimal wireless sensor placement (OWSP) during designing SHM systems to enable the most useful information to be captured and to achieve the longest network lifetime. This paper presents a holistic approach, including an optimization criterion and a solution algorithm, for optimally deploying self-organizing multi-hop WSNs on large-scale structures. The combination of information effectiveness represented by the modal independence and the network performance specified by the network connectivity and network lifetime is first formulated to evaluate the performance of wireless sensor configurations. Then, an information-fusing firefly algorithm (IFFA) is developed to solve the OWSP problem. The step sizes drawn from a Lévy distribution are adopted to drive fireflies toward brighter individuals. Following the movement with Lévy flights, information about the contributions of wireless sensors to the objective function as carried by the fireflies is fused and applied to move inferior wireless sensors to better locations. The reliability of the proposed approach is verified via a numerical example on a long-span suspension bridge. The results demonstrate that the evaluation criterion provides a good performance metric of wireless sensor configurations, and the IFFA outperforms the simple discrete firefly algorithm.

  13. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    PubMed

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

    Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based not only on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using several metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison results showed better performance for the EARS C3 method than for the other algorithms regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter values were less than 1.5 and 2, respectively.
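
    For reference, a one-sided CUSUM detector of the kind compared above can be written in a few lines. The allowance k and threshold h below are illustrative defaults, not the values used in the study, and the reset-after-alarm convention is only one of several in use.

    ```python
    def cusum_alarms(counts, baseline_mean, k=0.5, h=5.0):
        """One-sided CUSUM for detecting upward shifts in a surveillance count series.

        counts        : daily (or weekly) case counts
        baseline_mean : expected in-control mean
        k             : allowance (reference value) subtracted at each step
        h             : decision threshold; an alarm is raised when the statistic exceeds it
        """
        s = 0.0
        alarms = []
        for t, x in enumerate(counts):
            s = max(0.0, s + (x - baseline_mean - k))
            if s > h:
                alarms.append(t)
                s = 0.0  # reset after an alarm (one common convention)
        return alarms
    ```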

  14. Texture generation for use in synthetic infrared scenes

    NASA Astrophysics Data System (ADS)

    Ota, Clem Z.; Rollins, John M.; Bleiweiss, Max P.

    1996-06-01

    In the process of creating synthetic scenes for use in simulations/visualizations, texture is used as a surrogate to 'high' spatial definition. For example, if one were to measure the location of every blade of grass and all of the characteristics of each blade of grass in a lawn, then in the process of composing a scene of the lawn, it would be expected that the result would appear 'real;' however, because this process is excruciatingly laborious, various techniques have been devised to place the required details in the scene through the use of texturing. Experience gained during the recent Smart Weapons Operability Enhancement Joint Test and Evaluation (SWOE JT&E) has shown the need for higher fidelity texturing algorithms and a better parameterization of those that are in use. In this study, four aspects of the problem have been analyzed: texture extraction, texture insertion, texture metrics, and texture creation algorithms. The results of extracting real texture from an image, measuring it with a variety of metrics, and generating similar texture with three different algorithms is presented. These same metrics can be used to define clutter and to make comparisons between 'real' and synthetic (or artificial) scenes in an objective manner.

  15. Determination of the actual evapotranspiration by using remote sensing methods

    NASA Astrophysics Data System (ADS)

    Bora, Eser

    2017-10-01

    Evapotranspiration is crucial for determining the amount of irrigation and for effective water management planning. Moreover, it is vital for agricultural drought management, and determination of the actual evapotranspiration in a region is critical for early drought warning systems. The main objective of this study was to assess the accuracy of the remote sensing method (METRIC) by calibrating it against concurrent Bowen ratio observations. The research was carried out in the west of the Marmara Region, Turkey. Landsat 5 images were used to run the METRIC algorithm and to determine actual evapotranspiration; the image date was June 11, 2010. This date was used for calibration against the available terrestrial observations obtained with the Bowen ratio method at that time. The Landsat images were obtained from the web site earthexplorer.usgs.gov, and the Bowen ratio results were taken from a micrometeorology station. As a result, the energy balance parameters, namely net radiation, soil heat flux, and latent heat flux, were compared between the METRIC algorithm and the Bowen ratio at the image acquisition time. The results were found to be very close to each other.

  16. Algorithm for lens calculations in the geometrized Maxwell theory

    NASA Astrophysics Data System (ADS)

    Kulyabov, Dmitry S.; Korolkova, Anna V.; Sevastianov, Leonid A.; Gevorkyan, Migran N.; Demidova, Anastasia V.

    2018-04-01

    Nowadays the geometric approach in optics is often used to find medium parameters based on the propagation paths of rays, because in this case it is a direct problem. However, the inverse problem in the framework of geometrized optics is usually not given attention. The aim of this work is to demonstrate the operation of the proposed algorithm within the geometrized approach to optics for solving the problem of finding the propagation path of electromagnetic radiation depending on environmental parameters. The methods of differential geometry are used to construct effective metrics for isotropic and anisotropic media. In the effective metric space, ray trajectories are obtained in the form of geodesic curves. The introduced algorithm is applied to well-known objects, the Maxwell and Luneburg lenses. The similarity of the results obtained by the classical and geometric approaches is demonstrated.

  17. Calculating observables in inhomogeneous cosmologies. Part I: general framework

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles; Walters, Anthony

    2018-02-01

    We lay out a general framework for calculating the variation of a set of cosmological observables, down the past null cone of an arbitrarily placed observer, in a given arbitrary inhomogeneous metric. The observables include redshift, proper motions, area distance and redshift-space density. Of particular interest are observables that are zero in the spherically symmetric case, such as proper motions. The algorithm is based on the null geodesic equation and the geodesic deviation equation, and it is tailored to creating a practical numerical implementation. The algorithm provides a method for tracking which light rays connect moving objects to the observer at successive times. Our algorithm is applied to the particular case of the Szekeres metric. A numerical implementation has been created and some results will be presented in a subsequent paper. Future work will explore the range of possibilities.

  18. Low-Complexity User Selection for Rate Maximization in MIMO Broadcast Channels with Downlink Beamforming

    PubMed Central

    Silva, Adão; Gameiro, Atílio

    2014-01-01

    We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient to form groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928
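
    A generic greedy routine for building a quasiorthogonal user group is sketched below: at each step it admits the user whose channel has the largest component orthogonal to the already-selected channels. This is a schematic stand-in for the paper's physically motivated metrics and integer linear program, not the proposed algorithm itself.

    ```python
    import numpy as np

    def select_quasiorthogonal_users(H, n_select):
        """Greedy selection of users with nearly orthogonal channel vectors.

        H : (n_users, n_antennas) complex channel matrix (one row per user).
        Assumes n_select does not exceed the rank of H.
        """
        n_users, _ = H.shape
        selected, basis = [], []
        for _ in range(min(n_select, n_users)):
            best_user, best_gain, best_residual = None, -1.0, None
            for u in range(n_users):
                if u in selected:
                    continue
                residual = H[u].copy()
                for b in basis:                       # project out the selected directions
                    residual -= (b.conj() @ residual) * b
                gain = np.linalg.norm(residual)
                if gain > best_gain:
                    best_user, best_gain, best_residual = u, gain, residual
            selected.append(best_user)
            basis.append(best_residual / best_gain)   # orthonormal basis of selected channels
        return selected
    ```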

  19. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  20. Methods of Measurement the Quality Metrics in a Printing System

    NASA Astrophysics Data System (ADS)

    Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.

    2018-04-01

    One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating the quality metrics in a printing system is shown. Histograms of the ink rate of various printing systems are presented. A quantitative estimation of the emulsification stability of offset inks is given.

  1. Metric Ranking of Invariant Networks with Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Changxia; Ge, Yong; Song, Qinbao

    The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data sources. Such an invariant network representation can help system experts to localize and diagnose the system faults by examining those broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at any one time, there are usually a lot of broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. Thus, a critical challenge is how to effectively and efficiently rank metrics (nodes) of invariant networks according to the anomaly levels of the metrics. The ranked list of metrics will provide system experts with useful guidance for them to localize and diagnose the system faults. To this end, we propose to model the nodes and the broken links as a Markov Random Field (MRF), and develop an iterative algorithm to infer the anomaly of each node based on belief propagation (BP). Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
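
    A minimal sketch of belief propagation on such an MRF is given below: each metric is a binary node (normal/anomalous), each invariant link contributes a pairwise potential that depends on whether the invariant is currently broken, and the resulting marginal P(anomalous) is used to rank metrics. The potentials, prior, and message schedule are illustrative assumptions, not the paper's exact model.

    ```python
    import numpy as np

    def rank_anomalous_metrics(n_nodes, edges, broken, n_iters=20, eps=0.1):
        """Loopy BP on a pairwise MRF over invariant-network nodes (states: 0=normal, 1=anomalous).

        n_nodes : number of metrics (nodes)
        edges   : list of undirected links (i, j), each listed once
        broken  : dict mapping (i, j) -> True if that invariant is currently violated
        Returns P(anomalous) per node; higher values rank the metric earlier.
        """
        # Illustrative potentials: a broken link favors at least one anomalous endpoint,
        # an intact link favors both endpoints being normal.
        psi_broken = np.array([[eps, 1.0], [1.0, 1.0]])
        psi_intact = np.array([[1.0, eps], [eps, eps]])
        unary = np.array([0.9, 0.1])  # mild prior that a metric is normal

        neighbors = {i: [] for i in range(n_nodes)}
        msgs = {}
        for (i, j) in edges:
            neighbors[i].append(j)
            neighbors[j].append(i)
            msgs[(i, j)] = np.ones(2)
            msgs[(j, i)] = np.ones(2)

        for _ in range(n_iters):
            new_msgs = {}
            for (i, j) in msgs:
                psi = psi_broken if broken.get((i, j), broken.get((j, i), False)) else psi_intact
                incoming = unary.copy()
                for k in neighbors[i]:
                    if k != j:
                        incoming *= msgs[(k, i)]
                m = psi.T @ incoming            # marginalize over the sender's state
                new_msgs[(i, j)] = m / m.sum()
            msgs = new_msgs

        beliefs = np.zeros((n_nodes, 2))
        for i in range(n_nodes):
            b = unary.copy()
            for k in neighbors[i]:
                b *= msgs[(k, i)]
            beliefs[i] = b / b.sum()
        return beliefs[:, 1]
    ```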

  2. Utility of K-Means clustering algorithm in differentiating apparent diffusion coefficient values between benign and malignant neck pathologies

    PubMed Central

    Srinivasan, A.; Galbán, C.J.; Johnson, T.D.; Chenevert, T.L.; Ross, B.D.; Mukherji, S.K.

    2014-01-01

    Purpose The objective of our study was to analyze the differences between apparent diffusion coefficient (ADC) partitions (created using the K-Means algorithm) between benign and malignant neck lesions and evaluate its benefit in distinguishing these entities. Material and methods MRI studies of 10 benign and 10 malignant proven neck pathologies were post-processed on a PC using in-house software developed in MATLAB (The MathWorks, Inc., Natick, MA). Lesions were manually contoured by two neuroradiologists with the ADC values within each lesion clustered into two (low ADC-ADCL, high ADC-ADCH) and three partitions (ADCL, intermediate ADC-ADCI, ADCH) using the K-Means clustering algorithm. An unpaired two-tailed Student’s t-test was performed for all metrics to determine statistical differences in the means between the benign and malignant pathologies. Results Statistically significant difference between the mean ADCL clusters in benign and malignant pathologies was seen in the 3 cluster models of both readers (p=0.03, 0.022 respectively) and the 2 cluster model of reader 2 (p=0.04) with the other metrics (ADCH, ADCI, whole lesion mean ADC) not revealing any significant differences. Receiver operating characteristics curves demonstrated the quantitative difference in mean ADCH and ADCL in both the 2 and 3 cluster models to be predictive of malignancy (2 clusters: p=0.008, area under curve=0.850, 3 clusters: p=0.01, area under curve=0.825). Conclusion The K-Means clustering algorithm that generates partitions of large datasets may provide a better characterization of neck pathologies and may be of additional benefit in distinguishing benign and malignant neck pathologies compared to whole lesion mean ADC alone. PMID:20007723
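
    The clustering step described above amounts to running k-means on the one-dimensional ADC values within a lesion and summarizing each partition by its mean. A minimal scikit-learn sketch follows; the cluster count and random seed are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def adc_partition_means(adc_values, n_clusters=3):
        """Partition a lesion's voxel ADC values with k-means and return per-cluster means,
        ordered from the low-ADC to the high-ADC partition."""
        adc = np.asarray(adc_values, dtype=float).reshape(-1, 1)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(adc)
        means = [adc[km.labels_ == c].mean() for c in range(n_clusters)]
        return sorted(means)   # sorted[0] ~ ADC_L, sorted[-1] ~ ADC_H
    ```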

  3. Utility of the k-means clustering algorithm in differentiating apparent diffusion coefficient values of benign and malignant neck pathologies.

    PubMed

    Srinivasan, A; Galbán, C J; Johnson, T D; Chenevert, T L; Ross, B D; Mukherji, S K

    2010-04-01

    Does the K-means algorithm do a better job of differentiating benign and malignant neck pathologies compared to only mean ADC? The objective of our study was to analyze the differences between ADC partitions to evaluate whether the K-means technique can be of additional benefit to whole-lesion mean ADC alone in distinguishing benign and malignant neck pathologies. MR imaging studies of 10 benign and 10 malignant proved neck pathologies were postprocessed on a PC by using in-house software developed in Matlab. Two neuroradiologists manually contoured the lesions, with the ADC values within each lesion clustered into 2 (low, ADC-ADC(L); high, ADC-ADC(H)) and 3 partitions (ADC(L); intermediate, ADC-ADC(I); ADC(H)) by using the K-means clustering algorithm. An unpaired 2-tailed Student t test was performed for all metrics to determine statistical differences in the means of the benign and malignant pathologies. A statistically significant difference between the mean ADC(L) clusters in benign and malignant pathologies was seen in the 3-cluster models of both readers (P = .03 and .022, respectively) and the 2-cluster model of reader 2 (P = .04), with the other metrics (ADC(H), ADC(I); whole-lesion mean ADC) not revealing any significant differences. ROC curves demonstrated the quantitative differences in mean ADC(H) and ADC(L) in both the 2- and 3-cluster models to be predictive of malignancy (2 clusters: P = .008, area under curve = 0.850; 3 clusters: P = .01, area under curve = 0.825). The K-means clustering algorithm that generates partitions of large datasets may provide a better characterization of neck pathologies and may be of additional benefit in distinguishing benign and malignant neck pathologies compared with whole-lesion mean ADC alone.

  4. Cooperative Robot Localization Using Event-Triggered Estimation

    NASA Astrophysics Data System (ADS)

    Iglesias Echevarria, David I.

    It is known that multiple robot systems that need to cooperate to perform certain activities or tasks incur high energy costs that hinder their autonomous functioning and limit the benefits provided to humans by these kinds of platforms. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in the field of wireless sensor networks but rarely applied to robot localization, agents send measurements to their neighbors only when the expected novelty in this information is high. Since all agents know the condition that triggers whether a measurement is sent or not, the lack of a measurement is itself informative and is fused into state estimates. In the case that agents do not receive either direct or indirect measurements of all others, the agents employ a covariance intersection fusion rule in order to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations are performed that illustrate the effectiveness of this method.
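
    The covariance intersection fusion rule mentioned above combines two estimates without knowledge of their cross-correlation while keeping the fused covariance consistent. A minimal sketch with a fixed weight follows; in practice the weight w is often chosen to minimize the trace or determinant of the fused covariance.

    ```python
    import numpy as np

    def covariance_intersection(x_a, P_a, x_b, P_b, w=0.5):
        """Fuse two estimates with unknown cross-correlation via covariance intersection.

        x_a, x_b : state estimates;  P_a, P_b : their covariance matrices
        w        : weight in (0, 1)
        """
        P_a_inv, P_b_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
        P_ci = np.linalg.inv(w * P_a_inv + (1.0 - w) * P_b_inv)
        x_ci = P_ci @ (w * P_a_inv @ x_a + (1.0 - w) * P_b_inv @ x_b)
        return x_ci, P_ci
    ```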

  5. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  6. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

    Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC™ model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat 5 TM and Landsat 7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in METRIC application related to the estimation of vegetation temperature and of momentum roughness length and sensible heat flux (H) for tall vegetation must be considered. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function based on leaf area index and tree canopy architecture, associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize the biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where T_s = f_c T_c + f_shadow T_shadow + f_sunlit T_sunlit. As such, the surface temperature (T_s), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (T_c), the temperature of the shaded ground surface (T_shadow), and the temperature of the sunlit ground surface (T_sunlit), according to the relative fractions of vegetation (f_c), shadow (f_shadow), and sunlit (f_sunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature for the canopy was estimated by solving the three-source condition equation for T_c. To evaluate METRIC performance in estimating ET over the olive grove, several parameters derived from the algorithm were tested against data collected in the field, including eddy covariance ET, surface temperature over the canopy, and soil temperature in shaded and sunlit conditions. Additionally, the results were also compared with results published in the literature. The information obtained so far revealed very interesting perspectives for the use of METRIC in the estimation and mapping of ET in super intensive olive orchards. Thereby, this approach might constitute a useful tool towards improving the efficiency of irrigation water management in this crop. The study described is still under way, and thus further applications of the METRIC algorithm to a larger number of images and to olive groves with different tree densities are planned.
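
    For clarity, inverting the three-source mixture for the canopy temperature is a one-line rearrangement. The sketch below shows it with hypothetical argument names and assumes the three fractions sum to one.

    ```python
    def canopy_temperature(T_s, f_c, f_shadow, T_shadow, f_sunlit, T_sunlit):
        """Invert T_s = f_c*T_c + f_shadow*T_shadow + f_sunlit*T_sunlit for the canopy
        temperature T_c (temperatures in consistent units, f_c assumed nonzero)."""
        return (T_s - f_shadow * T_shadow - f_sunlit * T_sunlit) / f_c
    ```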

  7. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility.

    PubMed

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-04-19

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms.

  8. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility

    PubMed Central

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-01-01

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms. PMID:28422084

  9. Algorithms for the explicit computation of Penrose diagrams

    NASA Astrophysics Data System (ADS)

    Schindler, J. C.; Aguirre, A.

    2018-05-01

    An algorithm is given for explicitly computing Penrose diagrams for spacetimes of the form . The resulting diagram coordinates are shown to extend the metric continuously and nondegenerately across an arbitrary number of horizons. The method is extended to include piecewise approximations to dynamically evolving spacetimes using a standard hypersurface junction procedure. Examples generated by an implementation of the algorithm are shown for standard and new cases. In the appendix, this algorithm is compared to existing methods.

  10. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent of a rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once to create the attribute index of each attribute. All metric values needed to evaluate an association rule then no longer require scanning the database, but are acquired only by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and the time consumption.
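
    A minimal sketch of the attribute index idea: one database scan builds an inverted index from each attribute to the transactions containing it, after which support and confidence of any rule follow from set intersections rather than further scans. The names and the exact metrics computed are illustrative.

    ```python
    from collections import defaultdict

    def build_attribute_index(transactions):
        """Map each attribute/item to the set of transaction ids containing it,
        so rule metrics can be computed from index intersections instead of rescans."""
        index = defaultdict(set)
        for tid, items in enumerate(transactions):
            for item in items:
                index[item].add(tid)
        return index

    def rule_metrics(index, n_transactions, antecedent, consequent):
        """Support and confidence of antecedent -> consequent via set intersections."""
        cover_a = set.intersection(*(index[i] for i in antecedent))
        cover_rule = cover_a & set.intersection(*(index[i] for i in consequent))
        support = len(cover_rule) / n_transactions
        confidence = len(cover_rule) / len(cover_a) if cover_a else 0.0
        return support, confidence
    ```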

  11. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    PubMed Central

    Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent of a rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once to create the attribute index of each attribute. All metric values needed to evaluate an association rule then no longer require scanning the database, but are acquired only by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and the time consumption. PMID:23766683

  12. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans.

    PubMed

    Mendrik, Adriënne M; Vincken, Koen L; Kuijf, Hugo J; Breeuwer, Marcel; Bouvy, Willem H; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Persson, Mikael; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A; Vrooman, Henri A; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans, and this multitude of methods complicates choosing one method above the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked by their overall performance in segmenting GM, WM, and CSF, assessed with three evaluation metrics (the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD)), and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.

  13. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans, and this multitude of methods complicates choosing one method above the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked by their overall performance in segmenting GM, WM, and CSF, assessed with three evaluation metrics (the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD)), and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553
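
    For reference, two of the three MRBrainS evaluation metrics are straightforward to compute from binary segmentation masks. The sketch below shows the Dice coefficient and the absolute volume difference using NumPy (the 95th-percentile Hausdorff distance additionally requires surface-distance computations and is omitted); the toy masks are illustrative only.

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def absolute_volume_difference(seg, ref):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()

# Toy 3D masks standing in for a tissue segmentation and its reference standard.
ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True
seg = np.zeros_like(ref)
seg[3:8, 2:8, 2:8] = True

print(f"Dice = {dice(seg, ref):.3f}, AVD = {absolute_volume_difference(seg, ref):.1f}%")
```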

  14. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced in an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove the resulting distortions. A detailed calibration procedure was developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production; however, the camera nonlinearities drift over time, degrading the performance of this method. An in-situ fiber array is used to inject a comb of pulses that generates a calibration correction, in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for analyzing the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm uses thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
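
    As a rough illustration of the general idea (not the NIF production code), a thin-plate-spline warp can be fitted from matched comb-calibration points: fiducial positions detected in the distorted streak image are mapped onto their known reference positions, and the fitted splines are then evaluated at arbitrary image coordinates. SciPy's radial-basis-function interpolator with a thin-plate kernel is one readily available way to do this; the control points below are hypothetical.

```python
import numpy as np
from scipy.interpolate import Rbf

# Hypothetical matched control points: (x, y) positions of comb fiducials as
# detected in the distorted streak image, and their ideal (undistorted) positions.
detected = np.array([[10.2, 5.1], [50.7, 4.8], [90.3, 5.6],
                     [10.9, 60.2], [49.6, 61.0], [89.8, 59.4]])
reference = np.array([[10.0, 5.0], [50.0, 5.0], [90.0, 5.0],
                      [10.0, 60.0], [50.0, 60.0], [90.0, 60.0]])

# Fit one thin-plate spline per output coordinate: (x_d, y_d) -> x_ref and -> y_ref.
tps_x = Rbf(detected[:, 0], detected[:, 1], reference[:, 0], function="thin_plate")
tps_y = Rbf(detected[:, 0], detected[:, 1], reference[:, 1], function="thin_plate")

def warp_points(points):
    """Map coordinates from the distorted image into the corrected frame."""
    return np.column_stack([tps_x(points[:, 0], points[:, 1]),
                            tps_y(points[:, 0], points[:, 1])])

print(warp_points(np.array([[30.0, 30.0]])))
```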

  15. Real-time implementation of an interactive jazz accompaniment system

    NASA Astrophysics Data System (ADS)

    Deshpande, Nikhil

    Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.

  16. Implementation and testing of a sensor-netting algorithm for early warning and high confidence C/B threat detection

    NASA Astrophysics Data System (ADS)

    Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin

    2011-05-01

    Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high-confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort, and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm have produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic sensors, infrared cameras, etc.). These input data are preprocessed to assign an estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including the estimated uncertainty of the threat concentration levels. The threat map is then tested for consistency, and the overall confidence of the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused versus unfused data results will be presented. The metrics for judging sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate versus reported threat confidence level.
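
    The abstract does not give the fusion equations, so the sketch below only conveys the flavor of uncertainty-weighted consensus with generic inverse-variance weighting (purely illustrative, not the SBIR algorithm): each sensor's concentration estimate is weighted by the inverse of the variance assigned during preprocessing, and the fused uncertainty shrinks accordingly.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted consensus of independent sensor estimates.

    Returns the fused estimate and its variance; lower-uncertainty sensors
    dominate the consensus value.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / weights.sum()
    fused_est = fused_var * (weights * estimates).sum()
    return fused_est, fused_var

# Three hypothetical point sensors reporting a threat concentration (arbitrary units).
readings = [4.2, 3.8, 5.1]
uncerts = [0.2**2, 0.5**2, 1.0**2]  # variances assigned during preprocessing
est, var = fuse(readings, uncerts)
print(f"fused concentration = {est:.2f} +/- {np.sqrt(var):.2f}")
```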

  17. About neighborhood counting measure metric and minimum risk metric.

    PubMed

    Argentini, Andrea; Blanzieri, Enrico

    2010-04-01

    In a 2006 TPAMI paper, Wang proposed the Neighborhood Counting Measure (NCM), a similarity measure for the k-NN algorithm. In his paper, Wang mentioned the Minimum Risk Metric (MRM), an earlier distance measure based on minimizing the risk of misclassification, but did not compare NCM to MRM because of MRM's allegedly excessive computational load. In this comment paper, we complete the comparison that was missing in Wang's paper and, from our empirical evaluation, we show that MRM outperforms NCM and that its running time is not prohibitive, contrary to what Wang suggested.

  18. MPH-M, AODV-M and DSR-M Performance Evaluation under Jamming Attacks.

    PubMed

    Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Monroy, Raul; Nolazco-Flores, Juan A

    2017-07-05

    In this work, we present the design of a mitigation scheme for jamming attacks integrated into the routing protocols MPH, AODV, and DSR. The resulting protocols are named MPH-M (Multi-Parent Hierarchical - Modified), AODV-M (Ad hoc On Demand Distance Vector - Modified), and DSR-M (Dynamic Source Routing - Modified). The mitigation algorithm works in two steps: first, if the detection algorithm running locally in each node produces a positive result, the node is isolated; second, the routing protocol adapts its paths to avoid the isolated nodes. We evaluated how jamming attacks affect different metrics for all these modified protocols. The metrics we employ to detect a jamming attack are the number of packet retransmissions, the number of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) retries while waiting for an idle channel, and the energy wasted by the node. The metrics used to evaluate the performance of the modified routing protocols are the throughput and resilience of the system and the energy used by the nodes. We evaluated all the modified protocols with the attacker positioned near, at a middle distance from, and far from the collector node. The results of our evaluation show that MPH-M performs much better than AODV-M and DSR-M. For example, the node energy for MPH-M is 138.13% better than for AODV-M and 126.07% better than for DSR-M. Moreover, we also find that MPH-M benefits much more from the mitigation scheme than AODV-M and DSR-M; for example, node energy consumption is 34.61% lower for MPH-M but only 3.92% and 3.42% lower for AODV-M and DSR-M, respectively. Regarding throughput, the MPH protocol achieves a packet reception efficiency at the collector node 16.4% higher than AODV and DSR when there is no mitigation mechanism, and MPH-M is more than 7.7% more efficient than AODV-M and DSR-M when the mitigation scheme is present. In addition, with the mitigation mechanism, AODV-M and DSR-M show no noticeable change, whereas MPH-M improves its efficiency by 8.4%. We also measure the resilience of these algorithms from the perspective of average packet retransmissions and find that MPH-M has around a 15% lower change rate than AODV-M and DSR-M. The MPH-M recovery time is 5 s faster than AODV-M and 2 s faster than DSR-M.

  19. MPH-M, AODV-M and DSR-M Performance Evaluation under Jamming Attacks

    PubMed Central

    Del-Valle-Soto, Carolina

    2017-01-01

    In this work, we present the design of a mitigation scheme for jamming attacks integrated into the routing protocols MPH, AODV, and DSR. The resulting protocols are named MPH-M (Multi-Parent Hierarchical - Modified), AODV-M (Ad hoc On Demand Distance Vector - Modified), and DSR-M (Dynamic Source Routing - Modified). The mitigation algorithm works in two steps: first, if the detection algorithm running locally in each node produces a positive result, the node is isolated; second, the routing protocol adapts its paths to avoid the isolated nodes. We evaluated how jamming attacks affect different metrics for all these modified protocols. The metrics we employ to detect a jamming attack are the number of packet retransmissions, the number of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) retries while waiting for an idle channel, and the energy wasted by the node. The metrics used to evaluate the performance of the modified routing protocols are the throughput and resilience of the system and the energy used by the nodes. We evaluated all the modified protocols with the attacker positioned near, at a middle distance from, and far from the collector node. The results of our evaluation show that MPH-M performs much better than AODV-M and DSR-M. For example, the node energy for MPH-M is 138.13% better than for AODV-M and 126.07% better than for DSR-M. Moreover, we also find that MPH-M benefits much more from the mitigation scheme than AODV-M and DSR-M; for example, node energy consumption is 34.61% lower for MPH-M but only 3.92% and 3.42% lower for AODV-M and DSR-M, respectively. Regarding throughput, the MPH protocol achieves a packet reception efficiency at the collector node 16.4% higher than AODV and DSR when there is no mitigation mechanism, and MPH-M is more than 7.7% more efficient than AODV-M and DSR-M when the mitigation scheme is present. In addition, with the mitigation mechanism, AODV-M and DSR-M show no noticeable change, whereas MPH-M improves its efficiency by 8.4%. We also measure the resilience of these algorithms from the perspective of average packet retransmissions and find that MPH-M has around a 15% lower change rate than AODV-M and DSR-M. The MPH-M recovery time is 5 s faster than AODV-M and 2 s faster than DSR-M. PMID:28678180
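
    A minimal sketch of the kind of local detection rule described above (thresholds on retransmissions, CSMA/CA retries, and wasted energy) is shown below; the thresholds, window, and voting rule are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class LinkStats:
    retransmissions: int   # packet retransmissions in the observation window
    csma_retries: int      # CSMA/CA retries while waiting for an idle channel
    energy_mj: float       # energy spent by the node in the window (millijoules)

def jamming_suspected(stats: LinkStats,
                      max_retx: int = 8,
                      max_retries: int = 20,
                      max_energy_mj: float = 50.0) -> bool:
    """Flag a node as jammed if at least two of the three metrics exceed
    their (hypothetical) thresholds; flagged nodes would then be isolated
    and routed around by the modified protocol."""
    votes = sum([stats.retransmissions > max_retx,
                 stats.csma_retries > max_retries,
                 stats.energy_mj > max_energy_mj])
    return votes >= 2

print(jamming_suspected(LinkStats(retransmissions=12, csma_retries=31, energy_mj=44.0)))
```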

  20. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    PubMed

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides rather than the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available to the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparations, species, and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only in the actual numbers of reported protein groups but also in the composition of those groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but it does increase the number of peptides per protein and can thus generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today, and a vast number of protein inference algorithms and implementations are available to the proteomics community. Protein assembly impacts the final results of the research, the quantitation values, and the final claims of the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods had never been performed. The Journal of Proteomics has previously published several benchmarking studies of bioinformatics algorithms (PMID: 26585461; PMID: 22728601), underlining the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims to provide a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five algorithms (ProteinProphet, MSBayesPro, ProteinLP, Fido, and PIA) were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines (Mascot, X!Tandem, and MS-GF+) and combinations thereof were evaluated for every protein inference tool. In total, more than 186 protein lists were analyzed and carefully compared using three metrics for assessing the quality of the protein inference results: (1) the number of reported proteins, (2) the number of peptides per protein, and (3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms, and parameters on each dataset. The results show that (1) PIA or Fido seems to be a good choice for the analyzed workflow, regarding not only the reported proteins and the high-quality identifications but also the required runtime; (2) merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group; (3) using databases that contain not only the canonical but also known isoforms of proteins has a small impact on the number of reported proteins, and, depending on the question behind the study, the detection of specific isoforms can compensate for the slightly shorter parsimonious reports; and (4) the current workflow can easily be extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
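
    To make the protein-inference step itself concrete, here is a minimal greedy parsimony sketch (a generic textbook approach, not PIA, Fido, or any of the other evaluated tools): proteins are selected one at a time so that each selection explains as many of the still-unexplained peptides as possible.

```python
def greedy_parsimony(peptides_to_proteins):
    """Return a minimal-ish set of proteins explaining all identified peptides.

    `peptides_to_proteins` maps each peptide sequence to the set of proteins
    it could originate from (a simplified protein inference input).
    """
    uncovered = set(peptides_to_proteins)
    # Invert the mapping: protein -> peptides it can explain.
    protein_peptides = {}
    for pep, prots in peptides_to_proteins.items():
        for prot in prots:
            protein_peptides.setdefault(prot, set()).add(pep)

    selected = []
    while uncovered:
        # Pick the protein explaining the most still-unexplained peptides.
        best = max(protein_peptides, key=lambda p: len(protein_peptides[p] & uncovered))
        selected.append(best)
        uncovered -= protein_peptides[best]
    return selected

peptide_map = {
    "PEPTIDEA": {"P1", "P2"},
    "PEPTIDEB": {"P1"},
    "PEPTIDEC": {"P2", "P3"},
    "PEPTIDED": {"P3"},
}
print(greedy_parsimony(peptide_map))  # e.g. ['P1', 'P3'] covers all four peptides
```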

  1. Toward automated assessment of health Web page quality using the DISCERN instrument.

    PubMed

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
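
    As a simplified illustration of the two ideas (aggregating multiple coders' DISCERN ratings and scoring per criterion rather than with one overall score), the sketch below uses plain majority voting and a per-criterion agreement score. This is only a stand-in: the study itself uses a probabilistic consensus model, and its features-based score may be defined differently.

```python
import numpy as np

# Hypothetical ratings: coders x DISCERN criteria, each criterion rated 1-5.
ratings = np.array([
    [5, 4, 3, 2, 4],   # coder 1
    [5, 4, 2, 2, 5],   # coder 2
    [4, 4, 3, 1, 4],   # coder 3
])

def majority_consensus(ratings):
    """Per-criterion consensus label by simple majority (mode) across coders."""
    consensus = []
    for column in ratings.T:
        values, counts = np.unique(column, return_counts=True)
        consensus.append(values[np.argmax(counts)])
    return np.array(consensus)

def per_criterion_score(consensus, predictions):
    """Fraction of criteria on which an automated classifier matches the consensus,
    i.e. credit is given criterion by criterion instead of via one overall score."""
    return float(np.mean(consensus == predictions))

consensus = majority_consensus(ratings)
predicted = np.array([5, 4, 3, 2, 4])       # hypothetical classifier output
print(consensus, per_criterion_score(consensus, predicted))
```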

  2. Curved manifolds with conserved Runge-Lenz vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngome, J.-P.

    2009-12-15

    van Holten's algorithm is used to construct Runge-Lenz-type conserved quantities, induced by Killing tensors, on curved manifolds. For the generalized Taub-Newman-Unti-Tamburino metric, the most general external potential such that the combined system admits a conserved Runge-Lenz-type vector is found. In the multicenter case, the subclass of two-center metric exhibits a conserved Runge-Lenz-type scalar.

  3. Curved manifolds with conserved Runge-Lenz vectors

    NASA Astrophysics Data System (ADS)

    Ngome, J.-P.

    2009-12-01

    van Holten's algorithm is used to construct Runge-Lenz-type conserved quantities, induced by Killing tensors, on curved manifolds. For the generalized Taub-Newman-Unti-Tamburino metric, the most general external potential such that the combined system admits a conserved Runge-Lenz-type vector is found. In the multicenter case, the subclass of two-center metric exhibits a conserved Runge-Lenz-type scalar.

  4. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods include simple benchmarks and models from the frequently used Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA), and Exponential Smoothing families. The ML methods are Random Forests (RF), Support Vector Machines (SVM), and Neural Networks (NN). The comparison concerns the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, of which 9 are ML methods. Twelve simulation experiments are performed, each using 2,000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the ARMA and ARFIMA families. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to accurately forecasting the testing set, capturing its variation, and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on similar or contrasting performance on the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts than simpler methods. The ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF, and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. Although this study focuses on hydrological processes, the results are of general scientific interest. Another important point is the use of several methods and metrics: using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider the proposed methodology appropriate for the evaluation of forecasting methods.
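
    The kind of multi-criteria evaluation described can be reproduced on a small scale. The sketch below (illustrative only, not the study's 18 metrics) computes three common ones for a multi-step-ahead forecast: RMSE and MAE for accuracy, and the Pearson correlation between the testing and forecasted values.

```python
import numpy as np

def forecast_metrics(observed, forecast):
    """RMSE, MAE, and Pearson correlation for a multi-step-ahead forecast."""
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = forecast - observed
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    corr = float(np.corrcoef(observed, forecast)[0, 1])
    return {"RMSE": rmse, "MAE": mae, "r": corr}

# Toy example: last 10 observations of a series versus a 10-step-ahead forecast.
rng = np.random.default_rng(0)
testing = np.sin(np.linspace(0, 2, 10)) + 0.1 * rng.standard_normal(10)
forecast = testing[0] + 0.1 * np.arange(10)   # hypothetical linear-trend forecast
print(forecast_metrics(testing, forecast))
```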

  5. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.

    PubMed

    Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong

    2017-05-18

    In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model that considers both intraclass and interclass metric information; an adaptive parameter is designed to balance the intraclass and interclass metrics using the data structure. Second, we convert the model into a minimization problem whose variable is a symmetric positive-definite matrix. Third, for the implementation, we derive an intrinsic steepest descent method that exploits the manifold structure of the symmetric positive-definite matrix manifold and thus ensures that the metric matrix remains strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves classification with the same computational efficiency.

  6. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.

    2016-01-01

    Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.

  7. LCD motion blur reduction: a signal processing approach.

    PubMed

    Har-Noy, Shay; Nguyen, Truong Q

    2008-02-01

    Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.
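
    A compact, generic Richardson-Lucy iteration (not the authors' perceptually weighted variant) can be written directly with NumPy/SciPy; here the point-spread function models the sample-and-hold blur along the motion direction, and the image is synthetic.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Basic Richardson-Lucy deconvolution of a blurred 2D image."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: horizontal box blur mimicking LCD hold-type motion blur.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
psf = np.ones((1, 9)) / 9.0                    # 9-pixel horizontal hold blur
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(np.mean(np.abs(blurred - sharp)), np.mean(np.abs(restored - sharp)))
```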

  8. Kurtosis-Based Blind Source Extraction of Complex Non-Circular Signals with Application in EEG Artifact Removal in Real-Time

    PubMed Central

    Javidi, Soroush; Mandic, Danilo P.; Took, Clive Cheong; Cichocki, Andrzej

    2011-01-01

    A new class of complex domain blind source extraction algorithms suitable for the extraction of both circular and non-circular complex signals is proposed. This is achieved through sequential extraction based on the degree of kurtosis and in the presence of non-circular measurement noise. The existence and uniqueness analysis of the solution is followed by a study of fast converging variants of the algorithm. The performance is first assessed through simulations on well understood benchmark signals, followed by a case study on real-time artifact removal from EEG signals, verified using both qualitative and quantitative metrics. The results illustrate the power of the proposed approach in real-time blind extraction of general complex-valued sources. PMID:22319461

  9. Reference Manual for the System Advisor Model's Wind Power Performance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Jorgenson, J.; Gilman, P.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
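
    A tiny sketch of the core of any wind performance model of this kind is shown below: interpolate a turbine power curve at each hour's wind speed and sum the energy. It is illustrative only; SAM's model additionally handles hub-height wind shear, air-density correction, and wake losses for a farm layout, and the power-curve numbers here are hypothetical.

```python
import numpy as np

# Hypothetical turbine power curve: wind speed (m/s) -> power output (kW).
curve_speed = np.array([0, 3, 5, 8, 11, 13, 25, 26], dtype=float)
curve_power = np.array([0, 0, 150, 900, 1900, 2000, 2000, 0], dtype=float)

def hourly_power(wind_speed_mps):
    """Turbine output for each hourly wind speed, by linear interpolation
    of the power curve (zero below cut-in and above cut-out)."""
    return np.interp(wind_speed_mps, curve_speed, curve_power, left=0.0, right=0.0)

# One day of hypothetical hourly wind speeds at hub height.
speeds = np.array([2.0, 4.0, 6.5, 9.0, 12.0, 14.0, 10.0, 7.0] * 3)
energy_kwh = hourly_power(speeds).sum()        # kW over 1-hour steps -> kWh
print(f"daily energy: {energy_kwh:.0f} kWh")
```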

  10. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds and extend to quite sophisticated iterative algorithms such as machine-learning-based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e., using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory and disk usage.

  11. Accuracy of a new real-time continuous glucose monitoring algorithm.

    PubMed

    Keenan, D Barry; Cartaya, Raymond; Mastrototaro, John J

    2010-01-01

    Through minimally invasive sensor-based continuous glucose monitoring (CGM), individuals can manage their blood glucose (BG) levels more aggressively, thereby improving their hemoglobin A1c level, while reducing the risk of hypoglycemia. Tighter glycemic control through CGM, however, requires an accurate glucose sensor and calibration algorithm with increased performance at lower BG levels. Sensor and BG measurements for 72 adult and adolescent subjects were obtained during the course of a 26-week multicenter study evaluating the efficacy of the Paradigm REAL-Time (PRT) sensor-augmented pump system (Medtronic Diabetes, Northridge, CA) in an outpatient setting. Subjects in the study arm performed at least four daily finger stick measurements. A retrospective analysis of the data set was performed to evaluate a new calibration algorithm utilized in the Paradigm Veo insulin pump (Medtronic Diabetes) and to compare these results to performance metrics calculated for the PRT. A total of N = 7193 PRT sensor downloads for 3 days of use, as well as 90,472 temporally and nonuniformly paired data points (sensor and meter values), were evaluated, with 5841 hypoglycemic and 15,851 hyperglycemic events detected through finger stick measurements. The Veo calibration algorithm decreased the overall mean absolute relative difference by more than 0.25 percentage points, to 15.89%, with hypoglycemia sensitivity increased from 54.9% in the PRT to 82.3% in the Veo (90.5% with predictive alerts); hyperglycemia sensitivity decreased only marginally, from 86% in the PRT to 81.7% in the Veo. The Veo calibration algorithm, with sensor error reduced significantly in the 40- to 120-mg/dl range, improves hypoglycemia detection, while retaining accuracy at high glucose levels. 2010 Diabetes Technology Society.
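
    The headline accuracy metric here, the mean absolute relative difference (MARD), is simple to compute from temporally paired sensor/reference readings; the sketch below is generic (it is not Medtronic's calibration algorithm, and the paired values are invented).

```python
import numpy as np

def mard(sensor_glucose, reference_glucose):
    """Mean absolute relative difference (%) between sensor and reference BG values."""
    sensor = np.asarray(sensor_glucose, dtype=float)
    reference = np.asarray(reference_glucose, dtype=float)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

# Hypothetical temporally paired points (mg/dl): finger-stick reference vs. CGM sensor.
reference = np.array([70, 95, 140, 210, 55, 180])
sensor = np.array([78, 90, 150, 200, 62, 172])
print(f"MARD = {mard(sensor, reference):.2f}%")
```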

  12. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    PubMed

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversals". Reversals are crucial: they not only enhance stimulation of the nerves for correctly reading the letters, but also allow one to re-read letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to adapt automatically to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered for the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy was more favorable in the presence of FMAA than when FMAA was turned off. Utilization of FMAA in future-generation Braille reading devices thus holds strong promise.

  13. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm

    PubMed Central

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversals". Reversals are crucial: they not only enhance stimulation of the nerves for correctly reading the letters, but also allow one to re-read letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to adapt automatically to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered for the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy was more favorable in the presence of FMAA than when FMAA was turned off. Utilization of FMAA in future-generation Braille reading devices thus holds strong promise. PMID:26849058

  14. A genetic graph-based approach for partitional clustering.

    PubMed

    Menéndez, Héctor D; Barrero, David F; Camacho, David

    2014-05-01

    Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem with remarkable practical interest. The most popular continuity-based clustering method is the spectral clustering (SC) algorithm, which is based on graph cuts: it first generates a similarity graph using a distance measure and then studies its graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to the quality of the clustering. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and has competitive performance in comparison with classical clustering methods, at least on the synthetic and real datasets used in the experiments.

  15. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr

    Mobile Ad hoc NETworks (MANETs) are distributed self-organizing networks whose nodes can change locations and configure themselves on the fly. This paper focuses on an algorithmic approach for the deployment of a MANET within an enclosed area, such as a building in a disaster scenario, which can provide a robust communication infrastructure for search and rescue operations. While a virtual spring mesh (VSM) algorithm provides the scalable, self-organizing, and fault-tolerant capabilities required by a MANET, the VSM lacks the MANET's deployment mechanisms for blanket coverage of an area and does not provide an obstacle avoidance mechanism. This paper presents a new technique, an extended VSM (EVSM) algorithm, that provides the following novelties: (1) new control laws for exploration and expansion to provide blanket coverage, (2) virtual adaptive springs enabling the mesh to expand as necessary, (3) adaptation to communications disturbances by varying the density and movement of mobile nodes, and (4) new metrics to assess the performance of the EVSM algorithm. Simulation results show that EVSM provides up to 16% more coverage and is 3.5 times faster than VSM in environments with eight obstacles.
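
    The core of a virtual spring mesh is Hooke's-law forces between neighboring nodes that push them toward a target separation. The fragment below is a generic sketch of that control law with illustrative parameters, not the EVSM gains or control laws from the paper.

```python
import numpy as np

def spring_forces(positions, neighbors, rest_length=10.0, stiffness=0.5):
    """Net virtual-spring force on each node from its communication neighbors.

    positions: (N, 2) node coordinates; neighbors: list of (i, j) index pairs.
    Nodes closer than `rest_length` are pushed apart, farther ones pulled together.
    """
    forces = np.zeros_like(positions)
    for i, j in neighbors:
        delta = positions[j] - positions[i]
        dist = np.linalg.norm(delta)
        if dist == 0.0:
            continue
        f = stiffness * (dist - rest_length) * (delta / dist)
        forces[i] += f
        forces[j] -= f
    return forces

pos = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 12.0]])
links = [(0, 1), (1, 2)]
# One explicit integration step: nodes drift toward the rest spacing.
pos_next = pos + 0.2 * spring_forces(pos, links)
print(pos_next)
```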

  17. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
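
    The bias, variance, and MSE image-quality metrics used in the comparison can be computed voxel-wise over repeated reconstructions of the same phantom. The sketch below is a generic illustration with synthetic data, not the VIP simulation pipeline.

```python
import numpy as np

def image_quality_metrics(reconstructions, truth):
    """Voxel-averaged bias, variance, and mean squared error over an ensemble
    of reconstructions (shape: runs x image dims) of the same ground-truth image."""
    recon = np.asarray(reconstructions, dtype=float)
    mean_img = recon.mean(axis=0)
    bias = float(np.mean(mean_img - truth))
    variance = float(np.mean(recon.var(axis=0)))
    mse = float(np.mean((recon - truth) ** 2))
    return bias, variance, mse

rng = np.random.default_rng(2)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                       # hot region of a toy phantom
# Ten noisy "reconstructions" standing in for repeated OE/OSEM/FBP outputs.
recons = truth + 0.1 * rng.standard_normal((10, 32, 32))
print(image_quality_metrics(recons, truth))
```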

  18. A content-boosted collaborative filtering algorithm for personalized training in interpretation of radiological imaging.

    PubMed

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng

    2014-08-01

    Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets, and the experimental results are evaluated in terms of the mean absolute error (MAE) metric. Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.

  19. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

    Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus on operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives of the meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the research needed to more effectively assess human performance across domains. This article provides a summary of 150 HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics). These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.

  20. An energy efficient distance-aware routing algorithm with multiple mobile sinks for wireless sensor networks.

    PubMed

    Wang, Jin; Li, Bin; Xia, Feng; Kim, Chang-Seob; Kim, Jeong-Uk

    2014-08-18

    Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they have more data to forward during multihop transmission. This causes network partition, isolated nodes, and a much shortened network lifetime. Thus, how to balance energy consumption across sensor nodes is an important research issue. In recent years, exploiting sink mobility in WSNs has attracted much research attention because it can not only improve energy efficiency but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move with a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, focusing on the choice of the number of mobile sink nodes and the selection of their parking positions, as well as the impact of both on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm outperforms traditional routing algorithms in terms of energy consumption.

  1. An Energy Efficient Distance-Aware Routing Algorithm with Multiple Mobile Sinks for Wireless Sensor Networks

    PubMed Central

    Wang, Jin; Li, Bin; Xia, Feng; Kim, Chang-Seob; Kim, Jeong-Uk

    2014-01-01

    Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they have more data to forward during multihop transmission. This causes network partition, isolated nodes, and a much shortened network lifetime. Thus, how to balance energy consumption across sensor nodes is an important research issue. In recent years, exploiting sink mobility in WSNs has attracted much research attention because it can not only improve energy efficiency but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move with a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, focusing on the choice of the number of mobile sink nodes and the selection of their parking positions, as well as the impact of both on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm outperforms traditional routing algorithms in terms of energy consumption. PMID:25196015
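
    A generic flavor of distance-aware, energy-balancing next-hop selection is sketched below (the scoring rule is illustrative, not the paper's algorithm): among a node's neighbors, prefer the one that is closer to the nearest mobile sink's current parking position while still having high residual energy.

```python
import math

def next_hop(neighbors, sink_positions):
    """Pick the neighbor minimizing distance-to-nearest-sink divided by residual energy.

    neighbors: dict node_id -> dict with 'pos' (x, y) and 'energy' (J, > 0).
    sink_positions: list of current (x, y) parking positions of the mobile sinks.
    """
    def dist_to_nearest_sink(pos):
        return min(math.dist(pos, s) for s in sink_positions)

    return min(neighbors,
               key=lambda n: dist_to_nearest_sink(neighbors[n]["pos"]) / neighbors[n]["energy"])

neighbors = {
    "A": {"pos": (10.0, 40.0), "energy": 2.0},
    "B": {"pos": (15.0, 35.0), "energy": 0.5},
    "C": {"pos": (30.0, 30.0), "energy": 1.5},
}
sinks = [(0.0, 50.0), (100.0, 0.0)]       # sinks moving along the network boundary
print(next_hop(neighbors, sinks))
```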

  2. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of self-information about the robot: by encoding current capability, component status, task execution state, and histories, we track information that enables the robot to reason about and adapt its performance using meta-cognition and machine learning principles. The world model also includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.

  3. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
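
    Motion-based metrics such as path length and average speed are easy to derive from the tracked 3D tip positions; the sketch below shows those two (economy of volume would additionally relate the path to the volume of space swept by the instrument). It is purely illustrative, not the EVA or TrEndo code.

```python
import numpy as np

def path_length(tip_positions):
    """Total 3D distance traveled by the instrument tip (positions in mm)."""
    diffs = np.diff(np.asarray(tip_positions, dtype=float), axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def average_speed(tip_positions, dt):
    """Mean tip speed (mm/s) given a fixed sampling interval `dt` in seconds."""
    n_steps = len(tip_positions) - 1
    return path_length(tip_positions) / (n_steps * dt)

# Hypothetical 30 Hz track of the tip during a peg-transfer task.
track = np.cumsum(np.random.default_rng(3).normal(0, 0.5, size=(300, 3)), axis=0)
print(path_length(track), average_speed(track, dt=1 / 30))
```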

  4. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as well developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic analysis of phenotype data in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but may require different ML algorithms for segmentation.
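
    As a minimal illustration of the ML step (not the authors' pipeline), pixels can be classified as plant versus background with a k-nearest-neighbour classifier on per-pixel colour features using scikit-learn; the synthetic colour distributions below are invented for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)

# Hypothetical per-pixel RGB features: "plant" pixels greener than "background".
plant = rng.normal(loc=[60, 140, 50], scale=20, size=(500, 3))
background = rng.normal(loc=[120, 110, 100], scale=20, size=(500, 3))
X = np.vstack([plant, background])
y = np.array([1] * 500 + [0] * 500)

# Shuffle and split into train/test sets.
idx = rng.permutation(len(X))
train, test = idx[:800], idx[800:]

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X[train], y[train])
pred = knn.predict(X[test])
print(f"pixel classification accuracy: {accuracy_score(y[test], pred):.3f}")
```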

  5. MoleculeNet: a benchmark for molecular machine learning (electronic supplementary information (ESI) available; see DOI: 10.1039/c7sc02664a)

    PubMed Central

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N.; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S.; Leswing, Karl

    2017-01-01

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm. PMID:29629118

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, K; Huang, T; Buttler, D

    We present the C-Cat Wordnet package, an open source library for using and modifying Wordnet. The package includes four key features: an API for modifying Synsets, implementations of standard similarity metrics, implementations of well-known Word Sense Disambiguation algorithms, and an implementation of the Castanet algorithm. The library is easily extensible and usable in many runtime environments. We demonstrate its use on two standard Word Sense Disambiguation tasks and apply the Castanet algorithm to a corpus.
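
    C-Cat itself is a Java library, so the following sketch only illustrates the kind of WordNet-based similarity metrics it implements, using NLTK's WordNet interface instead (assumes the NLTK WordNet corpus can be downloaded).

    ```python
    import nltk
    nltk.download("wordnet", quiet=True)  # one-time corpus download
    from nltk.corpus import wordnet as wn

    dog = wn.synset("dog.n.01")
    cat = wn.synset("cat.n.01")

    # Two standard taxonomy-based similarity metrics.
    print("path similarity:", dog.path_similarity(cat))
    print("Wu-Palmer similarity:", dog.wup_similarity(cat))
    ```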

  7. A review and experimental study on the application of classifiers and evolutionary algorithms in EEG-based brain-machine interface systems

    NASA Astrophysics Data System (ADS)

    Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam

    2018-04-01

    Objective. Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. Approach. The paper is divided into two main parts. In the first part, a wide range of different types of the base and combinatorial classifiers including boosting and bagging classifiers and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. Main results. In this study two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to CVA evaluation metric, and naive Bayes with respect to SDV demonstrated the best performances. Among the combinatorial classifiers, four classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV had the best performances. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. Significance. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods through the evolutionary algorithms. In addition, experimental and statistical significance tests are carried out to study the applicability and effectiveness of the reviewed methods.
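
    The cross-validation accuracy (CVA) criterion used in the survey can be reproduced for any base classifier in a few lines of scikit-learn; the EEG feature matrix below is synthetic and merely stands in for band-power or similar trial features.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for EEG trial features (e.g., band power per channel).
    X, y = make_classification(n_samples=300, n_features=16, n_informative=8, random_state=1)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="linear")),
                      ("Naive Bayes", GaussianNB())]:
        cva = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
        print(f"{name:12s} CVA = {cva:.3f}")
    ```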

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan

    Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
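
    The graph step described above, weighting pairwise pose estimates by their estimated error and connecting every scan to a reference node, can be sketched with networkx. The scan names and edge weights below are made up for illustration; this is not the authors' implementation.

    ```python
    import networkx as nx

    # Nodes are scan positions; edge weights are (hypothetical) estimated RMS errors
    # of the pairwise registrations, so low-error pose chains are preferred.
    G = nx.Graph()
    pairwise = [("scan_0", "scan_1", 0.04), ("scan_1", "scan_2", 0.06),
                ("scan_0", "scan_2", 0.15), ("scan_2", "scan_3", 0.05)]
    G.add_weighted_edges_from(pairwise)

    reference = "scan_0"
    # Lowest cumulative-error registration path from every scan back to the reference.
    paths = nx.single_source_dijkstra_path(G, reference, weight="weight")
    for scan, path in paths.items():
        print(scan, "->", " -> ".join(path))
    ```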

  9. Defining and Enabling Resiliency of Electric Distribution Systems With Multiple Microgrids

    DOE PAGES

    Chanda, Sayonsom; Srivastava, Anurag K.

    2016-05-02

    This paper presents a method for quantifying and enabling the resiliency of a power distribution system (PDS) using the analytical hierarchical process and percolation theory. Using this metric, quantitative analysis can be done to assess the impact of possible control decisions and pro-actively enable the resilient operation of a distribution system with multiple microgrids and other resources. The developed resiliency metric can also be used in short-term distribution system planning. Being able to quantify resiliency helps distribution system planning engineers and operators justify control actions, compare different reconfiguration algorithms, and develop proactive control actions to avert power system outages due to impending catastrophic weather situations or other adverse events. Validation of the proposed method is done using modified CERTS microgrids and a modified industrial distribution system. Furthermore, simulation results show topological and composite metrics considering power system characteristics to quantify the resiliency of a distribution system with the proposed methodology, and improvements in resiliency using a two-stage reconfiguration algorithm and multiple microgrids.
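
    The analytical hierarchical process (AHP) step used to weight resiliency factors reduces to extracting the principal eigenvector of a pairwise comparison matrix. A minimal sketch with a hypothetical 3x3 comparison matrix follows; the criteria named in the comments are assumptions.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix (Saaty-style scale) for three resiliency
    # criteria, e.g. redundancy, microgrid availability, restoration time.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = principal / principal.sum()                      # AHP priority vector
    ci = (np.max(np.real(eigvals)) - len(A)) / (len(A) - 1)    # consistency index
    print("criteria weights:", np.round(weights, 3), "CI =", round(ci, 3))
    ```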

  10. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.

  11. Towards a Multiscale Approach to Cybersecurity Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay

    2013-11-12

    We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
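
    As a single-scale point of reference for the multiscale shortest-path analog discussed above, the distance from an attacker's foothold to a sensitive machine can be computed with a Dijkstra-style search on an ordinary graph. The toy topology below is illustrative only.

    ```python
    import networkx as nx

    # Toy network: nodes are hosts, unweighted edges are reachable links.
    G = nx.Graph()
    G.add_edges_from([("attacker", "web"), ("web", "app"), ("app", "db"),
                      ("web", "vpn"), ("vpn", "db")])

    # Single-scale analog of the distance metric: shortest path from the
    # attacker's foothold to the sensitive machine.
    dist = nx.shortest_path_length(G, "attacker", "db")
    path = nx.shortest_path(G, "attacker", "db")
    print("distance:", dist, "via", path)

    # All-pairs shortest paths, the metric whose multiscale analog the paper develops.
    all_pairs = dict(nx.all_pairs_shortest_path_length(G))
    ```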

  12. Foot volume estimates based on a geometric algorithm in comparison to water displacement.

    PubMed

    Mayrovitz, H N; Sims, N; Litwin, B; Pfister, S

    2005-03-01

    Assessing lower extremity limb volume and its change during and after lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although leg volumes may be determined by tape measure and other methods, there is no metric method to routinely assess foot volumes. Exclusion of foot volumes can under- or overestimate therapeutic progress. Our aim was to develop and test a metric measurement procedure and algorithm for practicing therapists to use to estimate foot volumes. The method uses a caliper and ruler to measure foot dimensions at standardized locations and calculates foot volume (VM) by a mathematical algorithm. VM was compared to volumes measured by water displacement (Vw) in 30 subjects (60 feet) using regression analysis and limits of agreement (LOA). Vw and VM (mean ± SD) were similar (857 ± 150 ml vs. 859 ± 154 ml) and highly correlated (VM = 1.00·Vw + 1.67 ml, r = 0.965, p < 0.001). The LOA for absolute and percentage volume differences were ±79.6 ml and ±9.28%, respectively. These results indicate that this metric method can be a useful alternative to water displacement when foot volumes are needed but the water displacement method is contraindicated, impractical to implement, too time consuming, or not available.
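
    The regression and limits-of-agreement comparison between VM and Vw can be reproduced in a few lines; the paired volumes below are synthetic placeholders, not the study data.

    ```python
    import numpy as np

    # Hypothetical paired foot volumes (ml): water displacement (Vw) vs. geometric algorithm (VM).
    vw = np.array([820.0, 905.5, 760.2, 1010.8, 870.3])
    vm = np.array([825.4, 898.1, 772.0, 1004.2, 880.9])

    diff = vm - vw
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # 95% limits of agreement around the bias
    slope, intercept = np.polyfit(vw, vm, 1)
    r = np.corrcoef(vw, vm)[0, 1]

    print(f"bias = {bias:.1f} ml, LOA = +/-{loa:.1f} ml")
    print(f"VM = {slope:.2f}*Vw + {intercept:.1f} ml, r = {r:.3f}")
    ```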

  13. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, thereby making photos more pleasant for an observer, is an important task. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired as well as single red eyes. The approach is based on 3D tables with typicalness levels for red eyes and human skin tones, together with directional edge-detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red-eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory. A numeric quality criterion for automatic red eye correction is also proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  14. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    PubMed

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  15. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks

    PubMed Central

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-01-01

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062

  16. Closing the Certification Gaps in Adaptive Flight Control Software

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2008-01-01

    Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national air space has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, as well as the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and what further research efforts will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.

  17. QoS-aware health monitoring system using cloud-based WBANs.

    PubMed

    Almashaqbeh, Ghada; Hayajneh, Thaier; Vasilakos, Athanasios V; Mohd, Bassam J

    2014-10-01

    Wireless Body Area Networks (WBANs) are amongst the best options for remote health monitoring. However, as standalone systems, WBANs have many limitations due to the large amount of processed data, the mobility of monitored users, and the network coverage area. Integrating WBANs with cloud computing provides effective solutions to these problems and improves the performance of WBAN-based systems. Accordingly, in this paper we propose a cloud-based real-time remote health monitoring system for tracking the health status of non-hospitalized patients while they practice their daily activities. Compared with existing cloud-based WBAN frameworks, we divide the cloud into a local one, which includes the monitored users and local medical staff, and a global one that includes the outer world. The performance of the proposed framework is optimized by reducing congestion, interference, and data delivery delay while supporting users' mobility. Several novel techniques and algorithms are proposed to accomplish our objective. First, the concept of data classification and aggregation is utilized to avoid clogging the network with unnecessary data traffic. Second, a dynamic channel assignment policy is developed to distribute the WBANs associated with the users on the available frequency channels to manage interference. Third, a delay-aware routing metric is proposed to be used by the local cloud in its multi-hop communication to speed up the reporting process of the health-related data. Fourth, the delay-aware metric is further utilized by the association protocols used by the WBANs to connect with the local cloud. Finally, the system with all the proposed techniques and algorithms is evaluated using extensive ns-2 simulations. The simulation results show superior performance of the proposed architecture in optimizing the end-to-end delay, handling the increased interference levels, maximizing the network capacity, and tracking users' mobility.

  18. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
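
    The earliest-arrival-time route selection adopted by ECGR can be sketched as a Dijkstra-style search over a contact plan. The contact list below is hypothetical, and the sketch ignores bundle size, transmission time, and link rates for brevity; it is not the ION/CGR implementation.

    ```python
    import heapq

    # Hypothetical contact plan: (from_node, to_node, start_time, end_time).
    contacts = [("A", "B", 0, 10), ("B", "C", 5, 20), ("A", "C", 30, 40), ("C", "D", 25, 50)]

    def earliest_arrival(contacts, source, dest, t0=0):
        """Dijkstra-like search where each node's label is its earliest arrival time."""
        best = {source: t0}
        heap = [(t0, source)]
        while heap:
            t, node = heapq.heappop(heap)
            if node == dest:
                return t
            if t > best.get(node, float("inf")):
                continue
            for u, v, start, end in contacts:
                if u == node and t <= end:        # contact still usable from this node
                    arrival = max(t, start)       # wait for the contact window to open
                    if arrival < best.get(v, float("inf")):
                        best[v] = arrival
                        heapq.heappush(heap, (arrival, v))
        return None

    print(earliest_arrival(contacts, "A", "D"))  # -> 25 for this toy plan
    ```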

  19. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments, which cannot be measured directly, can be estimated. Validation can proceed by qualitative or by quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE, and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a six-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation can be seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods resulting in a dimensional error value, in order to be able to interpret results more comprehensibly. Copyright © 2017 Elsevier Ltd. All rights reserved.
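
    A minimal dynamic-programming DTW implementation is shown below to make the comparison metric concrete; for production use, a library such as dtaidistance or fastdtw would normally be preferred.

    ```python
    import numpy as np

    def dtw_distance(x, y):
        """Classic O(len(x)*len(y)) dynamic time warping distance between two 1-D signals."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    # Two signals that are similar in shape but shifted in phase.
    t = np.linspace(0, 2 * np.pi, 100)
    measured = np.sin(t)
    simulated = np.sin(t - 0.4)
    print("DTW distance:", round(dtw_distance(measured, simulated), 3))
    ```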

  20. Prognostics of Power Mosfets Under Thermal Stress Accelerated Aging Using Data-Driven and Model-Based Methodologies

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Saxena, Abhinav; Saha, Sankalita; Goebel, Kai F.

    2011-01-01

    An approach for predicting the remaining useful life of power MOSFET (metal-oxide-semiconductor field-effect transistor) devices has been developed. Power MOSFETs are semiconductor switching devices that are instrumental in electronics equipment such as that used in the operation and control of modern aircraft and spacecraft. The MOSFETs examined here were aged under thermal overstress in a controlled experiment, and continuous performance degradation data were collected from the accelerated aging experiment. Die-attach degradation was determined to be the primary failure mode. The collected run-to-failure data were analyzed, and it was revealed that ON-state resistance increased as the die-attach degraded under high thermal stresses. Results from finite element simulation analysis support the observations from the experimental data. Data-driven and model-based prognostics algorithms were investigated, with ON-state resistance used as the primary precursor-of-failure feature. A Gaussian process regression algorithm was explored as an example of a data-driven technique, and an extended Kalman filter and a particle filter were used as examples of model-based techniques. Both methods were able to provide valid results. Prognostic performance metrics were employed to evaluate and compare the algorithms.
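
    As an illustration of the data-driven branch, a Gaussian process regression over the ON-state resistance precursor can be fitted with scikit-learn. The degradation data below are synthetic and the kernel choice is an assumption, not the configuration used in the study.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic aging data: time (hours) vs. ON-state resistance (ohm) drifting upward.
    t = np.linspace(0, 200, 40).reshape(-1, 1)
    r_on = 0.05 + 1e-4 * t.ravel() + 5e-7 * t.ravel() ** 2 + np.random.normal(0, 5e-4, t.shape[0])

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=50.0) + WhiteKernel(1e-6),
                                   normalize_y=True)
    gpr.fit(t, r_on)

    # Extrapolate the precursor and report predictive uncertainty.
    t_future = np.linspace(200, 300, 20).reshape(-1, 1)
    mean, std = gpr.predict(t_future, return_std=True)
    print("predicted R_on at 300 h: %.4f +/- %.4f ohm" % (mean[-1], std[-1]))
    ```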

  1. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide useful references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
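
    A typical evaluation loop for a dataset of this kind pairs an off-the-shelf BS algorithm with the pixel-wise ground truth. The sketch below uses OpenCV's MOG2 subtractor and hypothetical file paths; it is not the evaluation code released with the dataset.

    ```python
    import cv2
    import numpy as np

    # Hypothetical paths to a remote-scene IR sequence and its ground-truth FG masks.
    cap = cv2.VideoCapture("remote_scene_ir.avi")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=False)

    tp = fp = fn = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame) > 0                       # predicted foreground mask
        gt = cv2.imread(f"gt/{frame_idx:06d}.png", 0) > 0      # ground-truth mask
        tp += np.logical_and(fg, gt).sum()
        fp += np.logical_and(fg, ~gt).sum()
        fn += np.logical_and(~fg, gt).sum()
        frame_idx += 1

    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    print("F-measure:", 2 * precision * recall / max(precision + recall, 1e-9))
    ```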

  2. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks.

    PubMed

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-09-12

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
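
    The "normal equation" at the heart of the NEC predictor is ordinary least squares solved in closed form. The sketch below regresses a CRM-style channel rank on the RSSI standard deviation and mean LQI; all numbers are synthetic and the feature choice is only an assumption.

    ```python
    import numpy as np

    # Synthetic per-channel statistics: [std of RSSI, mean LQI], and a CRM-style rank target.
    X_raw = np.array([[1.2, 105.0], [4.8, 80.0], [2.5, 95.0], [6.1, 70.0], [0.9, 110.0]])
    y = np.array([0.92, 0.35, 0.70, 0.20, 0.97])   # hypothetical channel rank measurements

    X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])   # add intercept column
    theta = np.linalg.pinv(X.T @ X) @ X.T @ y               # normal equation: (X'X)^-1 X'y

    new_channel = np.array([1.0, 3.0, 90.0])                # intercept, RSSI std, mean LQI
    print("predicted channel rank:", float(new_channel @ theta))
    ```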

  3. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks

    PubMed Central

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-01-01

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs. PMID:27626429

  4. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
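
    The log-euclidean Riemannian metric on symmetric positive definite covariance matrices reduces to a Euclidean (Frobenius) distance between matrix logarithms, as the short sketch below illustrates with random SPD matrices.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def random_spd(d, seed):
        rng = np.random.default_rng(seed)
        a = rng.normal(size=(d, d))
        return a @ a.T + d * np.eye(d)   # symmetric positive definite by construction

    def log_euclidean_distance(A, B):
        """Frobenius distance between matrix logarithms: d(A, B) = ||log(A) - log(B)||_F."""
        return np.linalg.norm(logm(A) - logm(B), ord="fro")

    C1, C2 = random_spd(4, 0), random_spd(4, 1)
    print("log-Euclidean distance:", round(float(log_euclidean_distance(C1, C2)), 4))
    ```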

  5. Automated robust registration of grossly misregistered whole-slide images with varying stains

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slide images to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.

  6. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    PubMed

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded in a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 shorter than one minute, were annotated in 1976 hour-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without employing trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at an FDR of 2.0 (median (range); p < 0.001, Wilcoxon signed rank test). Using trend templates might therefore aid in detection of short seizures by EEG monitoring at the NICU.

  7. A New Distance Metric for Unsupervised Learning of Categorical Data.

    PubMed

    Jia, Hong; Cheung, Yiu-Ming; Liu, Jiming

    2016-05-01

    Distance metric is the basis of many learning algorithms, and its effectiveness usually has a significant influence on the learning results. In general, measuring distance for numerical data is a tractable task, but it could be a nontrivial problem for categorical data sets. This paper, therefore, presents a new distance metric for categorical data based on the characteristics of categorical values. In particular, the distance between two values from one attribute measured by this metric is determined by both the frequency probabilities of these two values and the values of other attributes that have high interdependence with the calculated one. Dynamic attribute weight is further designed to adjust the contribution of each attribute-distance to the distance between the whole data objects. Promising experimental results on different real data sets have shown the effectiveness of the proposed distance metric.

  8. Text Authorship Identified Using the Dynamics of Word Co-Occurrence Networks

    PubMed Central

    Akimushkin, Camilo; Amancio, Diego Raphael; Oliveira, Osvaldo Novais

    2017-01-01

    Automatic identification of authorship in disputed documents has benefited from complex network theory as this approach does not require human expertise or detailed semantic knowledge. Networks modeling entire books can be used to discriminate texts from different sources and understand network growth mechanisms, but only a few studies have probed the suitability of networks in modeling small chunks of text to grasp stylistic features. In this study, we introduce a methodology based on the dynamics of word co-occurrence networks representing written texts to classify a corpus of 80 texts by 8 authors. The texts were divided into sections with an equal number of linguistic tokens, from which time series were created for 12 topological metrics. Since 73% of all series were stationary (ARIMA(p, 0, q)) and the remainder were integrated of first order (ARIMA(p, 1, q)), probability distributions could be obtained for the global network metrics. The metrics exhibit bell-shaped non-Gaussian distributions, and therefore distribution moments were used as learning attributes. With an optimized supervised learning procedure based on a nonlinear transformation performed by Isomap, 71 out of 80 texts were correctly classified using the K-nearest neighbors algorithm, i.e., a remarkable 88.75% author-matching success rate was achieved. Hence, purely dynamic fluctuations in network metrics can characterize authorship, thus paving the way for a robust description of large texts in terms of small evolving networks. PMID:28125703

  9. AMSR2 Soil Moisture Product Validation

    NASA Technical Reports Server (NTRS)

    Bindlish, R.; Jackson, T.; Cosh, M.; Koike, T.; Fuiji, X.; de Jeu, R.; Chan, S.; Asanuma, J.; Berg, A.; Bosch, D.; et al.

    2017-01-01

    The Advanced Microwave Scanning Radiometer 2 (AMSR2) is part of the Global Change Observation Mission-Water (GCOM-W) mission. AMSR2 fills the void left by the loss of the Advanced Microwave Scanning Radiometer Earth Observing System (AMSR-E) after almost 10 years. Both missions provide brightness temperature observations that are used to retrieve soil moisture. Merging AMSR-E and AMSR2 will help build a consistent long-term dataset. Before tackling the integration of AMSR-E and AMSR2, it is necessary to conduct a thorough validation and assessment of the AMSR2 soil moisture products. This study focuses on validation of the AMSR2 soil moisture products by comparison with in situ reference data from a set of core validation sites. Three products that rely on different algorithms were evaluated: the JAXA Soil Moisture Algorithm (JAXA), the Land Parameter Retrieval Model (LPRM), and the Single Channel Algorithm (SCA). Results indicate that overall the SCA has the best performance based upon the metrics considered.

  10. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Ren, De-Qing; Zhang, Xi

    2011-08-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated by an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
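
    The SPGD update itself is only a few lines: perturb all actuators with random bipolar steps, measure the metric twice, and move along the estimated gradient. The sketch below optimizes a stand-in quadratic metric in place of the real coronagraph contrast measurement; the gain and perturbation amplitude are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_act = 140                       # deformable-mirror actuators
    u = np.zeros(n_act)               # actuator commands
    target = rng.normal(size=n_act)   # stand-in: commands that maximize the metric

    def metric(u):
        """Placeholder for the measured image-plane metric (higher is better)."""
        return -np.sum((u - target) ** 2)

    gain, amplitude = 0.3, 0.05
    for _ in range(2000):
        delta = amplitude * rng.choice([-1.0, 1.0], size=n_act)   # random bipolar perturbation
        dJ = metric(u + delta) - metric(u - delta)                # two-sided metric difference
        u += gain * dJ * delta                                    # SPGD update
    print("residual command error:", round(float(np.linalg.norm(u - target)), 4))
    ```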

  11. On Deployment of Multiple Base Stations for Energy-Efficient Communication in Wireless Sensor Networks

    DOE PAGES

    Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...

    2010-01-01

    Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.

  12. Direct methanol fuel cells: A database-driven design procedure

    NASA Astrophysics Data System (ADS)

    Flipsen, S. F. J.; Spitas, C.

    2011-10-01

    In the preliminary stages of the design process, the design engineer can make use of heuristic models to test the feasibility of DMFC systems and identify the opportunity for a DMFC system in a specific application. In general, these models are too generic and have low accuracy. To improve the accuracy, a second-order model is proposed in this paper. The second-order model consists of an evolutionary algorithm written in Mathematica, which selects a component set satisfying the fuel-cell system's performance requirements, places the components in 3D space, and optimizes for volume. The results are presented as a 3D draft proposal together with a feasibility metric. To test the algorithm, the design of a DMFC system for an MP3 player is evaluated. The results show that volume and cost are an issue for the feasibility of the fuel-cell power system applied in the MP3 player. The generated designs and the algorithm are evaluated, and recommendations are given.

  13. Automated Video-Based Analysis of Contractility and Calcium Flux in Human-Induced Pluripotent Stem Cell-Derived Cardiomyocytes Cultured over Different Spatial Scales.

    PubMed

    Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A; Marks, Natalie C; Sheehan, Alice S; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N; Yoo, Jennie C; Judge, Luke M; Spencer, C Ian; Chukka, Anand C; Russell, Caitlin R; So, Po-Lin; Conklin, Bruce R; Healy, Kevin E

    2015-05-01

    Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors combined with a newly developed isogenic iPSC line harboring genetically encoded calcium indicator, GCaMP6f, allow simultaneous user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal to noise, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales from single cells to three-dimensional constructs. This open source software was validated with analysis of isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering.

  14. Personalized recommendation based on heat bidirectional transfer

    NASA Astrophysics Data System (ADS)

    Ma, Wenping; Feng, Xiang; Wang, Shanfeng; Gong, Maoguo

    2016-02-01

    Personalized recommendation has become an increasingly popular research topic; it aims to find future likes and interests based on users' past preferences. Traditional recommendation algorithms pay more attention to forecast accuracy by calculating first-order relevance, while ignoring the importance of the diversity and novelty that provide comfortable experiences for customers. There is some level of contradiction between these three metrics, so an algorithm based on bidirectional transfer is proposed in this paper to resolve this dilemma. We posit that an object that is associated with a user's history or has been purchased by similar users should be introduced to that user, and a recommendation approach based on heat bidirectional transfer is proposed. Compared with state-of-the-art approaches based on bipartite networks, experiments on two benchmark data sets, Movielens and Netflix, demonstrate that our algorithm performs better on accuracy, diversity, and novelty. Moreover, this method does better at exploiting long-tail commodities and addressing the cold-start problem.

  15. Swarm intelligence for multi-objective optimization of synthesis gas production

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Vasant, P.; Elamvazuthi, I.; Ku Shaari, Ku Zilati

    2012-11-01

    In the chemical industry, the production of methanol, ammonia, hydrogen and higher hydrocarbons require synthesis gas (or syn gas). The main three syn gas production methods are carbon dioxide reforming (CRM), steam reforming (SRM) and partial-oxidation of methane (POM). In this work, multi-objective (MO) optimization of the combined CRM and POM was carried out. The empirical model and the MO problem formulation for this combined process were obtained from previous works. The central objectives considered in this problem are methane conversion, carbon monoxide selectivity and the hydrogen to carbon monoxide ratio. The MO nature of the problem was tackled using the Normal Boundary Intersection (NBI) method. Two techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) were then applied in conjunction with the NBI method. The performance of the two algorithms and the quality of the solutions were gauged by using two performance metrics. Comparative studies and results analysis were then carried out on the optimization results.

  16. Optimal expression evaluation for data parallel architectures

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    A data parallel machine represents an array or other composite data structure by allocating one processor (at least conceptually) per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated in the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of the operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum-cost way to evaluate an expression, for several different data parallel architectures. This algorithm applies to any architecture in which the metric describing the cost of moving an array is robust. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes. Remarks are made on several variations of the problem, some of which are solved and some of which remain open.

  17. Impact of radiation attenuation by a carbon fiber couch on patient dose verification

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yen; Chou, Wen-Tsae; Liao, Yi-Jen; Lee, Jeng-Hung; Liang, Ji-An; Hsu, Shih-Ming

    2017-02-01

    The aim of this study was to understand the difference between the measured and calculated irradiation attenuations obtained using two algorithms and to identify the influence of couch attenuation on patient dose verification. We performed eight tests of couch attenuation with two photon energies, two longitudinal couch positions, and two rail positions. The couch attenuation was determined using a radiation treatment planning system. The measured and calculated attenuations were compared. We also performed 12 verifications of head-and-neck and rectum cases by using a Delta phantom. The dose deviation (DD), distance to agreement (DTA), and gamma index of pencil-beam convolution (PBC) verifications were nearly the same. The agreement was least consistent for the anisotropic analytical algorithm (AAA) without the couch for the head-and-neck case, in which the DD, DTA, and gamma index were 74.4%, 99.3%, and 89%, respectively; for the rectum case, the corresponding values were 56.2%, 95.1%, and 92.4%. We suggest that dose verification should be performed using the following three metrics simultaneously: DD, DTA, and the gamma index.

  18. Cognitive Performance Scores for the Pediatric Automated Neuropsychological Assessment Metrics in Childhood-Onset Systemic Lupus Erythematosus.

    PubMed

    Vega-Fernandez, Patricia; Vanderburgh White, Shana; Zelko, Frank; Ruth, Natasha M; Levy, Deborah M; Muscal, Eyal; Klein-Gitelman, Marisa S; Huber, Adam M; Tucker, Lori B; Roebuck-Spencer, Tresa; Ying, Jun; Brunner, Hermine I

    2015-08-01

    To develop and initially validate a global cognitive performance score (CPS) for the Pediatric Automated Neuropsychological Assessment Metrics (PedANAM) to serve as a screening tool of cognition in childhood lupus. Patients (n = 166) completed the 9 subtests of the PedANAM battery, each of which provides 3 principal performance parameters (accuracy, mean reaction time for correct responses, and throughput). Cognitive ability was measured by formal neurocognitive testing or estimated by the Pediatric Perceived Cognitive Function Questionnaire-43 to determine the presence or absence of neurocognitive dysfunction (NCD). A subset of the data was used to develop 4 candidate PedANAM-CPS indices with supervised or unsupervised statistical approaches: PedANAM-CPS_UWA, i.e., unweighted averages of the accuracy scores of all PedANAM subtests; PedANAM-CPS_PCA, i.e., accuracy scores of all PedANAM subtests weighted through principal components analysis; PedANAM-CPS_logit, i.e., algorithm derived from logistic models to estimate NCD status based on the accuracy scores of all of the PedANAM subtests; and PedANAM-CPS_multiscore, i.e., algorithm derived from logistic models to estimate NCD status based on select PedANAM performance parameters. PedANAM-CPS candidates were validated using the remaining data. PedANAM-CPS indices were moderately correlated with each other (|r| > 0.65). All of the PedANAM-CPS indices discriminated children by NCD status across data sets (P < 0.036). The PedANAM-CPS_multiscore had the highest area under the receiver operating characteristic curve (AUC) across all data sets for identifying NCD status (AUC > 0.74), followed by the PedANAM-CPS_logit, the PedANAM-CPS_PCA, and the PedANAM-CPS_UWA, respectively. Based on preliminary validation and considering ease of use, the PedANAM-CPS_multiscore and the PedANAM-CPS_PCA appear to be best suited as global measures of PedANAM performance. © 2015, American College of Rheumatology.

  19. Improving average ranking precision in user searches for biomedical research datasets

    PubMed Central

    Gobeill, Julien; Gaudinat, Arnaud; Vachon, Thérèse; Ruch, Patrick

    2017-01-01

    Availability of research datasets is a keystone of health and life science study reproducibility and scientific progress. Due to the heterogeneity and complexity of these data, a main challenge to be overcome by research data management systems is to provide users with the best answers for their search queries. In the context of the 2016 bioCADDIE Dataset Retrieval Challenge, we investigate a novel ranking pipeline to improve the search of datasets used in biomedical experiments. Our system comprises a query expansion model based on word embeddings, a similarity measure algorithm that takes into consideration the relevance of the query terms, and a dataset categorization method that boosts the rank of datasets matching query constraints. The system was evaluated using a corpus with 800k datasets and 21 annotated user queries, and provided competitive results when compared to the other challenge participants. In the official run, it achieved the highest infAP, being +22.3% higher than the median infAP of the participants' best submissions. Overall, it ranks in the top 2 when an aggregated metric using the best official measures per participant is considered. The query expansion method showed a positive impact on the system's performance, increasing our baseline by up to +5.0% and +3.4% for the infAP and infNDCG metrics, respectively. The similarity measure algorithm showed robust performance in different training conditions, with small performance variations compared to the Divergence from Randomness framework. Finally, the result categorization did not have a significant impact on the system's performance. We believe that our solution could be used to enhance biomedical dataset management systems. The use of data-driven expansion methods, such as those based on word embeddings, could be an alternative to the complexity of biomedical terminologies. Nevertheless, due to the limited size of the assessment set, further experiments need to be performed to draw conclusive results. Database URL: https://biocaddie.org/benchmark-data PMID:29220475
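
    Word-embedding query expansion of the kind described above can be sketched with gensim. The vector file name is hypothetical; any pretrained word2vec-format vectors could be substituted, and this is not the system submitted to the challenge.

    ```python
    from gensim.models import KeyedVectors

    # Hypothetical path to pretrained biomedical word vectors in word2vec binary format.
    vectors = KeyedVectors.load_word2vec_format("biomed_vectors.bin", binary=True)

    def expand_query(terms, topn=3):
        """Append the nearest-neighbour terms of each query word to the original query."""
        expanded = list(terms)
        for term in terms:
            if term in vectors:
                expanded += [w for w, _ in vectors.most_similar(term, topn=topn)]
        return expanded

    print(expand_query(["glioblastoma", "methylation"]))
    ```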

  20. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, appearance metric feature and kinematics feature, is considered. A system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected from the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated as the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image to produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through the dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
