Science.gov

Sample records for accuracy selectivity robustness

  1. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This is because most of the methods proposed in the literature are tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation of the accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets. In addition, the latter of the two ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are presented.
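
    The local methods evaluated here share a common skeleton: build a per-disparity matching-cost volume, aggregate (filter) the costs over a support window, and pick the disparity with the lowest aggregated cost. The following minimal sketch shows that skeleton with a sum-of-absolute-differences cost and box-filter aggregation; it illustrates the general technique only, not the improved cross-based or cost volume filtering algorithms of the paper.

    ```python
    # Cost-volume stereo sketch: SAD matching cost, box-filter aggregation,
    # winner-takes-all disparity selection.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_disparity(left, right, max_disp=16, radius=3):
        """Disparity map for two rectified grayscale images."""
        h, w = left.shape
        cost = np.full((max_disp, h, w), np.inf)
        for d in range(max_disp):
            diff = np.abs(left[:, d:] - right[:, :w - d])               # per-pixel cost at disparity d
            cost[d, :, d:] = uniform_filter(diff, size=2 * radius + 1)  # window aggregation
        return cost.argmin(axis=0)                                      # winner-takes-all

    # Toy check: a texture shifted by 5 pixels should give disparity 5.
    rng = np.random.default_rng(0)
    right = rng.random((64, 96))
    left = np.roll(right, 5, axis=1)
    print(np.median(sad_disparity(left, right)))                        # -> 5.0
    ```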

  2. Accuracy vs. Robustness: Bi-criteria Optimized Ensemble of Metamodels

    DTIC Science & Technology

    2014-12-01

    Kriging, Support Vector Regression and Radial Basis Function), where uncertainties are modeled for evaluating robustness. Twenty-eight functions from … optimized ensemble framework to optimally identify the contributions from each metamodel (Kriging, Support Vector Regression and Radial Basis Function … motivation, a bi-criteria (accuracy and robustness) ensemble optimization framework of three well-known metamodel techniques, namely Kriging (Matheron 1960 …

  3. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  4. Estimation and Accuracy after Model Selection

    PubMed Central

    Efron, Bradley

    2013-01-01

    Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the Cp criterion, and a Lasso-based estimation problem. PMID:25346558
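
    The method has two concrete pieces: bagging the selection-based estimator (averaging it over bootstrap replicates) and a formula for the standard error of that smoothed estimator, computed from the covariance between the bootstrap resampling counts and the replicate estimates. The sketch below applies both to a toy version of the polynomial-degree example; the data, the known error variance and the Cp rule are illustrative stand-ins, and the infinitesimal-jackknife formula is written from the paper's description rather than copied from it.

    ```python
    # Bootstrap smoothing (bagging) of a Cp-selection estimator, with an
    # infinitesimal-jackknife standard error for the smoothed estimate.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40
    x = np.linspace(-2, 2, n)
    y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

    def t(idx, x0=1.5, sigma2=1.0):
        """Fit degrees 1..3 on the bootstrap sample idx, select by Cp, predict at x0."""
        best, best_cp = None, np.inf
        for deg in (1, 2, 3):
            coef = np.polyfit(x[idx], y[idx], deg)
            rss = np.sum((y[idx] - np.polyval(coef, x[idx])) ** 2)
            cp = rss + 2 * (deg + 1) * sigma2            # Mallows' Cp with known sigma2
            if cp < best_cp:
                best_cp, best = cp, coef
        return np.polyval(best, x0)

    B = 2000
    counts = np.zeros((B, n))                            # N_ij: copies of obs j in replicate i
    stats = np.zeros(B)
    for i in range(B):
        idx = rng.integers(0, n, n)
        counts[i] = np.bincount(idx, minlength=n)
        stats[i] = t(idx)

    smoothed = stats.mean()                              # bagged (smoothed) estimate
    cov_j = ((counts - counts.mean(0)) * (stats - stats.mean())[:, None]).mean(0)
    se = np.sqrt(np.sum(cov_j ** 2))                     # accuracy of the smoothed estimator
    print(f"smoothed estimate {smoothed:.3f} +/- {se:.3f}")
    ```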

  5. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are [Formula: see text] and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression methods, underscoring the importance of developing and applying robust penalized regression methods.
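
    The loss at the heart of the abstract is l_gamma(r) = 1 - exp(-r^2/gamma): for small residuals it behaves like r^2/gamma, while any large residual contributes at most 1, which bounds the influence of outliers. A minimal unpenalized sketch of that mechanism follows; the paper's penalty and the tuning of gamma are omitted, and the data and gamma = 1 are assumptions.

    ```python
    # Regression with the exponential squared loss l(r) = 1 - exp(-r^2/gamma):
    # gross outliers are downweighted, unlike in ordinary least squares.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 100
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
    y[:5] += 15                                     # gross outliers in the response

    def exp_sq_loss(beta, gamma=1.0):
        r = y - X @ beta
        return np.sum(1.0 - np.exp(-r**2 / gamma))

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    beta_rob = minimize(exp_sq_loss, beta_ols, method="Nelder-Mead").x
    print("OLS   :", np.round(beta_ols, 2))         # intercept pulled up by the outliers
    print("robust:", np.round(beta_rob, 2))         # close to the true (1, 2)
    ```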

  6. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
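
    In the ridge-regression setup the abstract builds on, marker effects are estimated on training individuals and the breeding values of test individuals are predicted from their genotypes; accuracy is the correlation between predicted and true breeding values. A simulated toy version follows (all sizes, the shrinkage parameter and the genetic architecture are assumptions; the paper's closed-form accuracy and proxies are not reproduced):

    ```python
    # Ridge-regression genomic prediction and its accuracy
    # (correlation between predicted and true breeding values).
    import numpy as np

    rng = np.random.default_rng(3)
    n_train, n_test, p, n_qtl = 300, 100, 1000, 20
    M = rng.binomial(2, 0.3, size=(n_train + n_test, p)).astype(float)   # SNP genotypes 0/1/2
    effects = np.zeros(p)
    effects[rng.choice(p, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
    g = M @ effects                                      # true breeding values
    y = g + rng.normal(0, g.std(), g.size)               # phenotypes, heritability 0.5

    Xtr, Xte = M[:n_train], M[n_train:]
    lam = 10.0                                           # assumed shrinkage parameter
    beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p),
                           Xtr.T @ (y[:n_train] - y[:n_train].mean()))
    accuracy = np.corrcoef(Xte @ beta, g[n_train:])[0, 1]
    print(f"genomic selection accuracy r = {accuracy:.2f}")
    ```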

  7. Robust Object Tracking Using Valid Fragments Selection

    PubMed Central

    Li, Bo; Tian, Peng; Luo, Gang

    2016-01-01

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for local fragments, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios. PMID:27430036

  8. Imputation accuracy is robust to cattle reference genome updates.

    PubMed

    Milanesi, M; Vicario, D; Stella, A; Valentini, A; Ajmone-Marsan, P; Biffani, S; Biscarini, F; Jansen, G; Nicolazzi, E L

    2015-02-01

    Genotype imputation is routinely applied in a large number of cattle breeds. Imputation has become a necessity due to the large number of SNP arrays with variable density (currently, from 2900 to 777,962 SNPs). Although many authors have studied the effect of different statistical methods on imputation accuracy, the impact of a (likely) change in the reference genome assembly on imputation from lower to higher density has not been determined so far. In this work, 1021 Italian Simmental SNP genotypes were remapped on the three most recent reference genome assemblies. Four imputation methods were used to assess the impact of an update in the reference genome. As expected, the four methods behaved differently, with large differences in terms of accuracy. Updating SNP coordinates on the three tested cattle reference genome assemblies produced only slight variation in imputation results within each method.

  9. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks.

    PubMed

    Ben-Tabou de-Leon, Smadar

    2016-01-01

    Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt/β-catenin signaling pathway patterns the primary embryonic axis while the BMP signaling pathway patterns the secondary embryonic axis, in the sea urchin embryo and across Bilateria. Interestingly, in the sea urchin, in both cases the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop which locks down the pigment specification state. I propose that the use of compound positive feedback circuitry gives the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change.

  10. Interspecies translation of disease networks increases robustness and predictive accuracy.

    PubMed

    Anvar, Seyed Yahya; Tucker, Allan; Vinciotti, Veronica; Venema, Andrea; van Ommen, Gert-Jan B; van der Maarel, Silvere M; Raz, Vered; 't Hoen, Peter A C

    2011-11-01

    Gene regulatory networks give important insights into the mechanisms underlying physiology and pathophysiology. The derivation of gene regulatory networks from high-throughput expression data via machine learning strategies is problematic as the reliability of these models is often compromised by limited and highly variable samples, heterogeneity in transcript isoforms, noise, and other artifacts. Here, we develop a novel algorithm, dubbed Dandelion, in which we construct and train intraspecies Bayesian networks that are translated and assessed on independent test sets from other species in a reiterative procedure. The interspecies disease networks are subjected to multiple layers of analysis and evaluation, leading to the identification of the most consistent relationships within the network structure. In this study, we demonstrate the performance of our algorithms on datasets from animal models of oculopharyngeal muscular dystrophy (OPMD) and patient materials. We show that the interspecies network of genes coding for the proteasome provides highly accurate predictions on gene expression levels and disease phenotype. Moreover, the cross-species translation increases the stability and robustness of these networks. Unlike existing modeling approaches, our algorithms do not require assumptions on the notoriously difficult one-to-one mapping of protein orthologues or alternative transcripts and can deal with missing data. We show that the identified key components of the OPMD disease network can be confirmed in an unseen and independent disease model. This study presents a state-of-the-art strategy in constructing interspecies disease networks that provide crucial information on regulatory relationships among genes, leading to better understanding of the disease molecular mechanisms.

  11. Accuracy and robustness of Kinect pose estimation in the context of coaching of elderly population.

    PubMed

    Obdrzálek, Stepán; Kurillo, Gregorij; Ofli, Ferda; Bajcsy, Ruzena; Seto, Edmund; Jimison, Holly; Pavel, Michael

    2012-01-01

    The Microsoft Kinect camera is becoming increasingly popular in many areas aside from entertainment, including human activity monitoring and rehabilitation. Many people, however, fail to consider the reliability and accuracy of the Kinect human pose estimation when they depend on it as a measuring system. In this paper we compare the Kinect pose estimation (skeletonization) with more established techniques for pose estimation from motion capture data, examining the accuracy of joint localization and the robustness of pose estimation with respect to orientation and occlusions. We have evaluated six physical exercises aimed at coaching of the elderly population. Experimental results present pose estimation accuracy rates and corresponding error bounds for the Kinect system.

  12. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has shown to be successful in tracking previously unknown objects. However, there are two important factors which lead to the drift problem in online tracking: one is how to select exactly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive samples selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
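
    The spectral residual detector referenced in the abstract (Hou & Zhang, CVPR 2007) computes saliency from the difference between an image's log amplitude spectrum and a locally averaged version of it, keeping the phase. A compact sketch of that detector alone (the tracker's boosting and sample-selection machinery is not shown; the test image is synthetic):

    ```python
    # Spectral-residual saliency: salient regions pop out as peaks in the map.
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(img):
        f = np.fft.fft2(img)
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)
        residual = log_amp - uniform_filter(log_amp, size=3)    # the spectral residual
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return gaussian_filter(sal, sigma=2.5)

    # A bright patch on weak background texture should dominate the map.
    img = np.random.default_rng(4).random((128, 128)) * 0.2
    img[50:70, 60:80] += 1.0
    sal = spectral_residual_saliency(img)
    print(np.unravel_index(sal.argmax(), sal.shape))            # lands in/near the patch
    ```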

  13. Turning science on robust cattle into improved genetic selection decisions.

    PubMed

    Amer, P R

    2012-04-01

    More robust cattle have the potential to increase farm profitability, improve animal welfare, reduce the contribution of ruminant livestock to greenhouse gas emissions and decrease the risk of food shortages in the face of increased variability in the farm environment. Breeding is a powerful tool for changing the robustness of cattle; however, insufficient recording of breeding goal traits and selection of animals at younger ages tend to favour genetic change in productivity traits relative to robustness traits. This paper has extended a previously proposed theory of artificial evolution to demonstrate, using deterministic simulation, how the choice of breeding scheme design can be used as a tool to manipulate the direction of genetic progress, while the breeding goal remains focussed on the factors motivating individual farm decision makers. Particular focus was placed on the transition from progeny testing or mass selection to genomic selection breeding strategies. Transition to genomic selection from a breeding strategy where candidates are selected before progeny records become available was shown to be highly likely to favour genetic progress in robustness traits relative to productivity traits. This was shown even with modest numbers of animals available for training and when heritability for robustness traits was only slightly lower than that for productivity traits. When transitioning from progeny testing to a genomic selection strategy without progeny testing, it was shown that there is a significant risk that robustness traits could become less influential in selection relative to productivity traits. Augmentations of training populations using genotyped cows and support for industry-wide improvements in phenotypic recording of robustness traits were put forward as investment opportunities for stakeholders wishing to facilitate the application of science on robust cattle into improved genetic selection schemes.

  14. Robustness of single-electron pumps at sub-ppm current accuracy level

    NASA Astrophysics Data System (ADS)

    Stein, F.; Scherer, H.; Gerster, T.; Behr, R.; Götz, M.; Pesel, E.; Leicht, C.; Ubbelohde, N.; Weimann, T.; Pierz, K.; Schumacher, H. W.; Hohls, F.

    2017-02-01

    We report on characterizations of single-electron pumps at the highest accuracy level, enabled by improvements of the small-current measurement technique. With these improvements a new accuracy record in measurements on single-electron pumps is demonstrated: a relative combined uncertainty of 0.16 µA·A⁻¹ was reached within less than 1 d of measurement time. Additionally, robustness tests of pump operation at the sub-ppm level revealed good stability of tunable-barrier single-electron pumps against variations in the operating parameters.
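
    For scale: an ideal single-electron pump driven at frequency f transfers one electron per cycle, so I = e·f, and the quoted 0.16 µA·A⁻¹ is a relative uncertainty of 1.6 × 10⁻⁷. A back-of-envelope check (the 545 MHz drive frequency below is an assumed, illustrative value, not a figure from the paper):

    ```python
    # Ideal pump current I = e*f and the absolute uncertainty implied by a
    # 0.16 uA/A relative combined uncertainty.
    e = 1.602176634e-19      # elementary charge in C (exact SI value)
    f = 545e6                # assumed drive frequency, Hz
    I = e * f
    u_rel = 0.16e-6          # 0.16 uA/A
    print(f"I = {I * 1e12:.2f} pA, u = {I * u_rel * 1e18:.1f} aA")   # ~87 pA, ~14 aA
    ```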

  15. Efficiency, Selectivity, and Robustness of Nucleocytoplasmic Transport

    PubMed Central

    Zilman, Anton; Di Talia, Stefano; Chait, Brian T; Rout, Michael P; Magnasco, Marcelo O

    2007-01-01

    All materials enter or exit the cell nucleus through nuclear pore complexes (NPCs), efficient transport devices that combine high selectivity and throughput. NPC-associated proteins containing phenylalanine–glycine repeats (FG nups) have large, flexible, unstructured proteinaceous regions, and line the NPC. A central feature of NPC-mediated transport is the binding of cargo-carrying soluble transport factors to the unstructured regions of FG nups. Here, we model the dynamics of nucleocytoplasmic transport as diffusion in an effective potential resulting from the interaction of the transport factors with the flexible FG nups, using a minimal number of assumptions consistent with the most well-established structural and functional properties of NPC transport. We discuss how specific binding of transport factors to the FG nups facilitates transport, and how this binding and competition between transport factors and other macromolecules for binding sites and space inside the NPC accounts for the high selectivity of transport. We also account for why transport is relatively insensitive to changes in the number and distribution of FG nups in the NPC, providing an explanation for recent experiments where up to half the total mass of the FG nups has been deleted without abolishing transport. Our results suggest strategies for the creation of artificial nanomolecular sorting devices. PMID:17630825

  16. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  17. Accuracy of genomic selection in European maize elite breeding populations.

    PubMed

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.

  18. Accuracy of genomic selection for BCWD resistance in rainbow trout

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonids. In this study, we aimed to (1) predict genomic breeding values (GEBV) by genotyping training (n=583) and validation samples (n=53) with a SNP50K chip; and (2) assess the accuracy of genomic selection (GS) for BCWD r...

  19. Selective Gammatone Envelope Feature for Robust Sound Event Recognition

    NASA Astrophysics Data System (ADS)

    Leng, Yi Ren; Tran, Huy Dat; Kitaoka, Norihide; Li, Haizhou

    Conventional features for Automatic Speech Recognition and Sound Event Recognition such as Mel-Frequency Cepstral Coefficients (MFCCs) have been shown to perform poorly in noisy conditions. We introduce an auditory feature based on the gammatone filterbank, the Selective Gammatone Envelope Feature (SGEF), for Robust Sound Event Recognition, where channel selection and the filterbank envelope are used to reduce the effect of noise for specific noise environments. In the experiments with Hidden Markov Model (HMM) recognizers, we shall show that our feature outperforms MFCCs significantly in four different noisy environments at various signal-to-noise ratios.

  20. Robustness and epistasis in mutation-selection models

    NASA Astrophysics Data System (ADS)

    Wolff, Andrea; Krug, Joachim

    2009-09-01

    We investigate the fitness advantage associated with the robustness of a phenotype against deleterious mutations using deterministic mutation-selection models of a quasispecies type equipped with a mesa-shaped fitness landscape. We obtain analytic results for the robustness effect which become exact in the limit of infinite sequence length. Thereby, we are able to clarify a seeming contradiction between recent rigorous work and an earlier heuristic treatment based on mapping to a Schrödinger equation. We exploit the quantum mechanical analogy to calculate a correction term for finite sequence lengths and verify our analytic results by numerical studies. In addition, we investigate the occurrence of an error threshold for a general class of epistatic landscapes and show that diminishing epistasis is a necessary but not sufficient condition for error threshold behaviour.
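
    The deterministic machinery behind the abstract can be made concrete with the standard error-class reduction: genotypes are grouped by Hamming distance from the master sequence, a mesa-shaped landscape assigns elevated fitness to all classes up to a cutoff, and the population mean fitness is the leading eigenvalue of the mutation-selection matrix. The sketch below scans the per-site mutation rate and shows the error-threshold behaviour numerically; the sequence length, mesa width and fitness values are illustrative, not the paper's.

    ```python
    # Quasispecies mutation-selection balance on a mesa fitness landscape,
    # reduced to Hamming error classes; the leading eigenvalue is the mean fitness.
    import numpy as np
    from math import comb

    L, k0, w_mesa = 20, 2, 2.0            # sequence length, mesa width, mesa fitness

    def class_mutation_matrix(mu):
        """P[j, k] = probability that a class-k sequence mutates into class j."""
        P = np.zeros((L + 1, L + 1))
        for k in range(L + 1):
            for a in range(k + 1):                 # mutations back toward the master
                for b in range(L - k + 1):         # mutations away from the master
                    P[k - a + b, k] += (comb(k, a) * mu**a * (1 - mu)**(k - a)
                                        * comb(L - k, b) * mu**b * (1 - mu)**(L - k - b))
        return P

    w = np.where(np.arange(L + 1) <= k0, w_mesa, 1.0)    # mesa-shaped fitness profile
    for mu in (0.01, 0.05, 0.10):
        mean_w = np.linalg.eigvals(class_mutation_matrix(mu) @ np.diag(w)).real.max()
        print(f"mu = {mu:.2f}: mean fitness = {mean_w:.3f}")   # drops toward 1 past threshold
    ```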

  21. Robust Statistical Label Fusion through Consensus Level, Labeler Accuracy and Truth Estimation (COLLATE)

    PubMed Central

    Asman, Andrew J.; Landman, Bennett A.

    2011-01-01

    Segmentation and delineation of structures of interest in medical images is paramount to quantifying and characterizing structural, morphological, and functional correlations with clinically relevant conditions. The established gold standard for performing segmentation has been manual voxel-by-voxel labeling by a neuroanatomist expert. This process can be extremely time consuming, resource intensive and fraught with high inter-observer variability. Hence, studies involving characterizations of novel structures or appearances have been limited in scope (numbers of subjects), scale (extent of regions assessed), and statistical power. Statistical methods to fuse data sets from several different sources (e.g., multiple human observers) have been proposed to simultaneously estimate both rater performance and the ground truth labels. However, with empirical datasets, statistical fusion has been observed to result in visually inconsistent findings. So, despite the ease and elegance of a statistical approach, single observers and/or direct voting are often used in practice. Hence, rater performance is not systematically quantified and exploited during label estimation. To date, statistical fusion methods have relied on characterizations of rater performance that do not intrinsically include spatially varying models of rater performance. Herein, we present a novel, robust statistical label fusion algorithm to estimate and account for spatially varying performance. This algorithm, COnsensus Level, Labeler Accuracy and Truth Estimation (COLLATE), is based on the simple idea that some regions of an image are difficult to label (e.g., confusion regions: boundaries or low contrast areas) while other regions are intrinsically obvious (e.g., consensus regions: centers of large regions or high contrast edges). Unlike its predecessors, COLLATE estimates the consensus level of each voxel and estimates differing models of observer behavior in each region. We show that COLLATE provides

  22. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

    In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling, and finally decision making. We also study the impact of geometrical information on the BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: we use capital letters X to denote scalar random variables and boldface capital letters X to denote vector random variables, with the corresponding small letters x and x denoting their realisations. We write X ∼ p_X(x), or simply X ∼ p(x), to indicate that a random variable X is distributed according to p_X(x). N(μ, σ²_X) stands for the Gaussian distribution with mean μ and variance σ²_X. B(L, P_b) denotes the binomial distribution with sequence length L and probability of success P_b. ||·|| denotes the Euclidean vector norm, Q(·) stands for the Q-function, D(·||·) denotes the divergence, and E{·} denotes the expectation.

  23. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not yet broadly accepted within the medical community. This is due to the fact that contrast agents are typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.
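
    The core of the MWE idea is that correlation planes from several consecutive frame pairs share the same displacement peak while their noise is uncorrelated, so averaging the planes before peak detection raises the effective signal-to-noise ratio. A pure-numpy toy demonstration (synthetic speckle and circular FFT correlation; not the authors' echoPIV pipeline):

    ```python
    # Moving-window ensemble (MWE) correlation: average correlation planes
    # over several frame pairs before locating the displacement peak.
    import numpy as np

    def corr_plane(a, b):
        """Circular cross-correlation; the peak offset is b's shift relative to a."""
        A, B = np.fft.fft2(a - a.mean()), np.fft.fft2(b - b.mean())
        return np.fft.fftshift(np.real(np.fft.ifft2(np.conj(A) * B)))

    rng = np.random.default_rng(5)
    tex, shift, sigma = rng.random((64, 64)), 3, 1.8      # heavy noise vs. weak texture
    pairs = [(tex + rng.normal(0, sigma, (64, 64)),
              np.roll(tex, shift, axis=0) + rng.normal(0, sigma, (64, 64)))
             for _ in range(9)]

    single = corr_plane(*pairs[0])
    ensemble = np.mean([corr_plane(a, b) for a, b in pairs], axis=0)
    for name, c in (("single pair", single), ("MWE of 9  ", ensemble)):
        peak = np.array(np.unravel_index(c.argmax(), c.shape)) - 32
        print(f"{name}: estimated shift = {peak}")        # MWE typically recovers [3 0]
    ```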

  24. The signatures of selection for translational accuracy in plant genes.

    PubMed

    Porceddu, Andrea; Zenoni, Sara; Camiolo, Salvatore

    2013-01-01

    Little is known about the natural selection of synonymous codons within the coding sequences of plant genes. We analyzed the distribution of synonymous codons within plant coding sequences and found that preferred codons tend to encode the more conserved and functionally important residues of plant proteins. This was consistent among several synonymous codon families and applied to genes with different expression profiles and functions. Most of the randomly chosen alternative sets of codons scored weaker associations than the actual sets of preferred codons, suggesting that codon position within plant genes and codon usage bias have coevolved to maximize translational accuracy. All these findings are consistent with the mistranslation-induced protein misfolding theory, which predicts the natural selection of highly preferred codons more frequently at sites where translation errors could compromise protein folding or functionality. Our results will provide important insights for future studies of protein folding, molecular evolution, and transgene design for optimal expression.

  25. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    PubMed

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy of predicting genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. Random regression best linear unbiased prediction was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy of predicting genomic breeding values in unidirectionally selected populations. In contrast, estimating marker effects based on bidirectionally selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.

  26. Robust nonlinear variable selective control for networked systems

    NASA Astrophysics Data System (ADS)

    Rahmani, Behrooz

    2016-10-01

    This paper is concerned with the networked control of a class of uncertain nonlinear systems. In this way, Takagi-Sugeno (T-S) fuzzy modelling is used to extend the previously proposed variable selective control (VSC) methodology to nonlinear systems. This extension is based upon the decomposition of the nonlinear system to a set of fuzzy-blended locally linearised subsystems and further application of the VSC methodology to each subsystem. To increase the applicability of the T-S approach for uncertain nonlinear networked control systems, this study considers the asynchronous premise variables in the plant and the controller, and then introduces a robust stability analysis and control synthesis. The resulting optimal switching-fuzzy controller provides a minimum guaranteed cost on an H2 performance index. Simulation studies on three nonlinear benchmark problems demonstrate the effectiveness of the proposed method.

  27. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
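
    The selection-by-relative-entropy idea can be illustrated with fixed-order chains: estimate a transition matrix from each sample, compute pairwise divergences between the estimates, and keep the mutually closest majority cluster. A toy sketch follows (first-order chains instead of the paper's variable length Markov chains; the laws, sample sizes and the row-averaged KL dissimilarity are illustrative choices):

    ```python
    # Robust sample selection for Markov chains: the majority cluster of samples
    # with mutually small divergence identifies the law Q to be retrieved.
    import numpy as np

    rng = np.random.default_rng(6)

    def sample_chain(P, n=5000):
        s, out = 0, []
        for _ in range(n):
            s = rng.choice(len(P), p=P[s])
            out.append(s)
        return np.array(out)

    def empirical_P(seq, k=2):
        C = np.ones((k, k))                         # Laplace-smoothed transition counts
        np.add.at(C, (seq[:-1], seq[1:]), 1)
        return C / C.sum(1, keepdims=True)

    def divergence(P, Q):
        return np.mean(np.sum(P * np.log(P / Q), axis=1))   # row-averaged KL

    Q_law = np.array([[0.9, 0.1], [0.2, 0.8]])      # the law we want to recover
    contam = np.array([[0.5, 0.5], [0.5, 0.5]])     # contaminating law
    samples = ([sample_chain(Q_law) for _ in range(4)]
               + [sample_chain(contam) for _ in range(2)])

    Ps = [empirical_P(s) for s in samples]
    D = np.array([[divergence(Pi, Pj) for Pj in Ps] for Pi in Ps])
    keep = np.argsort(D.sum(1))[:4]                 # samples closest to all the others
    print("selected samples:", sorted(keep))        # -> [0, 1, 2, 3]
    ```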

  28. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.

  29. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space systems designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design. First, however, he must be sure he understands the definition and goals of robustness. This paper deals with these issues as well as the need for a requirement for robustness.

  30. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Dittrich, André; Weinmann, Martin; Hinz, Stefan

    2017-04-01

    In photogrammetry, remote sensing, computer vision and robotics, a topic of major interest is the automatic analysis of 3D point cloud data. This task often relies on the use of geometric features, amongst which particularly those derived from the eigenvalues of the 3D structure tensor (e.g. the three dimensionality features of linearity, planarity and sphericity) have proven to be descriptive and are therefore commonly used for classification tasks. Although these geometric features are now considered standard, very little attention has been paid to their accuracy and robustness. In this paper, we hence focus on the influence of discretization and noise on the most commonly used geometric features. More specifically, we investigate the accuracy and robustness of the eigenvalues of the 3D structure tensor and also of the features derived from these eigenvalues. Thereby, we provide both analytical and numerical considerations which clearly reveal that certain features are more susceptible to discretization and noise whereas others are more robust.
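
    The features under study are direct functions of the local covariance eigenvalues: with l1 >= l2 >= l3, linearity is (l1 - l2)/l1, planarity is (l2 - l3)/l1 and sphericity is l3/l1. A small sketch computing them for three synthetic neighbourhoods (the point counts and the 0.01 noise level are arbitrary test values):

    ```python
    # Dimensionality features from the eigenvalues of the 3D structure tensor
    # (local covariance matrix) of a point neighbourhood.
    import numpy as np

    def dimensionality_features(neighbors):
        """neighbors: (n, 3) array of points around the query point."""
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1]
        return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

    rng = np.random.default_rng(7)
    noise = lambda: rng.normal(0, 0.01, (200, 3))        # mild sensor noise
    line = np.outer(rng.normal(size=200), [1.0, 0, 0]) + noise()
    plane = np.column_stack([rng.normal(size=(200, 2)), np.zeros(200)]) + noise()
    blob = rng.normal(size=(200, 3))
    for name, pts in (("line", line), ("plane", plane), ("blob", blob)):
        lin, pla, sph = dimensionality_features(pts)
        print(f"{name:5s}  linearity={lin:.2f}  planarity={pla:.2f}  sphericity={sph:.2f}")
    ```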

  31. Robustness of chemometrics-based feature selection methods in early cancer detection and biomarker discovery.

    PubMed

    Lee, Hae Woo; Lawton, Carl; Na, Young Jeong; Yoon, Seongkyu

    2013-03-13

    In omics studies aimed at the early detection and diagnosis of cancer, bioinformatics tools play a significant role when analyzing high dimensional, complex datasets, as well as when identifying a small set of biomarkers. However, in many cases, there are ambiguities in the robustness and the consistency of the discovered biomarker sets, since the feature selection methods often lead to irreproducible results. To address this, both the stability and the classification power of several chemometrics-based feature selection algorithms were evaluated using the Monte Carlo sampling technique, aiming at finding the most suitable feature selection methods for early cancer detection and biomarker discovery. To this end, two datasets were analyzed, comprising MALDI-TOF-MS and LC/TOF-MS spectra measured on serum samples in order to diagnose ovarian cancer. Using these datasets, the stability and the classification power of multiple feature subsets found by different feature selection methods were quantified by varying either the number of selected features or the number of samples in the training set, with special emphasis placed on the property of stability. The results show that high consistency does not necessarily guarantee high predictive power. In addition, differences in stability, as well as agreement in feature lists between several feature selection methods, depend on several factors, such as the number of available samples, feature sizes, quality of the information in the dataset, etc. Among the tested methods, only the variable importance in projection (VIP)-based method shows complementary properties, providing both highly consistent and accurate subsets of features. In addition, successive projection analysis (SPA) was excellent with regard to maintaining high stability over a wide range of experimental conditions. The stability of several feature selection methods is highly variable, stressing the importance of making the proper choice among
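
    The Monte Carlo stability assessment boils down to: draw many random training subsets, run the feature selector on each, and score how much the selected feature lists overlap. A generic sketch with a univariate t-like filter standing in for the chemometric selectors (VIP, SPA, ...) compared in the paper; the sizes, subsampling fraction and the Jaccard index are illustrative choices:

    ```python
    # Feature-selection stability via Monte Carlo subsampling, scored by the
    # mean pairwise Jaccard overlap of the selected feature subsets.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(8)
    n, p, k = 80, 500, 10
    X = rng.normal(size=(n, p))
    y = np.repeat([0, 1], n // 2)
    X[y == 1, :5] += 1.0                           # 5 truly informative features

    def select_top_k(idx):
        """Toy selector: top-k features by a two-class t-like score."""
        d = X[idx][y[idx] == 1].mean(0) - X[idx][y[idx] == 0].mean(0)
        return set(np.argsort(-np.abs(d / (X[idx].std(0) + 1e-9)))[:k])

    subsets = [select_top_k(rng.choice(n, int(0.7 * n), replace=False))
               for _ in range(30)]
    jac = [len(a & b) / len(a | b) for a, b in combinations(subsets, 2)]
    print(f"mean pairwise Jaccard stability = {np.mean(jac):.2f}")
    ```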

  32. Selection for distinct gene expression properties favours the evolution of mutational robustness in gene regulatory networks.

    PubMed

    Espinosa-Soto, C

    2016-11-01

    Mutational robustness is a genotype's tendency to keep a phenotypic trait with little and few changes in the face of mutations. Mutational robustness is both ubiquitous and evolutionarily important as it affects in different ways the probability that new phenotypic variation arises. Understanding the origins of robustness is especially relevant for systems of development that are phylogenetically widespread and that construct phenotypic traits with a strong impact on fitness. Gene regulatory networks are examples of this class of systems. They comprise sets of genes that, through cross-regulation, build the gene activity patterns that define cellular responses, different tissues or distinct cell types. Several empirical observations, such as a greater robustness of wild-type phenotypes, suggest that stabilizing selection underlies the evolution of mutational robustness. However, the role of selection in the evolution of robustness is still under debate. Computer simulations of the dynamics and evolution of gene regulatory networks have shown that selection for any gene activity pattern that is steady and self-sustaining is sufficient to promote the evolution of mutational robustness. Here, I generalize this scenario using a computational model to show that selection for different aspects of a gene activity phenotype increases mutational robustness. Mutational robustness evolves even when selection favours properties that conflict with the stationarity of a gene activity pattern. The results that I present support an important role for stabilizing selection in the evolution of robustness in gene regulatory networks.

  33. Joint feature-sample selection and robust diagnosis of Parkinson's disease from MRI data.

    PubMed

    Adeli, Ehsan; Shi, Feng; An, Le; Wee, Chong-Yaw; Wu, Guorong; Wang, Tao; Shen, Dinggang

    2016-11-01

    Parkinson's disease (PD) is an overwhelming neurodegenerative disorder caused by deterioration of a neurotransmitter, known as dopamine. Lack of this chemical messenger impairs several brain regions and yields various motor and non-motor symptoms. Incidence of PD is predicted to double in the next two decades, which urges more research to focus on its early diagnosis and treatment. In this paper, we propose an approach to diagnose PD using magnetic resonance imaging (MRI) data. Specifically, we first introduce a joint feature-sample selection (JFSS) method for selecting an optimal subset of samples and features, to learn a reliable diagnosis model. The proposed JFSS model effectively discards poor samples and irrelevant features. As a result, the selected features play an important role in PD characterization, which will help identify the most relevant and critical imaging biomarkers for PD. Then, a robust classification framework is proposed to simultaneously de-noise the selected subset of features and samples, and learn a classification model. Our model can also de-noise testing samples based on the cleaned training data. Unlike many previous works that perform de-noising in an unsupervised manner, we perform supervised de-noising for both training and testing data, thus boosting the diagnostic accuracy. Experimental results on both synthetic and publicly available PD datasets show promising results. To evaluate the proposed method, we use the popular Parkinson's progression markers initiative (PPMI) database. Our results indicate that the proposed method can differentiate between PD and normal control (NC), and outperforms the competing methods by a relatively large margin. It is noteworthy to mention that our proposed framework can also be used for diagnosis of other brain disorders. To show this, we have also conducted experiments on the widely-used ADNI database. The obtained results indicate that our proposed method can identify the imaging biomarkers and

  34. DNA Template Dependent Accuracy Variation of Nucleotide Selection in Transcription

    PubMed Central

    Mellenius, Harriet; Ehrenberg, Måns

    2015-01-01

    It has been commonly assumed that the effect of erroneous transcription of DNA genes into messenger RNAs on peptide sequence errors is masked by much more frequent errors of mRNA translation to protein. We present a theoretical model of transcriptional accuracy. It uses experimentally estimated standard free energies of double-stranded DNA and RNA/DNA hybrids and predicts a DNA template dependent transcriptional accuracy variation spanning several orders of magnitude. The model also identifies high-error as well as high-accuracy transcription motifs. The source of the large accuracy span is the context dependent variation of the stacking free energy of pairs of correct and incorrect base pairs in the ever moving transcription bubble. Our model predictions have direct experimental support from recent single molecule based identifications of transcriptional errors in the C. elegans transcriptome. Our conclusions challenge the general view that amino acid substitution errors in proteins are mainly caused by translational errors. They suggest instead that transcriptional error hotspots are the dominating source of peptide sequence errors in some DNA template contexts, while mRNA translation is the major cause of protein errors in other contexts. PMID:25799551
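
    The orders-of-magnitude claim follows from equilibrium thermodynamics: if an incorrect nucleotide costs an extra free energy ΔΔG (base pairing plus context-dependent stacking) relative to the correct one, the selection step discriminates by a Boltzmann factor exp(-ΔΔG/RT), so context shifts of a few kcal/mol move the error frequency by orders of magnitude. A one-liner making that concrete (the ΔΔG values are illustrative, not the paper's estimates):

    ```python
    # Boltzmann discrimination: error ratio ~ exp(-ddG / RT).
    import numpy as np

    RT = 0.593                                  # kcal/mol at 25 C
    for ddG in (2.0, 4.0, 6.0):                 # assumed mismatch penalties, kcal/mol
        print(f"ddG = {ddG:.0f} kcal/mol -> error ratio ~ {np.exp(-ddG / RT):.1e}")
    ```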

  35. Tissue probability map constrained CLASSIC for increased accuracy and robustness in serial image segmentation

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Shen, Dinggang; Wong, Stephen T. C.

    2009-02-01

    Traditional fuzzy clustering algorithms have been successfully applied in MR image segmentation for quantitative morphological analysis. However, the clustering results might be biased due to the variability of tissue intensities and anatomical structures. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation for longitudinal study of human brains. The tissue probability maps consist of segmentation priors obtained from a population and reflect the probability of different tissue types. More accurate image segmentation can be achieved by using these segmentation priors in the clustering algorithm. Experimental results on both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using the new serial image segmentation algorithm in the framework of CLASSIC show more accurate and robust longitudinal measures.

  36. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism

    PubMed Central

    Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng

    2016-01-01

    To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clear, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may not all be relevant and may be noisy. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the tracking specific objects. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object rather than not, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. PMID:27575684

  37. Selecting fillers on emotional appearance improves lineup identification accuracy.

    PubMed

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  38. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2016-11-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed program is presented using recent extensions in robust optimization theory. We model the decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model which integrates a worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.

  39. Keyframe selection for robust pose estimation in laparoscopic videos

    NASA Astrophysics Data System (ADS)

    von Öhsen, Udo; Marcinczak, Jan Marek; Mármol Vélez, Andres F.; Grigat, Rolf-Rainer

    2012-02-01

    Motion estimation based on point correspondences between two views is a classic problem in computer vision. For laparoscopic video sequences, even state-of-the-art algorithms cannot generally guarantee stable motion estimation. Typically, a video from a laparoscopic surgery contains sequences where the surgeon barely moves the endoscope. Such restricted movement causes a small ratio between baseline and distance, leading to unstable estimation results. Exploiting the fact that the entire sequence is known a priori, we propose an algorithm for keyframe selection in a sequence of images. The key idea can be expressed as follows: if all combinations of frames in a sequence are scored, the optimal solution can be described as a weighted directed graph problem. We adapt the widely known Dijkstra's algorithm to find the best selection of frames. The framework for keyframe selection can be used universally to find the best combination of frames for any reliable scoring function. For instance, forward motion ensures the most accurate camera position estimation, whereas sideward motion is preferred in the sense of reconstruction. Based on the distribution and the disparity of point correspondences, we propose a scoring function which is capable of detecting poorly conditioned pairs of frames. We demonstrate the potential of the algorithm focusing on accurate camera positions. A robot system provides ground truth data. The environment in laparoscopic videos is reflected by an industrial endoscope and a phantom.
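
    The shortest-path formulation is straightforward to sketch: frames are nodes, every ordered pair (i, j) with i < j gets an edge weighted by a pairwise score, and Dijkstra's algorithm returns the optimal chain of keyframes from the first frame to the last. In the toy below the score simply penalises frame pairs whose baseline deviates from an ideal value, standing in for the paper's disparity- and distribution-based scoring function; camera positions are reduced to one dimension for brevity.

    ```python
    # Keyframe selection as a shortest path over the frame DAG (Dijkstra).
    import heapq

    def score(i, j, pos, ideal=2.0):
        """Lower is better; penalise too-small or too-large baselines."""
        return (abs(pos[j] - pos[i]) - ideal) ** 2 + 0.1   # constant favours short chains

    def select_keyframes(pos):
        n = len(pos)
        dist, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist.get(i, float("inf")):
                continue
            for j in range(i + 1, n):                      # edges only go forward in time
                nd = d + score(i, j, pos)
                if nd < dist.get(j, float("inf")):
                    dist[j], prev[j] = nd, i
                    heapq.heappush(heap, (nd, j))
        path, j = [n - 1], n - 1
        while j in prev:
            j = prev[j]
            path.append(j)
        return path[::-1]

    # Camera creeps, then moves: the near-static run should be skipped.
    print(select_keyframes([0.0, 0.1, 0.2, 0.3, 2.3, 4.4, 6.3]))   # -> [0, 4, 5, 6]
    ```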

  1. Accuracy, sensitivity and robustness of five different methods for the estimation of gait temporal parameters using a single inertial sensor mounted on the lower trunk.

    PubMed

    Trojaniello, Diana; Cereatti, Andrea; Della Croce, Ugo

    2014-09-01

    In the last decade, various methods for the estimation of gait events and temporal parameters from the acceleration signals of a single inertial measurement unit (IMU) mounted at waist level have been proposed. Despite the growing interest in such methodologies, a thorough comparative analysis of methods with regard to the number of extra and missed events, accuracy and robustness to IMU location is still missing in the literature. The aim of this work was to fill this gap. Five methods were tested on single-IMU data acquired from fourteen healthy subjects walking while being recorded by a stereo-photogrammetric system and two force platforms. The sensitivity in detecting initial and final contacts varied between 81% and 100% across methods, whereas the positive predictive values ranged between 94% and 100%. For all tested methods, stride and step time estimates were obtained; three of the selected methods also allowed estimation of stance, swing and double support time. Results showed that the accuracy in estimating step and stride durations was acceptable for all methods. Conversely, a statistical difference was found in the error in estimating stance, swing and double support time, due to the larger errors in the final contact determination. Except for one method, the IMU positioning on the lower trunk did not represent a critical factor for the estimation of gait temporal parameters. Results obtained in this study may not be applicable to pathologic gait.
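
    As a point of reference for the kind of processing these methods share, the sketch below detects step-related events in a synthetic trunk-acceleration signal by low-pass filtering and peak picking; it is a generic illustration, not a re-implementation of any of the five compared algorithms:

    ```python
    # Generic gait-event sketch: filter a (synthetic) vertical acceleration
    # signal, pick prominent peaks as candidate initial contacts, and derive
    # step times from the peak spacing. Parameters are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    fs = 100.0                                  # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    acc = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)  # ~1.8 steps/s

    # Low-pass filter to isolate the step-related component.
    b, a = butter(4, 5.0 / (fs / 2), btype="low")
    acc_f = filtfilt(b, a, acc)

    # Candidate initial contacts: prominent peaks at physiological spacing.
    ics, _ = find_peaks(acc_f, distance=int(0.4 * fs), prominence=0.5)
    step_times = np.diff(ics) / fs
    print("mean step time: %.3f s" % step_times.mean())
    ```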

  2. The accuracy of marker-assisted selection for quantitative traits within populations in linkage equilibrium.

    PubMed Central

    Ollivier, L

    1998-01-01

    Using the concept of conditional coancestry, given observed markers, an explicit expression of the accuracy of marker-based selection is derived in situations of linkage equilibrium between markers and quantitative trait loci (QTL), for the general case of full-sib families nested within half-sib families. Such a selection scheme is rather inaccurate for moderate values of family sizes and QTL variance, and the accuracies predicted for linkage disequilibrium can never be reached. The result is used to predict the accuracy of marker-assisted combined selection (MACS) and is shown to agree with previous MACS results obtained by simulation of a best linear unbiased prediction animal model. Low gains in accuracy are generally to be expected compared to standard combined selection. The maximum gain, assuming infinite family size and all QTLs marked, is about 50%. PMID:9539449

  3. Selecting Reliable and Robust Freshwater Macroalgae for Biomass Applications

    PubMed Central

    Lawton, Rebecca J.; de Nys, Rocky; Paul, Nicholas A.

    2013-01-01

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m−2 day−1), lowest ash content (3–8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO2 across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential

  4. Selecting reliable and robust freshwater macroalgae for biomass applications.

    PubMed

    Lawton, Rebecca J; de Nys, Rocky; Paul, Nicholas A

    2013-01-01

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential

  5. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  6. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    PORTER, REID B.; LOVELAND, ROHAN; ROSTEN, ED

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables related to the object's appearance (e.g., orientation and pose) vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance of the tracking system and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  7. New accuracy estimators for genomic selection with application in a cassava (Manihot esculenta) breeding program.

    PubMed

    Azevedo, C F; Resende, M D V; Silva, F F; Viana, J M S; Valente, M S F; Resende, M F R; Oliveira, E J

    2016-10-05

    Genomic selection is the main force driving applied breeding programs, and accuracy is the main measure for evaluating its efficiency. The traditional estimator (TE) of experimental accuracy is not fully adequate. This study proposes and evaluates the performance and efficiency of two new accuracy estimators, called the regularized estimator (RE) and the hybrid estimator (HE), which were applied to a practical cassava breeding program and also to simulated data. The simulation study considered two individual narrow-sense heritability levels and two genetic architectures for traits. TE, RE, and HE were compared under four validation procedures: without validation (WV), independent validation, and ten-fold jackknife validation allowing either different markers or the same markers selected in each cycle. RE produced accuracies that were closer to the parametric values, less biased and more precise than those of TE. HE proved to be very effective in the WV procedure. The estimators were applied to five traits evaluated in a cassava experiment, including 358 clones genotyped for 390 SNPs. Accuracies ranged from 0.67 to 1.12 with TE and from 0.22 to 0.51 with RE. These results indicated that TE overestimated the accuracy and led to one accuracy estimate (1.12) greater than one, which is outside the parameter space. Use of RE brought the accuracy back into the parameter space. Cassava breeding programs can be implemented more realistically using the new estimators proposed in this study, providing less risky practical inferences.
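
    How a traditional estimator can leave the parameter space is easy to reproduce. Assuming the TE takes its usual form - the correlation between genomic predictions and phenotypes divided by the square root of heritability - the toy simulation below shows the TE drifting past 1 when the plugged-in heritability is too low (the paper's RE and HE corrections are not reproduced here):

    ```python
    # Toy demonstration: TE = corr(GEBV, phenotype) / sqrt(h2). A too-low
    # assumed heritability inflates TE beyond 1. All values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(7)
    n, h2_true = 358, 0.6
    g = rng.normal(0, 1, n)                                     # true breeding values
    y = g + rng.normal(0, np.sqrt((1 - h2_true) / h2_true), n)  # phenotypes
    gebv = g + rng.normal(0, 0.3, n)                            # genomic predictions

    r_gy = np.corrcoef(gebv, y)[0, 1]
    for h2_assumed in (0.6, 0.2):
        print("assumed h2=%.1f  TE=%.2f" % (h2_assumed, r_gy / np.sqrt(h2_assumed)))
    print("true accuracy: %.2f" % np.corrcoef(gebv, g)[0, 1])
    ```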

  8. Some scale-free networks could be robust under selective node attacks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Huang, Dan; Li, Deyi; Chen, Guisheng; Lan, Wenfei

    2011-04-01

    It is a mainstream idea that scale-free networks are fragile under selective attacks. The Internet is a typical real-world scale-free network, yet it never collapses under the selective attacks of computer viruses and hackers. This phenomenon contradicts the deduction above, because that deduction assumes the same cost to delete an arbitrary node. Hence this paper discusses the behavior of scale-free networks under selective node attacks with differing costs. Through experiments on five complex networks, we show that a scale-free network can be robust under selective node attacks; furthermore, the more compact the network and the larger its average degree, the more robust the network; at the same average degree, the more compact network is the more robust. This result enriches the theory of network invulnerability, can be used to build robust social, technological and biological networks, and also has the potential to help identify drug targets.
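
    A minimal version of such an experiment is easy to set up with networkx. The sketch below, with assumed sizes and attack budget (attack costs are not modeled), removes the 100 highest-degree nodes from two Barabási-Albert graphs that differ only in average degree and compares the surviving giant component, in line with the claim that denser networks fare better:

    ```python
    # Degree-targeted attack on two scale-free graphs of different density;
    # robustness is read off the fraction of nodes left in the giant component.
    import networkx as nx

    for m in (2, 6):   # larger m -> larger average degree (denser network)
        G = nx.barabasi_albert_graph(2000, m, seed=0)
        hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:100]
        G.remove_nodes_from(n for n, _ in hubs)
        giant = max(nx.connected_components(G), key=len)
        print("m=%d  giant component after attack: %.2f" % (m, len(giant) / 2000))
    ```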

  9. Effects of implant angulation, material selection, and impression technique on impression accuracy: a preliminary laboratory study.

    PubMed

    Rutkunas, Vygandas; Sveikata, Kestutis; Savickas, Raimondas

    2012-01-01

    The aim of this preliminary laboratory study was to evaluate the effects of 5- and 25-degree implant angulations in simulated clinical casts on an impression's accuracy when using different impression materials and tray selections. A convenience sample of each implant angulation group was selected for both open and closed trays in combination with one polyether and two polyvinyl siloxane impression materials. The influence of material and technique appeared to be significant for both 5- and 25-degree angulations (P < .05), and increased angulation tended to decrease impression accuracy. The open-tray technique was more accurate with highly nonaxially oriented implants for the small sample size investigated.

  10. Robust cyclohexanone selective chemiresistors based on single-walled carbon nanotubes.

    PubMed

    Frazier, Kelvin M; Swager, Timothy M

    2013-08-06

    Functionalized single-walled carbon nanotube (SWCNT)-based chemiresistors are reported as highly robust and sensitive gas sensors for the selective detection of cyclohexanone, a target analyte for explosive detection. The trifunctional selector has three important properties: it noncovalently functionalizes SWCNTs with cofacial π-π interactions, it binds to cyclohexanone via hydrogen bonding (the mechanism was investigated), and it improves the overall robustness of SWCNT-based chemiresistors (e.g., against humidity and heat). Our sensors produced reversible and reproducible responses in less than 30 s to 10 ppm of cyclohexanone and displayed an average theoretical limit of detection (LOD) of 5 ppm.

  11. Simulation-based planning for peacekeeping operations: selection of robust plans

    NASA Astrophysics Data System (ADS)

    Cekova, Cvetelina; Chandrasekaran, B.; Josephson, John; Pantaleev, Aleksandar

    2006-05-01

    This research is part of a proposed shift in emphasis in decision support from optimality to robustness. Computer simulation is emerging as a useful tool in planning courses of action (COAs). Simulations require domain models, but there is an inevitable gap between models and reality - some aspects of reality are not represented at all, and what is represented may contain errors. As models are aggregated from multiple sources, the decision maker is further insulated from even an awareness of model weaknesses. To realize the full power of computer simulations to support decision making, decision support systems should support the planner in exploring the robustness of COAs in the face of potential weaknesses in simulation models. This paper demonstrates a method of exploring the robustness of a COA with respect to specific model assumptions whose accuracy the decision maker might have concerns about. The domain is that of peacekeeping in a country where three different demographic groups co-exist in tension. An external peacekeeping force strives to achieve stability, an improved economy, and a higher degree of democracy in the country. A proposed COA for such a force is simulated multiple times while varying the assumptions. A visual data analysis tool is used to explore COA robustness. The aim is to help the decision maker choose a COA that is likely to be successful even in the face of potential errors in the assumptions in the models.

  12. Science of Test Measurement Accuracy - Data Sampling and Filter Selection during Data Acquisition

    DTIC Science & Technology

    2015-06-01

    Keywords: sampling rate, aliasing, filtering, Butterworth, Chebyshev, Bessel, PSD and Bode plots. Author: David Kidman. Summary guidance: use both Bode and PSD plots to evaluate filter and sample rate effects prior to implementation.

  13. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.

  14. Sparsity Is Better with Stability: Combining Accuracy and Stability for Model Selection in Brain Decoding

    PubMed Central

    Baldassarre, Luca; Pontil, Massimiliano; Mourão-Miranda, Janaina

    2017-01-01

    Structured sparse methods have received significant attention in neuroimaging. These methods allow the incorporation of domain knowledge through additional spatial and temporal constraints in the predictive model and carry the promise of being more interpretable than non-structured sparse methods, such as LASSO or Elastic Net methods. However, although sparsity has often been advocated as leading to more interpretable models, it can also lead to unstable models under subsampling or slight changes of the experimental conditions. In the present work we investigate the impact of using stability/reproducibility as an additional model selection criterion on several different sparse (and structured sparse) methods that have been recently applied for fMRI brain decoding. We compare three different model selection criteria: (i) classification accuracy alone; (ii) classification accuracy and overlap between the solutions; (iii) classification accuracy and correlation between the solutions. The methods we consider include LASSO, Elastic Net, Total Variation, sparse Total Variation, Laplacian and Graph Laplacian Elastic Net (GraphNET). Our results show that explicitly accounting for stability/reproducibility during the model optimization can mitigate some of the instability inherent in sparse methods. In particular, using accuracy and overlap between the solutions as a joint optimization criterion can lead to solutions that are more similar in terms of accuracy, sparsity levels and coefficient maps even when different sparsity methods are considered. PMID:28261042
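
    Criterion (ii) - accuracy plus overlap between solutions - can be sketched compactly. The toy example below uses plain L1-regularized logistic regression on synthetic data (the structured penalties such as Total Variation and GraphNET are not implemented) and scores each regularization level by held-out accuracy plus the overlap of selected features across subsamples:

    ```python
    # Joint accuracy + stability model selection: for each sparsity level C,
    # average held-out accuracy over subsampling runs and measure how much the
    # selected feature sets agree (intersection over union of supports).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))
    w = np.zeros(500)
    w[:10] = 2.0                                # 10 informative "voxels"
    y = (X @ w + rng.normal(size=200) > 0).astype(int)

    for C in (0.01, 0.1, 1.0):
        accs, sups = [], []
        for seed in range(5):                   # subsampling runs
            Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                                  random_state=seed)
            m = LogisticRegression(penalty="l1", solver="liblinear",
                                   C=C).fit(Xtr, ytr)
            accs.append(m.score(Xte, yte))
            sups.append(set(np.flatnonzero(m.coef_[0])))
        inter, union = set.intersection(*sups), set.union(*sups)
        overlap = len(inter) / max(len(union), 1)
        print("C=%g  acc=%.2f  overlap=%.2f  joint=%.2f"
              % (C, np.mean(accs), overlap, np.mean(accs) + overlap))
    ```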

  15. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
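
    The brute-force Monte Carlo idea described above reduces to a short loop. The toy version below assumes two polynomial models, Gaussian noise, and BIC-based weights as a stand-in for the full Bayesian marginal likelihood, and reports the mean and spread of the posterior weight of the true model across noise realizations:

    ```python
    # Perturb the data with fresh noise realizations, recompute posterior model
    # weights each time, and inspect the induced "weighting variance".
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 30)
    y_true = 1.0 + 2.0 * x            # reality is linear
    sigma = 0.3

    def bic_weights(y):
        w = []
        for deg in (1, 2):            # model 1: linear, model 2: quadratic
            coef = np.polyfit(x, y, deg)
            rss = np.sum((y - np.polyval(coef, x)) ** 2)
            bic = x.size * np.log(rss / x.size) + (deg + 1) * np.log(x.size)
            w.append(np.exp(-0.5 * bic))
        w = np.array(w)
        return w / w.sum()

    w_linear = [bic_weights(y_true + rng.normal(0, sigma, x.size))[0]
                for _ in range(500)]
    print("posterior weight of the true model: mean %.2f, sd %.2f"
          % (np.mean(w_linear), np.std(w_linear)))
    ```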

  16. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    PubMed Central

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    The Minimum Redundancy Maximum Relevance (MRMR) approach to supervised variable selection represents a successful methodology for dimensionality reduction, which is suitable for high-dimensional data observed in two or more different groups. Various available versions of the MRMR approach have been designed to search for variables with the largest relevance for a classification task while controlling for redundancy of the selected set of variables. However, usual relevance and redundancy criteria have the disadvantages of being too sensitive to the presence of outlying measurements and/or being inefficient. We propose a novel approach called Minimum Regularized Redundancy Maximum Robust Relevance (MRRMRR), suitable for noisy high-dimensional data observed in two groups. It combines principles of regularization and robust statistics. Particularly, redundancy is measured by a new regularized version of the coefficient of multiple correlation and relevance is measured by a highly robust correlation coefficient based on the least weighted squares regression with data-adaptive weights. We compare various dimensionality reduction methods on three real data sets. To investigate the influence of noise or outliers on the data, we perform the computations also for data artificially contaminated by severe noise of various forms. The experimental results confirm the robustness of the method with respect to outliers. PMID:26137474

  17. Considerations on selected reaction monitoring experiments: implications for the selectivity and accuracy of measurements.

    PubMed

    Domon, Bruno

    2012-12-01

    Targeted MS analysis based on selected reaction monitoring (SRM) has enabled significant achievements in proteomic quantification, such that its application to clinical studies has augured great advances for the life sciences. The approach has been challenged by the complexity of clinical samples, which affects the selectivity of measurements, in many cases limiting analytical performance to a larger extent than expected. This Personal Perspective offers some insight for better comprehending the mismatch between the often underestimated sample complexity and the selectivity of SRM measurements performed on a triple quadrupole instrument. The implications for the design and evaluation of SRM assays are discussed and illustrated with selected examples, providing a baseline for a more critical use of the technique in the context of clinical samples and for the evaluation of alternative methods.

  18. Accuracy and responses of genomic selection on key traits in apple breeding

    PubMed Central

    Muranty, Hélène; Troggio, Michela; Sadok, Inès Ben; Rifaï, Mehdi Al; Auwerkerken, Annemarie; Banchi, Elisa; Velasco, Riccardo; Stevanato, Piergiorgio; van de Weg, W Eric; Di Guardo, Mario; Kumar, Satish; Laurens, François; Bink, Marco C A M

    2015-01-01

    The application of genomic selection in fruit tree crops is expected to enhance breeding efficiency by increasing prediction accuracy, increasing selection intensity and decreasing generation interval. The objectives of this study were to assess the accuracy of prediction and selection response in commercial apple breeding programmes for key traits. The training population comprised 977 individuals derived from 20 pedigreed full-sib families. Historic phenotypic data were available on 10 traits related to productivity and fruit external appearance and genotypic data for 7829 SNPs obtained with an Illumina 20K SNP array. From these data, a genome-wide prediction model was built and subsequently used to calculate genomic breeding values of five application full-sib families. The application families had genotypes at 364 SNPs from a dedicated 512 SNP array, and these genotypic data were extended to the high-density level by imputation. These five families were phenotyped for 1 year and their phenotypes were compared to the predicted breeding values. Accuracy of genomic prediction across the 10 traits reached a maximum value of 0.5 and had a median value of 0.19. The accuracies were strongly affected by the phenotypic distribution and heritability of traits. In the largest family, significant selection response was observed for traits with high heritability and symmetric phenotypic distribution. Traits that showed non-significant response often had reduced and skewed phenotypic variation or low heritability. Among the five application families the accuracies were uncorrelated to the degree of relatedness to the training population. The results underline the potential of genomic prediction to accelerate breeding progress in outbred fruit tree crops that still need to overcome long generation intervals and extensive phenotyping costs. PMID:26744627

  19. Estimation of accuracies and expected genetic change from selection for selection indexes that use multiple-trait predictions of breeding values.

    PubMed

    Barwick, S A; Tier, B; Swan, A A; Henzell, A L

    2013-10-01

    Procedures are described for estimating selection index accuracies for individual animals and expected genetic change from selection for the general case where indexes of EBVs predict an aggregate breeding objective of traits that may or may not have been measured. Index accuracies for the breeding objective are shown to take an important general form, being able to be expressed as the product of the accuracy of the index function of true breeding values and the accuracy with which that function predicts the breeding objective. When the accuracies of the individual EBVs of the index are known, prediction error variances (PEVs) and covariances (PECs) for the EBVs within animal are able to be well approximated, and index accuracies and expected genetic change from selection estimated with high accuracy. The procedures are suited to routine use in estimating index accuracies in genetic evaluation, and for providing important information, without additional modelling, on the directions in which a population will move under selection.
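
    In symbols, the decomposition described above can be written as follows, with notation assumed here rather than taken from the paper (I is the index of EBVs, I* the same index function evaluated on true breeding values, and H the aggregate breeding objective):

    ```latex
    \[
      r_{IH} \;=\; r_{II^{*}} \times r_{I^{*}H}
    \]
    % accuracy of the index for the objective
    %   = (accuracy of the index as an estimate of its true-breeding-value
    %      counterpart) x (accuracy with which that counterpart predicts H)
    ```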

  20. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    PubMed Central

    Mortazavi, Atiyeh; Moattar, Mohammad Hossein

    2016-01-01

    High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches. PMID:27127506
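
    The Shapley-index phase can be approximated by permutation sampling. The sketch below uses a nearest-centroid classifier on synthetic data as the value function - not the paper's QMI-based variant - and estimates each feature's power as its average marginal contribution to classification accuracy:

    ```python
    # Monte Carlo Shapley values for feature "power": sample feature
    # permutations and accumulate each feature's marginal accuracy gain.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 120, 6
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # features 0, 1 informative

    def value(subset):
        # Resubstitution accuracy of a nearest-centroid rule on the subset.
        if not subset:
            return 0.5                               # chance level
        Xs = X[:, sorted(subset)]
        mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        pred = (np.linalg.norm(Xs - mu1, axis=1)
                < np.linalg.norm(Xs - mu0, axis=1)).astype(int)
        return (pred == y).mean()

    shapley = np.zeros(d)
    n_perm = 200
    for _ in range(n_perm):
        seen = set()
        for f in rng.permutation(d):
            before = value(seen)
            seen.add(f)
            shapley[f] += value(seen) - before
    shapley /= n_perm
    print(np.round(shapley, 3))   # features 0 and 1 should dominate
    ```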

  1. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information.

    PubMed

    Mortazavi, Atiyeh; Moattar, Mohammad Hossein

    2016-01-01

    High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.

  2. Validation of selection accuracy for the total number of piglets born in Landrace pigs using genomic selection

    PubMed Central

    Oh, Jae-Don; Na, Chong-Sam; Park, Kyung-Do

    2017-01-01

    Objective: This study aimed to determine the relationship between estimated breeding values and phenotype information collected after farrowing when juvenile selection was performed on candidate pigs without phenotype information. Methods: After collecting phenotypic and genomic information for the total number of piglets born by Landrace pigs, the selection accuracy of genomic breeding value estimates based on genomic information was compared with that of best linear unbiased prediction (BLUP) breeding value estimates based on conventional pedigree information. Results: The genetic standard deviation (σa) for the total number of piglets born was 0.91. Since the total number of piglets born was unknown for the candidate pigs, the accuracy of the breeding value estimated from pedigree information was 0.080, whereas the accuracy of the breeding value using genomic information was 0.216. Assuming that the replacement rate of sows per year is 100% and the generation interval is 1 year, the genetic gain per year is 0.346 head when genomic information is used and 0.128 head when BLUP is used. Conclusion: The genetic gain estimated from the single-step best linear unbiased prediction (ssBLUP) method is 2.7 times higher than that estimated from the BLUP method, i.e., a 2.7-fold improvement in efficiency. PMID:27507181
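
    The reported gains are consistent with the standard breeder's equation; plugging in the abstract's own numbers (σa = 0.91, generation interval L = 1 year), both gains imply a selection intensity of roughly i ≈ 1.76. The notation below is assumed, not quoted from the paper:

    ```latex
    \[
      \Delta G \;=\; \frac{i \, r \, \sigma_a}{L},
      \qquad
      \frac{\Delta G_{\mathrm{ssBLUP}}}{\Delta G_{\mathrm{BLUP}}}
        \;=\; \frac{0.216}{0.080} \;\approx\; 2.7
    \]
    % check: 1.76 x 0.216 x 0.91 = 0.346 head/yr; 1.76 x 0.080 x 0.91 = 0.128 head/yr
    ```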

  3. Robust Inference from Conditional Logistic Regression Applied to Movement and Habitat Selection Analysis

    PubMed Central

    Duchesne, Thierry; Fortin, Daniel

    2017-01-01

    Conditional logistic regression (CLR) is widely used to analyze habitat selection and movement of animals when resource availability changes over space and time. Observations used for these analyses are typically autocorrelated, which biases model-based variance estimation of CLR parameters. This bias can be corrected using generalized estimating equations (GEE), an approach that requires partitioning the data into independent clusters. Here we establish the link between clustering rules in GEE and their effectiveness in removing statistical biases in variance estimation of CLR parameters. The current lack of guidelines is such that broad variation in clustering rules can be found among studies (e.g., 14–450 clusters), with unknown consequences for the robustness of statistical inference. We simulated datasets reflecting conditions typical of field studies. Longitudinal data were generated based on several parameters of habitat selection, with varying strength of autocorrelation and some individuals having more observations than others. We then evaluated how changing the number of clusters impacted the effectiveness of variance estimators. Simulations revealed that 30 clusters were sufficient to obtain unbiased and relatively precise estimates of the variance of parameter estimates. The use of destructive sampling to increase the number of independent clusters was successful at removing statistical bias, but only when observations were temporally autocorrelated and the strength of inter-individual heterogeneity was weak. GEE also provided robust estimates of variance for different magnitudes of unbalanced datasets. Our simulations demonstrate that GEE should be estimated by assigning each individual to a cluster when at least 30 animals are followed, or by using destructive sampling for studies with fewer individuals having an intermediate level of behavioural plasticity in selection and temporally autocorrelated observations. The simulations provide valuable information to

  4. Robust hyperpolarized (13)C metabolic imaging with selective non-excitation of pyruvate (SNEP).

    PubMed

    Chen, Way Cherng; Teo, Xing Qi; Lee, Man Ying; Radda, George K; Lee, Philip

    2015-08-01

    In vivo metabolic imaging using hyperpolarized [1-(13)C]pyruvate provides localized biochemical information and is particularly useful in detecting early disease changes, as well as monitoring disease progression and treatment response. However, a major limitation of hyperpolarized magnetization is its unrecoverable decay, due not only to T1 relaxation but also to radio-frequency (RF) excitation. RF excitation schemes used in metabolic imaging must therefore be able to utilize available hyperpolarized magnetization efficiently and robustly for the optimal detection of substrate and metabolite activities. In this work, a novel RF excitation scheme called selective non-excitation of pyruvate (SNEP) is presented. This excitation scheme involves the use of a spectral selective RF pulse to specifically exclude the excitation of [1-(13)C]pyruvate, while uniformly exciting the key metabolites of interest (namely [1-(13)C]lactate and [1-(13)C]alanine) and [1-(13)C]pyruvate-hydrate. By eliminating the loss of hyperpolarized [1-(13)C]pyruvate magnetization due to RF excitation, the signal from downstream metabolite pools is increased together with enhanced dynamic range. Simulation results, together with phantom measurements and in vivo experiments, demonstrated the improvement in signal-to-noise ratio (SNR) and the extension of the lifetime of the [1-(13)C]lactate and [1-(13)C]alanine pools when compared with conventional non-spectral selective (NS) excitation. SNEP has also been shown to perform comparably well with multi-band (MB) excitation, yet SNEP possesses distinct advantages, including ease of implementation, less stringent demands on gradient performance, increased robustness to frequency drifts and B0 inhomogeneity as well as easier quantification involving the use of [1-(13)C]pyruvate-hydrate as a proxy for the actual [1-(13)C] pyruvate signal. SNEP is therefore a promising alternative for robust hyperpolarized [1-(13)C]pyruvate metabolic imaging with high

  5. Sensitive capillary GC-MS-SIM determination of selective serotonin reuptake inhibitors: reliability evaluation by validation and robustness study.

    PubMed

    Berzas Nevado, Juan José; Villaseñor Llerena, María Jesús; Guiberteau Cabanillas, Carmen; Rodríguez Robledo, Virginia; Buitrago, Sierra

    2006-01-01

    A simple, fast, selective and very sensitive capillary GC-MS method for the simultaneous determination of five antidepressant drugs is described. Fluoxetine, fluvoxamine, citalopram, sertraline and paroxetine belong to the newest and most important drug group termed selective serotonin reuptake inhibitors. Imipramine was used in this method as an internal standard for quantification. Optimum parameters for GC separation were investigated, i.e., flow rate, column head pressure, injector temperature, injection splitless conditions and oven temperature program. MS detection was performed in SIM mode to increase the sensitivity. Stability of the solutions, linear concentration range, accuracy, precision, LOD, LOQ (3.6-41.5 mg/L) and specificity were examined in the presence of excipients for checking the reliability of this method. The robustness was evaluated with a matrix of 15 experiments (seven factors and three levels) using Plackett-Burman fractional factorial experimental design, and Youden and Steiner statistical treatment. The method was applied to the analysis of these antidepressants in nearly all their pharmaceutical formulations, obtaining recoveries between 98.1% and 102.7% with regard to the claimed values.

  6. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models

    PubMed Central

    de Azevedo Peixoto, Leonardo; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits. PMID:28296913

  7. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    PubMed

    Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  8. Comparative accuracy of the Albedo, transmission and absorption for selected radiative transfer approximations

    NASA Technical Reports Server (NTRS)

    King, M. D.; HARSHVARDHAN

    1986-01-01

    Illustrations of both the relative and absolute accuracy of eight different radiative transfer approximations as a function of optical thickness, solar zenith angle and single scattering albedo are given. Computational results for the plane albedo, total transmission and fractional absorption were obtained for plane-parallel atmospheres composed of cloud particles. These computations, which were obtained using the doubling method, are compared with corresponding results obtained using selected radiative transfer approximations. Comparisons were made between asymptotic theory for thick layers and the following widely used two-stream approximations: Coakley-Chylek's models 1 and 2, Meador-Weaver, Eddington, delta-Eddington, PIFM and delta-discrete ordinates.

  9. Robust selectivity for faces in the human amygdala in the absence of expressions.

    PubMed

    Mende-Siedlecki, Peter; Verosky, Sara C; Turk-Browne, Nicholas B; Todorov, Alexander

    2013-12-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region.

  10. Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy.

    PubMed

    Aljabar, P; Heckemann, R A; Hammers, A; Hajnal, J V; Rueckert, D

    2009-07-01

    Quantitative research in neuroimaging often relies on anatomical segmentation of human brain MR images. Recent multi-atlas based approaches provide highly accurate structural segmentations of the brain by propagating manual delineations from multiple atlases in a database to a query subject and combining them. The atlas databases which can be used for these purposes are growing steadily. We present a framework to address the consequent problems of scale in multi-atlas segmentation. We show that selecting a custom subset of atlases for each query subject provides more accurate subcortical segmentations than those given by non-selective combination of random atlas subsets. Using a database of 275 atlases, we tested an image-based similarity criterion as well as a demographic criterion (age) in a leave-one-out cross-validation study. Using a custom ranking of the database for each subject, we combined a varying number n of atlases from the top of the ranked list. The resulting segmentations were compared with manual reference segmentations using Dice overlap. Image-based selection provided better segmentations than random subsets (mean Dice overlap 0.854 vs. 0.811 for the estimated optimal subset size, n=20). Age-based selection resulted in a similar marked improvement. We conclude that selecting atlases from large databases for atlas-based brain image segmentation improves the accuracy of the segmentations achieved. We show that image similarity is a suitable selection criterion and give results based on selecting atlases by age that demonstrate the value of meta-information for selection.
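
    The selection-then-fusion pipeline is simple to express. The schematic below assumes atlases already registered to the query, uses plain correlation in place of a full similarity metric such as normalized mutual information, ranks the database, and majority-votes the top n = 20 labels as in the study; all data are synthetic:

    ```python
    # Image-similarity-based atlas selection followed by majority-vote fusion,
    # evaluated with the Dice overlap used in the study.
    import numpy as np

    rng = np.random.default_rng(4)
    n_atlas, n_vox = 275, 1000
    query = rng.normal(size=n_vox)
    truth = (query > 0).astype(int)                 # toy "manual" segmentation

    # Atlas quality degrades with index: noisier images carry noisier labels.
    img_noise = np.linspace(0.2, 2.0, n_atlas)[:, None]
    atlas_img = query + rng.normal(0, img_noise, size=(n_atlas, n_vox))
    flip = rng.random((n_atlas, n_vox)) < np.linspace(0.05, 0.45, n_atlas)[:, None]
    atlas_lab = np.where(flip, 1 - truth, truth)

    # Rank atlases by similarity to the query and fuse the top n = 20 labels.
    sim = np.array([np.corrcoef(query, a)[0, 1] for a in atlas_img])
    top = np.argsort(sim)[::-1][:20]
    fused = (atlas_lab[top].mean(axis=0) > 0.5).astype(int)

    dice = 2 * np.logical_and(fused, truth).sum() / (fused.sum() + truth.sum())
    print("Dice overlap of fused segmentation vs. truth: %.3f" % dice)
    ```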

  11. Individual variation in exploratory behaviour improves speed and accuracy of collective nest selection by Argentine ants

    PubMed Central

    Hui, Ashley; Pinter-Wollman, Noa

    2014-01-01

    Collective behaviours are influenced by the behavioural composition of the group. For example, a collective behaviour may emerge from the average behaviour of the group's constituents, or be driven by a few key individuals that catalyse the behaviour of others in the group. When ant colonies collectively relocate to a new nest site, there is an inherent trade-off between the speed and accuracy of their decision of where to move due to the time it takes to gather information. Thus, variation among workers in exploratory behaviour, which allows gathering information about potential new nest sites, may impact the ability of a colony to move quickly into a suitable new nest. The invasive Argentine ant, Linepithema humile, expands its range locally through the dispersal and establishment of propagules: groups of ants and queens. We examine whether the success of these groups in rapidly finding a suitable nest site is affected by their behavioural composition. We compared nest choice speed and accuracy among groups of all-exploratory, all-nonexploratory and half-exploratory–half-nonexploratory individuals. We show that exploratory individuals improve both the speed and accuracy of collective nest choice, and that exploratory individuals have additive, not synergistic, effects on nest site selection. By integrating an examination of behaviour into the study of invasive species we shed light on the mechanisms that impact the progression of invasion. PMID:25018558

  12. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method for Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties and the integration of modern anatomy, the modern simple selection method for Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  13. Detecting recent positive selection with high accuracy and reliability by conditional coalescent tree.

    PubMed

    Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Jin, Li; He, Yungang

    2014-11-01

    Studies of natural selection, followed by functional validation, are shedding light on the genetic mechanisms underlying human evolution and adaptation. Classic methods for detecting selection, such as the integrated haplotype score (iHS) and Fay and Wu's H statistic, are useful for identifying candidate genes under positive selection. These methods, however, have limited capability to localize causal variants in selection target regions. In this study, we developed a novel method based on the conditional coalescent tree to detect recent positive selection by counting unbalanced mutations on coalescent gene genealogies. Extensive simulation studies revealed that our method is more robust than many other approaches against biases due to various demographic effects, including population bottleneck, expansion, or stratification, while not sacrificing power. Furthermore, our method demonstrated its superiority in localizing causal variants among massive numbers of linked genetic variants. The rate of successful localization was about 20-40% higher than that of other state-of-the-art methods on simulated data sets. On empirical data, validated functional causal variants of four well-known positively selected genes - ADH1B, MCM6, APOL1, and HBB - were all successfully localized by our method. Finally, the computational efficiency of this new method was much higher than that of iHS implementations: 24-66 times faster than the REHH package, and more than 10,000 times faster than the original iHS implementation. These magnitudes make our method suitable for application to large sequencing data sets. Software can be downloaded from https://github.com/wavefancy/scct.

  14. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a condition inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data with an ellipsoidal model and a box model of uncertainty. The difficulty of treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbation of the data.

  15. A Balanced Accuracy Fitness Function Leads to Robust Analysis using Grammatical Evolution Neural Networks in the Case of Class Imbalance.

    PubMed

    Hardison, Nicholas E; Fanelli, Theresa J; Dudek, Scott M; Reif, David M; Ritchie, Marylyn D; Motsinger-Reif, Alison A

    2008-01-01

    Grammatical Evolution Neural Networks (GENN) is a computational method designed to detect gene-gene interactions in genetic epidemiology, but has so far only been evaluated in situations with balanced numbers of cases and controls. Real data, however, rarely has such perfectly balanced classes. In the current study, we test the power of GENN to detect interactions in data with a range of class imbalance using two fitness functions (classification error and balanced error), as well as data re-sampling. We show that when using classification error, class imbalance greatly decreases the power of GENN. Re-sampling methods demonstrated improved power, but using balanced accuracy resulted in the highest power. Based on the results of this study, balanced error has replaced classification error in the GENN algorithm.
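
    The difference between the two fitness functions is worth seeing in numbers. The illustrative snippet below shows how, at a 90/10 class split, a degenerate majority-class predictor scores 0.90 on classification accuracy but only 0.50 on balanced accuracy - the mean of sensitivity and specificity:

    ```python
    # Classification accuracy vs. balanced accuracy under class imbalance.
    import numpy as np

    y_true = np.array([0] * 90 + [1] * 10)    # imbalanced: 90 controls, 10 cases
    y_pred = np.zeros(100, dtype=int)         # degenerate "always control" model

    acc = (y_pred == y_true).mean()
    sens = (y_pred[y_true == 1] == 1).mean()  # true-positive rate
    spec = (y_pred[y_true == 0] == 0).mean()  # true-negative rate
    balanced_acc = (sens + spec) / 2

    print("accuracy: %.2f  balanced accuracy: %.2f" % (acc, balanced_acc))
    # -> 0.90 vs. 0.50: balanced accuracy exposes the degenerate fit
    ```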

  16. Theory-assisted development of a robust and Z-selective olefin metathesis catalyst.

    PubMed

    Occhipinti, Giovanni; Koudriavtsev, Vitali; Törnroos, Karl W; Jensen, Vidar R

    2014-08-07

    DFT calculations have predicted a new, highly Z-selective ruthenium-based olefin metathesis catalyst that is considerably more robust than the recently reported (SIMes)(Cl)(RS)RuCH(o-OiPrC6H4) (3a, SIMes = 1,3-dimesityl-4,5-dihydroimidazol-2-ylidene, R = 2,4,6-triphenylbenzene) [J. Am. Chem. Soc., 2013, 135, 3331]. Replacing the chloride of 3a by an isocyanate ligand to give 5a was predicted to increase the stability of the complex considerably, at the same time moderately improving the Z-selectivity. Compound 5a is easily prepared in a two-step synthesis starting from the Hoveyda-Grubbs second-generation catalyst 3. In agreement with the calculations, the isocyanate-substituted 5a appears to be somewhat more Z-selective than the chloride analogue 3a. More importantly, 5a can be used in air, with unpurified and non-degassed substrates and solvents, and in the presence of acids. These are traits that are unprecedented among highly Z-selective olefin metathesis catalysts and also very promising with respect to applications of the new catalyst.

  17. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, the recognition rate was two times higher than that of our conventional system. The system has high potential for future use in a variety of purposes such as searching for criminal suspects by use of street and airport video cameras, registration of babies at hospitals or handling of an immeasurable number of images in a database.

  18. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  19. How can selection of biologically inspired features improve the performance of a robust object recognition model?

    PubMed

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages. Along these stages, features of increasing complexity are extracted by different parts of the visual system: elementary features like bars and edges are processed in the early levels of the visual pathway, and progressively more complex features appear further up the pathway. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model to different object recognition tasks. In this model, a set of object parts, called patches, is extracted in the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. The patches are selected indiscriminately from different positions in an image, which can lead to the extraction of non-discriminating patches and eventually reduce performance. In the proposed model, we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features that are parts of target objects provide an efficient set for robust object recognition.

  20. How Can Selection of Biologically Inspired Features Improve the Performance of a Robust Object Recognition Model?

    PubMed Central

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages. Along these stages, features of increasing complexity are extracted by different parts of the visual system: elementary features like bars and edges are processed in the early levels of the visual pathway, and progressively more complex features appear further up the pathway. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model to different object recognition tasks. In this model, a set of object parts, called patches, is extracted in the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. The patches are selected indiscriminately from different positions in an image, which can lead to the extraction of non-discriminating patches and eventually reduce performance. In the proposed model, we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features that are parts of target objects provide an efficient set for robust object recognition. PMID:22384229

  1. Persistency of Prediction Accuracy and Genetic Gain in Synthetic Populations Under Recurrent Genomic Selection.

    PubMed

    Müller, Dominik; Schopp, Pascal; Melchinger, Albrecht E

    2017-03-10

    Recurrent selection (RS) has been used in plant breeding to successively improve synthetic and other multiparental populations. Synthetics are generated from a limited number of parents (Np), but little is known about how Np affects genomic selection (GS) in RS, especially the persistency of prediction accuracy (rg,g^) and genetic gain. Synthetics were simulated by intermating Np = 2-32 parent lines from an ancestral population with short- or long-range linkage disequilibrium (LDA) and subjected to multiple cycles of GS. We determined rg,g^ and genetic gain across 30 cycles for different training set (TS) sizes, marker densities, and generations of recombination before model training. Contributions to rg,g^ and genetic gain from pedigree relationships, as well as from cosegregation and LDA between QTL and markers, were analyzed via four scenarios differing in (i) the relatedness between TS and selection candidates and (ii) whether selection was based on markers or pedigree records. Persistency of rg,g^ was high for small Np, where predominantly cosegregation contributed to rg,g^, but also for large Np, where LDA replaced cosegregation as the dominant information source. Together with increasing genetic variance, this compensation resulted in relatively constant long- and short-term genetic gain for increasing Np > 4, given long-range LDA in the ancestral population. Although our scenarios suggest that information from pedigree relationships contributed to rg,g^ for only very few generations in GS, we expect a longer contribution than in pedigree BLUP, because capturing Mendelian sampling by markers reduces selective pressure on pedigree relationships. Larger TS size (NTS) and higher marker density improved persistency of rg,g^ and hence genetic gain, but additional recombinations could not increase genetic gain.

  2. Persistency of Prediction Accuracy and Genetic Gain in Synthetic Populations Under Recurrent Genomic Selection

    PubMed Central

    Müller, Dominik; Schopp, Pascal; Melchinger, Albrecht E.

    2017-01-01

    Recurrent selection (RS) has been used in plant breeding to successively improve synthetic and other multiparental populations. Synthetics are generated from a limited number of parents (Np), but little is known about how Np affects genomic selection (GS) in RS, especially the persistency of prediction accuracy (rg,g^) and genetic gain. Synthetics were simulated by intermating Np= 2–32 parent lines from an ancestral population with short- or long-range linkage disequilibrium (LDA) and subjected to multiple cycles of GS. We determined rg,g^ and genetic gain across 30 cycles for different training set (TS) sizes, marker densities, and generations of recombination before model training. Contributions to rg,g^ and genetic gain from pedigree relationships, as well as from cosegregation and LDA between QTL and markers, were analyzed via four scenarios differing in (i) the relatedness between TS and selection candidates and (ii) whether selection was based on markers or pedigree records. Persistency of rg,g^ was high for small Np, where predominantly cosegregation contributed to rg,g^, but also for large Np, where LDA replaced cosegregation as the dominant information source. Together with increasing genetic variance, this compensation resulted in relatively constant long- and short-term genetic gain for increasing Np > 4, given long-range LDA in the ancestral population. Although our scenarios suggest that information from pedigree relationships contributed to rg,g^ for only very few generations in GS, we expect a longer contribution than in pedigree BLUP, because capturing Mendelian sampling by markers reduces selective pressure on pedigree relationships. Larger TS size (NTS) and higher marker density improved persistency of rg,g^ and hence genetic gain, but additional recombinations could not increase genetic gain. PMID:28064189
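
    The prediction accuracy rg,g^ above is the correlation between true and predicted genetic values in the selection candidates. A minimal sketch of estimating such an accuracy by cross-validation, with ridge regression on simulated marker genotypes as a generic stand-in for the genomic prediction model (population size, marker count, and effect distribution are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Simulated biallelic marker matrix (0/1/2) and additive QTL effects.
n_ind, n_mark = 500, 2000
X = rng.integers(0, 3, size=(n_ind, n_mark)).astype(float)
beta = np.zeros(n_mark)
qtl = rng.choice(n_mark, 50, replace=False)
beta[qtl] = rng.normal(size=50)
g = X @ beta                                   # true genetic values
y = g + rng.normal(scale=g.std(), size=n_ind)  # phenotypes, h2 ~ 0.5

# Ridge regression as a simple whole-genome prediction model.
g_hat = cross_val_predict(Ridge(alpha=100.0), X, y, cv=5)
print("prediction accuracy r(g, g_hat) =", np.corrcoef(g, g_hat)[0, 1])
```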

  3. Estimation of genetic parameters and breeding values across challenged environments to select for robust pigs.

    PubMed

    Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A

    2015-04-01

    Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments
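
    In a random regression model of the kind described, breeding values vary with the challenge load through Legendre polynomial covariates evaluated on CL rescaled to [-1, 1]. A minimal sketch of constructing that basis (the polynomial degree and CL range are illustrative assumptions):

```python
import numpy as np

def legendre_basis(cl, cl_min, cl_max, degree=2):
    """Legendre polynomial covariates for a random regression model.

    cl is rescaled to [-1, 1], the natural domain of Legendre polynomials;
    columns are P0..P_degree evaluated at the rescaled challenge loads.
    """
    x = 2.0 * (np.asarray(cl, float) - cl_min) / (cl_max - cl_min) - 1.0
    return np.polynomial.legendre.legvander(x, degree)

# Example: covariates for herd-weeks with low, medium and high challenge.
print(legendre_basis([0.0, 5.0, 10.0], cl_min=0.0, cl_max=10.0))
```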

  4. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasingly important. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides considerable data reduction for the modeled area. Its geoscientific application allows different landforms to be modeled with the fitted planes as planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; and (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the case of the test on the real data the residual value distribution is
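
    The residual tests described can be sketched concretely: fit a plane to a synthetic point cloud by least squares and apply a Kolmogorov-Smirnov test to the residuals. The ordinary least-squares fit below is a non-robust stand-in for the Vienna segmentation method, and all point-cloud parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic planar patch z = 0.3x - 0.2y + 1 with Gaussian elevation error.
n = 5000
x, y = rng.uniform(0, 100, (2, n))
z = 0.3 * x - 0.2 * y + 1.0 + rng.normal(scale=0.1, size=n)

# Ordinary least-squares plane fit: z ~ a*x + b*y + c.
A = np.column_stack([x, y, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - (a * x + b * y + c)

# KS test of the residuals against a normal law with the fitted spread.
stat, pvalue = stats.kstest(residuals, "norm",
                            args=(residuals.mean(), residuals.std(ddof=3)))
print(f"plane: {a:.3f}x + {b:.3f}y + {c:.3f}, KS p-value = {pvalue:.3f}")
```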

  5. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography conceals an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma correction, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, the most useful blocks are nominated for embedding based on their entropy and average; third, the right bit-plane is selected for embedding. This block selection scatters the secret message(s) randomly around the cover image. Different tests were performed to select a proper block size, which depends on the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
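
    The three-step selection can be sketched as follows: score blocks by entropy and mean intensity, then embed message bits in a higher bit-plane of the selected blocks. The block size, thresholds, chosen bit-plane, and pixel position are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit image block."""
    hist = np.bincount(block.ravel(), minlength=256) / block.size
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def embed_bit(block, bit, plane=3):
    """Write one message bit into the given bit-plane of a corner pixel."""
    out = block.copy()
    out[0, 0] = (out[0, 0] & ~np.uint8(1 << plane)) | np.uint8(bit << plane)
    return out

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bs = 8
for i in range(0, 64, bs):
    for j in range(0, 64, bs):
        blk = cover[i:i + bs, j:j + bs]
        # Select only bright, high-entropy blocks for embedding.
        if block_entropy(blk) > 4.0 and blk.mean() > 100:
            cover[i:i + bs, j:j + bs] = embed_bit(blk, bit=1, plane=3)
```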

  6. Sacrificing information for the greater good: how to select photometric bands for optimal accuracy

    NASA Astrophysics Data System (ADS)

    Stensbo-Smidt, Kristoffer; Gieseke, Fabian; Igel, Christian; Zirm, Andrew; Steenstrup Pedersen, Kim

    2017-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer amount of objects, spectral data cannot be obtained for all of them. Therefore, it is important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study promotes a feature selection algorithm, which selects the most informative magnitudes and colours for a given task of estimating physical quantities from photometric data alone. Using k-nearest neighbours regression, a well-known non-parametric machine learning method, we show that using the found features significantly increases the accuracy of the estimations compared to using standard features and standard methods. We illustrate the usefulness of the approach by estimating specific star formation rates (sSFRs) and redshifts (photo-z's) using only the broad-band photometry from the Sloan Digital Sky Survey (SDSS). For estimating sSFRs, we demonstrate that our method produces better estimates than traditional spectral energy distribution fitting. For estimating photo-z's, we show that our method produces more accurate photo-z's than the method employed by SDSS. The study highlights the general importance of performing proper model selection to improve the results of machine learning systems and how feature selection can provide insights into the predictive relevance of particular input features.
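
    The core loop of such a selection can be sketched with scikit-learn's k-nearest-neighbours regressor: greedily add the magnitude or colour that most improves cross-validated accuracy. This greedy forward search on synthetic data is one simple instantiation; the paper's exact search strategy and scoring are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def forward_select(X, y, n_features, k=5):
    """Greedy forward selection of feature columns for kNN regression."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        scores = [(cross_val_score(KNeighborsRegressor(k),
                                   X[:, selected + [j]], y, cv=5).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Illustrative stand-in data: 8 "photometric" features, 2 informative.
rng = np.random.default_rng(5)
X = rng.normal(size=(400, 8))
y = X[:, 1] - 0.5 * X[:, 4] + 0.1 * rng.normal(size=400)
print(forward_select(X, y, n_features=2))
```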

  7. Model-based selection of the robust JAK-STAT activation mechanism.

    PubMed

    Rybiński, Mikołaj; Gambin, Anna

    2012-09-21

    The JAK-STAT pathway family is a principal signaling mechanism in eukaryotic cells. Evolutionarily conserved roles of this mechanism include control over fundamental processes such as cell growth and apoptosis. Deregulation of JAK-STAT signaling is frequently associated with carcinogenesis, and JAK-STAT pathways become hyper-activated in many human tumors. Components of these pathways are therefore attractive drug targets, whose design requires models that are as adequate as possible. Although, in principle, JAK-STAT signaling is relatively simple, ambiguities in receptor activation prevent a clear explanation of the underlying molecular mechanism. Here, we compare four variants of a computational model of the JAK1/2-STAT1 signaling pathway. These variants capture known, basic discrepancies in the mechanism of activation of a cytokine receptor, in the context of all key components of the pathway. We carry out a comparative analysis using mass action kinetics. The investigated differences are so marginal that all models satisfy a goodness-of-fit criterion, to the extent that the state-of-the-art Bayesian model selection (BMS) method fails to significantly promote one model. Therefore, we comparatively investigate changes in the robustness of the JAK1/2-STAT1 pathway variants using global sensitivity analysis (GSA), complemented with identifiability analysis (IA). Both BMS and GSA are used to analyze the models for varying parameter values. We found that both BMS and GSA, narrowed down to the receptor activation component, slightly promote the least complex model. Further, an insightful, comprehensive GSA, motivated by the concept of robustness, allowed us to show that the precise order of the ligand-binding and receptor-dimerization reactions is not as important as the on-membrane pre-assembly of the dimers in the absence of ligand. The main value of this work is an evaluation of the usefulness of different model selection methods in a frequently

  8. Slice-selective broadband refocusing pulses for the robust generation of crushed spin-echoes.

    PubMed

    Janich, Martin A; McLean, Mary A; Noeske, Ralph; Glaser, Steffen J; Schulte, Rolf F

    2012-10-01

    A major challenge for in vivo magnetic resonance spectroscopy with point-resolved spectroscopy (PRESS) is the low signal intensity for the measurement of weakly scalar coupled spins, for example lactate. The chemical-shift displacement error between the two coupling partners of the lactate molecule leads to a signal decrease; using refocusing pulses with a broad bandwidth decreases this error and thereby increases the lactate signal. Previously, slice-selective broadband universal rotation pulses (S-BURBOP) were designed and applied as refocusing pulses in the PRESS pulse sequence (Janich MA, et al., Journal of Magnetic Resonance, 2011, 213, 126-135). However, S-BURBOP pulses leave a phase error across the slice which is superimposed on the spectra when spatially resolving the PRESS voxel. The present novel design of slice-selective broadband refocusing pulses (S-BREBOP) avoids this phase error. S-BREBOP pulses achieve 2.5 times the bandwidth of conventional Shinnar-Le Roux pulses and are robust against ±20% miscalibration of the B(1) amplitude. S-BREBOP pulses were validated in phantoms and in a patient with a low-grade brain tumor. Compared to conventional Shinnar-Le Roux pulses, they decrease the chemical-shift displacement error and consequently increase the lactate signal.

  9. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration.

    PubMed

    Kim, Jinkoo; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho

    2005-04-01

    A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texturemap-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on the accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range on both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Finally, the bone-only information on the CT image did not show convergence to the correct registration.
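
    The similarity metric driving this registration, mutual information, is estimated from a joint intensity histogram of the two images. A minimal sketch (the bin count is an assumption; the gradient-ascent optimizer and DRR generation are omitted):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# An image shares maximal MI with itself; an unrelated image shares little.
rng = np.random.default_rng(6)
img = rng.random((128, 128))
print(mutual_information(img, img),
      mutual_information(img, rng.random((128, 128))))
```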

  10. The Effects of Various Item Selection Methods on the Classification Accuracy and Classification Consistency of Criterion-Referenced Instruments.

    ERIC Educational Resources Information Center

    Smith, Douglas U.

    This study examined the effects of certain item selection methods on the classification accuracy and classification consistency of criterion-referenced instruments. Three item response data sets, representing varying situations of instructional effectiveness, were simulated. Five methods of item selection were then applied to each data set for the…

  11. Automated data selection method to improve robustness of diffuse optical tomography for breast cancer imaging

    PubMed Central

    Vavadi, Hamed; Zhu, Quing

    2016-01-01

    Imaging-guided near infrared diffuse optical tomography (DOT) has demonstrated great potential as an adjunct modality for differentiating malignant from benign breast lesions and for monitoring treatment response of breast cancers. However, diffused light measurements are sensitive to artifacts caused by outliers and measurement errors due to probe-tissue coupling, patient and probe motion, and tissue heterogeneity. In general, pre-processing of the measurements by experienced users is needed to manually remove these outliers and thereby reduce imaging artifacts. An automated method for outlier removal, data selection, and filtering in diffuse optical tomography is introduced in this manuscript. The method first combines several data sets collected from the contralateral normal breast of the same patient into a single robust reference data set, using statistical tests and linear fitting of the measurements. A second step improves the perturbation measurements by filtering outliers out of the lesion-site measurements using model-based analysis. Results from 20 malignant and benign cases show similar performance between manual and automated processing, and an improvement of about 27% in the malignant-to-benign tissue characterization ratio. PMID:27867711
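
    The reference-combining step can be sketched as fitting the expected linear decay of log-amplitude with source-detector distance and discarding measurements with large residuals. The linear model and the 2.5-sigma cutoff below are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def clean_reference(distances, log_amplitudes, n_sigma=2.5):
    """Reject outlier measurements against a linear log-amplitude model.

    In diffuse optics the log-amplitude decays roughly linearly with
    source-detector distance, so large residuals flag coupling errors.
    Returns a boolean mask of measurements to keep.
    """
    d = np.asarray(distances, float)
    a = np.asarray(log_amplitudes, float)
    slope, intercept = np.polyfit(d, a, 1)
    resid = a - (slope * d + intercept)
    return np.abs(resid) < n_sigma * resid.std()

rng = np.random.default_rng(7)
d = rng.uniform(2.0, 8.0, 100)
amp = -1.1 * d + 0.5 + rng.normal(scale=0.05, size=100)
amp[::17] += 1.0                       # simulated probe-coupling outliers
print(np.flatnonzero(~clean_reference(d, amp)))
```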

  12. Robust Selection of Cancer Survival Signatures from High-Throughput Genomic Data Using Two-Fold Subsampling

    PubMed Central

    Lee, Sangkyun; Rahnenführer, Jörg; Lang, Michel; De Preter, Katleen; Mestdagh, Pieter; Koster, Jan; Versteeg, Rogier; Stallings, Raymond L.; Varesio, Luigi; Asgharzadeh, Shahab; Schulte, Johannes H.; Fielitz, Kathrin; Schwermer, Melanie; Morik, Katharina; Schramm, Alexander

    2014-01-01

    Identifying relevant signatures for clinical patient outcome is a fundamental task in high-throughput studies. Signatures, composed of features such as mRNAs, miRNAs, SNPs or other molecular variables, are often non-overlapping, even though they have been identified from similar experiments considering samples with the same type of disease. The lack of a consensus is mostly due to the fact that sample sizes are far smaller than the numbers of candidate features to be considered, and therefore signature selection suffers from large variation. We propose a robust signature selection method that enhances the selection stability of penalized regression algorithms for predicting survival risk. Our method is based on an aggregation of multiple, possibly unstable, signatures obtained with the preconditioned lasso algorithm applied to random (internal) subsamples of a given cohort data, where the aggregated signature is shrunken by a simple thresholding strategy. The resulting method, RS-PL, is conceptually simple and easy to apply, relying on parameters automatically tuned by cross validation. Robust signature selection using RS-PL operates within an (external) subsampling framework to estimate the selection probabilities of features in multiple trials of RS-PL. These probabilities are used for identifying reliable features to be included in a signature. Our method was evaluated on microarray data sets from neuroblastoma, lung adenocarcinoma, and breast cancer patients, extracting robust and relevant signatures for predicting survival risk. Signatures obtained by our method achieved high prediction performance and robustness, consistently over the three data sets. Genes with high selection probability in our robust signatures have been reported as cancer-relevant. The ordering of predictor coefficients associated with signatures was well-preserved across multiple trials of RS-PL, demonstrating the capability of our method for identifying a transferable consensus signature
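
    The selection-probability idea behind RS-PL can be sketched with a plain lasso on random subsamples: count how often each feature receives a nonzero coefficient and keep features whose selection probability clears a threshold. The ordinary lasso below stands in for the preconditioned lasso on survival data, and all parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def selection_probabilities(X, y, n_trials=100, frac=0.5, alpha=0.1, seed=0):
    """Fraction of random subsamples in which each feature is selected."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_trials):
        idx = rng.choice(len(y), int(frac * len(y)), replace=False)
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        counts += model.coef_ != 0
    return counts / n_trials

# Illustrative data: 50 candidate features, 2 truly predictive.
rng = np.random.default_rng(8)
X = rng.normal(size=(200, 50))
y = 2 * X[:, 3] - 1.5 * X[:, 17] + rng.normal(size=200)
probs = selection_probabilities(X, y)
print(np.flatnonzero(probs > 0.8))     # stable signature candidates
```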

  13. The effects of demography and long-term selection on the accuracy of genomic prediction with sequence data.

    PubMed

    MacLeod, Iona M; Hayes, Ben J; Goddard, Michael E

    2014-12-01

    The use of dense SNPs to predict the genetic value of an individual for a complex trait is often referred to as "genomic selection" in livestock and crops, but is also relevant to human genetics to predict, for example, complex genetic disease risk. The accuracy of prediction depends on the strength of linkage disequilibrium (LD) between SNPs and causal mutations. If sequence data were used instead of dense SNPs, accuracy should increase because causal mutations are present, but demographic history and long-term negative selection also influence accuracy. We therefore evaluated genomic prediction, using simulated sequence in two contrasting populations: one reducing from an ancestrally large effective population size (Ne) to a small one, with high LD common in domestic livestock, while the second had a large constant-sized Ne with low LD similar to that in some human or outbred plant populations. There were two scenarios in each population; causal variants were either neutral or under long-term negative selection. For large Ne, sequence data led to a 22% increase in accuracy relative to ∼600K SNP chip data with a Bayesian analysis and a more modest advantage with a BLUP analysis. This advantage increased when causal variants were influenced by negative selection, and accuracy persisted when 10 generations separated reference and validation populations. However, in the reducing Ne population, there was little advantage for sequence even with negative selection. This study demonstrates the joint influence of demography and selection on accuracy of prediction and improves our understanding of how best to exploit sequence for genomic prediction.

  14. Robustness and Accuracy of Feature-Based Single Image 2-D–3-D Registration Without Correspondences for Image-Guided Intervention

    PubMed Central

    Armand, Mehran; Otake, Yoshito; Yau, Wai-Pan; Cheung, Paul Y. S.; Hu, Yong; Taylor, Russell H.

    2015-01-01

    2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established does not necessarily hold for image-guided interventions. Intraoperative image clutter and an imperfect feature extraction method may introduce false detections; moreover, due to the physics of X-ray imaging, 2-D image point features may be indistinguishable from each other and/or obscured by anatomy, causing further false detections. These issues create difficulties in establishing correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method to accomplish 2-D–3-D registration using a single image without the need for establishing paired correspondences in the presence of false detection. We formulate 2-D–3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed the state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art global optimal method that uses correct paired correspondences. PMID:23955696

  15. Pertussis Toxin Is a Robust and Selective Inhibitor of High Grade Glioma Cell Migration and Invasion

    PubMed Central

    Wang, Lei; Natali, Letizia; Karimi-Mostowfi, Nicki; Brifault, Coralie; Gonias, Steven L.

    2016-01-01

    In high grade glioma (HGG), extensive tumor cell infiltration of normal brain typically precludes identifying effective margins for surgical resection or irradiation. Pertussis toxin (PT) is a multimeric complex that inactivates diverse Gi/o G-protein coupled receptors (GPCRs). Despite the broad continuum of regulatory events controlled by GPCRs, PT may be applicable as a therapeutic. We have shown that the urokinase receptor (uPAR) is a major driver of HGG cell migration. uPAR-initiated cell-signaling requires a Gi/o GPCR, N-formyl Peptide Receptor 2 (FPR2), as an essential co-receptor and is thus, PT-sensitive. Herein, we show that PT robustly inhibits migration of three separate HGG-like cell lines that express a mutated form of the EGF Receptor (EGFR), EGFRvIII, which is constitutively active. PT also almost completely blocked the ability of HGG cells to invade Matrigel. In the equivalent concentration range (0.01–1.0 μg/mL), PT had no effect on cell survival and only affected proliferation of one cell line. Neutralization of EGFRvIII expression in HGG cells, which is known to activate uPAR-initiated cell-signaling, promoted HGG cell migration. The increase in HGG cell migration, induced by EGFRvIII neutralization, was entirely blocked by silencing FPR2 gene expression or by treating the cells with PT. When U87MG HGG cells were cultured as suspended neurospheres in serum-free, growth factor-supplemented medium, uPAR expression was increased. HGG cells isolated from neurospheres migrated through Transwell membranes without loss of cell contacts; this process was inhibited by PT by >90%. PT also inhibited expression of vimentin by HGG cells; vimentin is associated with epithelial-mesenchymal transition and worsened prognosis. We conclude that PT may function as a selective inhibitor of HGG cell migration and invasion. PMID:27977780

  16. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has been widely performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, using CZT-based small-animal SPECT for quantitative accuracy. We experimentally investigated quantitative measurements with respect to primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) for various energy window settings using Triumph X-SPECT. Two kinds of energy window settings were considered: an on-peak window and an off-peak window. In the on-peak window setting, energy centers were set on the photopeaks. In the off-peak window setting, the ratio of the energy differences between the photopeak and the lower and higher thresholds varied from 4:6 to 3:7. In addition, the energy-window width for 99mTc varied from 5% to 20%, and that for 201Tl varied from 10% to 30%. The results of this study enabled us to determine the optimal energy windows for each isotope in terms of primary count rate, CNR, and SF. We selected the optimal energy window as the one that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be the optimal energy window. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as the optimal energy window. Our results demonstrated that the proper energy window should be carefully chosen based on quantitative measurements in order to take advantage of desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
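
    The three figures of merit used in this optimization are simple to state. A minimal sketch computing primary count, contrast-to-noise ratio, and scatter fraction for one candidate window (the numbers are invented for illustration, not Triumph X-SPECT data):

```python
def window_metrics(primary, scatter, hot_mean, bg_mean, bg_std):
    """Figures of merit for one candidate energy window.

    primary, scatter: counts of unscattered and scattered photons accepted
    hot_mean, bg_mean, bg_std: ROI statistics from the reconstructed image
    """
    cnr = (hot_mean - bg_mean) / bg_std
    scatter_fraction = scatter / (primary + scatter)
    return primary, cnr, scatter_fraction

# Example: a narrow off-peak window trades a few primaries for less scatter.
print(window_metrics(primary=9.2e4, scatter=0.9e4,
                     hot_mean=410.0, bg_mean=120.0, bg_std=14.0))
```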

  17. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  18. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity

    PubMed Central

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-01-01

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode. PMID:26153773

  19. Expertise Effects in Face-Selective Areas are Robust to Clutter and Diverted Attention, but not to Competition.

    PubMed

    McGugin, Rankin Williams; Van Gulick, Ana E; Tamber-Rosenau, Benjamin J; Ross, David A; Gauthier, Isabel

    2015-09-01

    Expertise effects for nonface objects in face-selective brain areas may reflect stable aspects of neuronal selectivity that determine how observers perceive objects. However, bottom-up (e.g., clutter from irrelevant objects) and top-down manipulations (e.g., attentional selection) can influence activity, affecting the link between category selectivity and individual performance. We test the prediction that individual differences expressed as neural expertise effects for cars in face-selective areas are sufficiently stable to survive clutter and manipulations of attention. Additionally, behavioral work and work using event related potentials suggest that expertise effects may not survive competition; we investigate this using functional magnetic resonance imaging. Subjects varying in expertise with cars made 1-back decisions about cars, faces, and objects in displays containing one or 2 objects, with only one category attended. Univariate analyses suggest car expertise effects are robust to clutter, dampened by reducing attention to cars, but nonetheless more robust to manipulations of attention than competition. While univariate expertise effects are severely abolished by competition between cars and faces, multivariate analyses reveal new information related to car expertise. These results demonstrate that signals in face-selective areas predict expertise effects for nonface objects in a variety of conditions, although individual differences may be expressed in different dependent measures depending on task and instructions.

  20. Expertise Effects in Face-Selective Areas are Robust to Clutter and Diverted Attention, but not to Competition

    PubMed Central

    McGugin, Rankin Williams; Van Gulick, Ana E.; Tamber-Rosenau, Benjamin J.; Ross, David A.; Gauthier, Isabel

    2015-01-01

    Expertise effects for nonface objects in face-selective brain areas may reflect stable aspects of neuronal selectivity that determine how observers perceive objects. However, bottom-up (e.g., clutter from irrelevant objects) and top-down manipulations (e.g., attentional selection) can influence activity, affecting the link between category selectivity and individual performance. We test the prediction that individual differences expressed as neural expertise effects for cars in face-selective areas are sufficiently stable to survive clutter and manipulations of attention. Additionally, behavioral work and work using event related potentials suggest that expertise effects may not survive competition; we investigate this using functional magnetic resonance imaging. Subjects varying in expertise with cars made 1-back decisions about cars, faces, and objects in displays containing one or 2 objects, with only one category attended. Univariate analyses suggest car expertise effects are robust to clutter, dampened by reducing attention to cars, but nonetheless more robust to manipulations of attention than competition. While univariate expertise effects are severely abolished by competition between cars and faces, multivariate analyses reveal new information related to car expertise. These results demonstrate that signals in face-selective areas predict expertise effects for nonface objects in a variety of conditions, although individual differences may be expressed in different dependent measures depending on task and instructions. PMID:24682187

  1. Genomic selection accuracy for grain quality traits in biparental wheat populations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) is a promising tool for plant and animal breeding that uses genome wide molecular marker data to capture small and large effect quantitative trait loci and predict the genetic value of selection candidates. Genomic selection has been shown previously to have higher prediction ...

  2. Screening Accuracy of Level 2 Autism Spectrum Disorder Rating Scales: A Review of Selected Instruments

    ERIC Educational Resources Information Center

    Norris, Megan; Lecavalier, Luc

    2010-01-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be…

  3. Robust factor selection in early cell culture process development for the production of a biosimilar monoclonal antibody.

    PubMed

    Sokolov, Michael; Ritscher, Jonathan; MacKinnon, Nicola; Bielser, Jean-Marc; Brühlmann, David; Rothenhäusler, Dominik; Thanei, Gian; Soos, Miroslav; Stettler, Matthieu; Souquet, Jonathan; Broly, Hervé; Morbidelli, Massimo; Butté, Alessandro

    2017-01-01

    This work presents a multivariate methodology combining principal component analysis, the Mahalanobis distance and decision trees for the selection of process factors and their levels in early process development of generic molecules. It is applied to a high-throughput study testing more than 200 conditions for the production of a biosimilar monoclonal antibody at microliter scale. The methodology provides the most important selection criteria for the process design in order to improve product quality towards the quality attributes of the originator molecule. Robustness of the selections is ensured by cross-validation of each analysis step. The concluded selections are then successfully validated with an external data set. Finally, the results are compared to those obtained with widely used software, revealing similarities and clear advantages of the presented methodology. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:181-191, 2017.
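
    The first two stages of the methodology, projecting screened conditions into a principal-component space and ranking them by Mahalanobis distance to the originator's quality profile, can be sketched as follows; data shapes and the number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)

# Quality attributes of ~200 screened conditions and of originator lots.
conditions = rng.normal(size=(200, 10))
originator = rng.normal(loc=0.3, scale=0.8, size=(20, 10))

pca = PCA(n_components=3).fit(np.vstack([conditions, originator]))
scores_c = pca.transform(conditions)
scores_o = pca.transform(originator)

# Squared Mahalanobis distance of each condition to the originator cloud.
mu = scores_o.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scores_o, rowvar=False))
diff = scores_c - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
ranking = np.argsort(d2)               # closest-to-originator first
print(ranking[:5])
```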

  4. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    NASA Astrophysics Data System (ADS)

    van de Schoot, A. J. A. J.; Visser, J.; van Kesteren, Z.; Janssen, T. M.; Rasch, C. R. N.; Bel, A.

    2016-02-01

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal in terms of
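
    Underlying the comparison is ordinary Pareto dominance over plan objectives. A minimal sketch identifying non-dominated plans when target coverage is maximized (negated here to fit a minimization convention) and the two OAR doses are minimized; the plan values are invented for illustration:

```python
import numpy as np

def pareto_mask(points):
    """Boolean mask of non-dominated rows, all objectives to be minimized."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if mask[i]:
            dominates = np.all(points <= points[i], axis=1) & \
                        np.any(points < points[i], axis=1)
            if dominates.any():
                mask[i] = False
    return mask

# Columns: -CTV D99% (Gy, negated), rectum V30Gy (%), bladder V40Gy (%).
plans = np.array([[-43.0, 35.0, 28.0],
                  [-42.5, 30.0, 27.0],
                  [-43.2, 38.0, 30.0],
                  [-41.0, 36.0, 29.0]])  # the last plan is dominated
print(pareto_mask(plans))
```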

  5. VLT/SPHERE robust astrometry of the HR8799 planets at milliarcsecond-level accuracy. Orbital architecture analysis with PyAstrOFit

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Absil, O.; Gómez González, C. A.; Milli, J.; Girard, J. H.; Mawet, D.; Pueyo, L.

    2017-02-01

    Context. HR8799 is orbited by at least four giant planets, making it a prime target for the recently commissioned Spectro-Polarimetric High-contrast Exoplanet REsearch (VLT/SPHERE). As such, it was observed on five consecutive nights during the SPHERE science verification in December 2014. Aims: We aim to take full advantage of the SPHERE capabilities to derive accurate astrometric measurements based on H-band images acquired with the Infra-Red Dual-band Imaging and Spectroscopy (IRDIS) subsystem, and to explore the ultimate astrometric performance of SPHERE in this observing mode. We also aim to present a detailed analysis of the orbital parameters for the four planets. Methods: We performed thorough post-processing of the IRDIS images with the Vortex Imaging Processing (VIP) package to derive a robust astrometric measurement for the four planets. This includes the identification and careful evaluation of the different contributions to the error budget, including systematic errors. Combining our astrometric measurements with the ones previously published in the literature, we constrain the orbital parameters of the four planets using PyAstrOFit, our new open-source python package dedicated to orbital fitting using Bayesian inference with Monte-Carlo Markov Chain sampling. Results: We report the astrometric positions for epoch 2014.93 with an accuracy down to 2.0 mas, mainly limited by the astrometric calibration of IRDIS. For each planet, we derive the posterior probability density functions for the six Keplerian elements and identify sets of highly probable orbits. For planet d, there is clear evidence for nonzero eccentricity (e ≈ 0.35), without completely excluding solutions with smaller eccentricities. The three other planets are consistent with circular orbits, although their probability distributions spread beyond e = 0.2, and show a peak at e ≃ 0.1 for planet e. The four planets have consistent inclinations of approximately 30° with respect to the sky

  6. Selection of Optimum Vocabulary and Dialog Strategy for Noise-Robust Spoken Dialog Systems

    NASA Astrophysics Data System (ADS)

    Ito, Akinori; Oba, Takanobu; Konashi, Takashi; Suzuki, Motoyuki; Makino, Shozo

    Speech recognition in a noisy environment is one of the hottest topics in speech recognition research. Noise-tolerant acoustic models or noise reduction techniques are often used to improve recognition accuracy. In this paper, we propose a method to improve the accuracy of a spoken dialog system from a language-model point of view. In the proposed method, the dialog system automatically changes its language model and dialog strategy according to the estimated recognition accuracy in a noisy environment, in order to keep the performance of the system high. In a noise-free environment, the system accepts any utterance from a user. In a noisy environment, on the other hand, the system restricts its grammar and vocabulary. To realize this strategy, we investigated a method to avoid the user's out-of-grammar utterances through instructions given by the system to the user. Furthermore, we developed a method to estimate recognition accuracy from features extracted from noise signals. Finally, we implemented the proposed dialog system based on these investigations.

  7. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    PubMed Central

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized, and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data is hard to come by. Therefore, it seems preferable to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to a 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection. PMID:27190509
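
    The interaction between feature selection and one-class classification can be sketched with scikit-learn's OneClassSVM: train on positive examples only, then compare accuracy on held-out positives and negatives for two feature subsets. The data and subsets are synthetic illustrations, not the miRNA feature sets of the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(10)

# Synthetic positives and background negatives; only the first three
# features are informative, the remaining twelve are noise.
pos = np.hstack([rng.normal(1.0, 0.4, (300, 3)),
                 rng.normal(0.0, 1.0, (300, 12))])
neg = np.hstack([rng.normal(-1.0, 0.4, (300, 3)),
                 rng.normal(0.0, 1.0, (300, 12))])

for name, cols in [("informative", slice(0, 3)), ("all", slice(None))]:
    # One-class training uses positives only; negatives appear at test time.
    clf = OneClassSVM(nu=0.1, gamma="scale").fit(pos[:200, cols])
    acc_pos = np.mean(clf.predict(pos[200:, cols]) == 1)
    acc_neg = np.mean(clf.predict(neg[:, cols]) == -1)
    print(f"{name:11s} accuracy = {(acc_pos + acc_neg) / 2:.2f}")
```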

  8. Accuracy of initial codon selection by aminoacyl-tRNAs on the mRNA-programmed bacterial ribosome

    PubMed Central

    Zhang, Jingji; Ieong, Ka-Weng; Johansson, Magnus; Ehrenberg, Måns

    2015-01-01

    We used a cell-free system with pure Escherichia coli components to study initial codon selection of aminoacyl-tRNAs in ternary complex with elongation factor Tu and GTP on messenger RNA-programmed ribosomes. We took advantage of the universal rate-accuracy trade-off for all enzymatic selections to determine how the efficiency of initial codon readings decreased linearly toward zero as the accuracy of discrimination against near-cognate and wobble codon readings increased toward the maximal asymptote, the d value. We report data on the rate-accuracy variation for 7 cognate, 7 wobble, and 56 near-cognate codon readings comprising about 15% of the genetic code. Their d values varied about 400-fold in the 200–80,000 range depending on type of mismatch, mismatch position in the codon, and tRNA isoacceptor type. We identified error hot spots (d = 200) for U:G misreading in second and U:U or G:A misreading in third codon position by His-tRNAHis and, as also seen in vivo, Glu-tRNAGlu. We suggest that the proofreading mechanism has evolved to attenuate error hot spots in initial selection such as those found here. PMID:26195797

  9. Accuracy of initial codon selection by aminoacyl-tRNAs on the mRNA-programmed bacterial ribosome.

    PubMed

    Zhang, Jingji; Ieong, Ka-Weng; Johansson, Magnus; Ehrenberg, Måns

    2015-08-04

    We used a cell-free system with pure Escherichia coli components to study initial codon selection of aminoacyl-tRNAs in ternary complex with elongation factor Tu and GTP on messenger RNA-programmed ribosomes. We took advantage of the universal rate-accuracy trade-off for all enzymatic selections to determine how the efficiency of initial codon readings decreased linearly toward zero as the accuracy of discrimination against near-cognate and wobble codon readings increased toward the maximal asymptote, the d value. We report data on the rate-accuracy variation for 7 cognate, 7 wobble, and 56 near-cognate codon readings comprising about 15% of the genetic code. Their d values varied about 400-fold in the 200-80,000 range depending on type of mismatch, mismatch position in the codon, and tRNA isoacceptor type. We identified error hot spots (d = 200) for U:G misreading in second and U:U or G:A misreading in third codon position by His-tRNA(His) and, as also seen in vivo, Glu-tRNA(Glu). We suggest that the proofreading mechanism has evolved to attenuate error hot spots in initial selection such as those found here.

  10. Increased prediction accuracy in wheat breeding trials using a marker x environment interaction genomic selection model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally these models were developed without considering genotype × environment interaction (GE). Several authors have proposed extensions of the canonical GS model that accomm...

  11. Genomic selection accuracy using multi-family prediction models in a wheat breeding program

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) uses genome-wide molecular marker data to predict the genetic value of selection candidates in breeding programs. In plant breeding, the ability to produce large numbers of progeny per cross allows GS to be conducted within each family. However, this approach requires phenotyp...

  12. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce

    PubMed Central

    Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J

    2014-01-01

    Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees, representative of 214 open-pollinated families from 43 natural populations, was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets with the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and validation data sets shared half-sibs; these values were on average 90% of the accuracies achieved through traditionally estimated breeding values. The trend was the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower, at about half of the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808

  13. Genetic code translation displays a linear trade-off between efficiency and accuracy of tRNA selection

    PubMed Central

    Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns

    2012-01-01

    Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg2+ concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon–anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell. PMID:22190491
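
    The linear trade-off can be written compactly. With A the current accuracy of tRNA selection, d its maximal value for a given codon-anticodon pair, and the superscripted term the cognate efficiency extrapolated to vanishing accuracy, the abstract's statement corresponds to a relation of the following form (the notation is chosen here for illustration and may differ from the paper's):

```latex
\frac{k_{\mathrm{cat}}}{K_M}(A) \;=\;
\left(\frac{k_{\mathrm{cat}}}{K_M}\right)^{\!\max}
\left(1 - \frac{A}{d}\right), \qquad 1 \le A \le d,
```

    so the efficiency of cognate codon reading falls linearly to zero as the accuracy approaches its asymptote d.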

  14. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites

    PubMed Central

    Manzoni, Giulia; Briquet, Sylvie; Risco-Castillo, Veronica; Gaultier, Charlotte; Topçu, Selma; Ivănescu, Maria Larisa; Franetich, Jean-François; Hoareau-Coudert, Bénédicte; Mazier, Dominique; Silvie, Olivier

    2014-01-01

    Experimental genetics has been widely used to explore the biology of malaria parasites. The rodent parasites Plasmodium berghei and, less frequently, P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require large numbers of mice. Here we describe a novel strategy that combines positive-negative drug selection and flow cytometry-assisted sorting of fluorescent parasites for the rapid generation of drug-selectable marker-free P. berghei and P. yoelii mutant parasites expressing a GFP or a GFP-luciferase cassette, using minimal numbers of mice. We further illustrate how this new strategy facilitates phenotypic analysis of genetically modified parasites by fluorescence and bioluminescence imaging of P. berghei mutants arrested during liver stage development. PMID:24755823

  15. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites.

    PubMed

    Manzoni, Giulia; Briquet, Sylvie; Risco-Castillo, Veronica; Gaultier, Charlotte; Topçu, Selma; Ivănescu, Maria Larisa; Franetich, Jean-François; Hoareau-Coudert, Bénédicte; Mazier, Dominique; Silvie, Olivier

    2014-04-23

    Experimental genetics has been widely used to explore the biology of malaria parasites. The rodent parasites Plasmodium berghei and, less frequently, P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require large numbers of mice. Here we describe a novel strategy that combines positive-negative drug selection and flow cytometry-assisted sorting of fluorescent parasites for the rapid generation of drug-selectable marker-free P. berghei and P. yoelii mutant parasites expressing a GFP or a GFP-luciferase cassette, using minimal numbers of mice. We further illustrate how this new strategy facilitates phenotypic analysis of genetically modified parasites by fluorescence and bioluminescence imaging of P. berghei mutants arrested during liver stage development.

  16. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genome-wide molecular markers are readily being applied to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorp...

  17. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Sequence-based genotyping methods can generate de novo large numbers of markers. However, without a reference genome, these markers are unordered and typically have a lar...

  18. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...

  19. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-03-11

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions.
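
    The Eberhart and Russell method referenced here regresses each genotype's yield on an environmental index: a regression coefficient near 1 indicates broad adaptability, and a near-zero deviation from regression indicates stability. A minimal sketch with made-up yields (not the study's data or its Bayesian estimation) follows.

    ```python
    import numpy as np

    # rows = genotypes, columns = environments (illustrative numbers only)
    yields = np.array([[2.1, 2.8, 3.4, 1.9],
                       [2.5, 2.6, 2.9, 2.4],
                       [1.8, 3.0, 3.9, 1.5]])

    env_index = yields.mean(axis=0) - yields.mean()      # environmental index I_j
    for g, y in enumerate(yields):
        coef = np.polyfit(env_index, y, 1)
        resid = y - np.poly1d(coef)(env_index)
        beta = coef[0]                                   # adaptability (b ~ 1 is broad)
        s2d = (resid ** 2).sum() / (len(y) - 2)          # stability (S2d ~ 0 is stable)
        print(f"genotype {g}: b={beta:.2f}, S2d={s2d:.3f}")
    ```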

  20. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
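
    The core computation shared by these TTD models is a convolution of the tracer input history with the travel time distribution. The sketch below illustrates this with an exponential (well-mixed) TTD and a hypothetical NO3 input ramp; the distribution form, mean age, and input history are assumptions for demonstration, not the study's SDM or data.

    ```python
    import numpy as np

    dt, tau = 1.0, 25.0                     # years; tau is the assumed mean age
    t = np.arange(0, 200, dt)
    g = np.exp(-t / tau) / tau              # exponential TTD, integrates to ~1
    c_in = np.interp(t, [0, 100, 200], [0.0, 10.0, 10.0])  # hypothetical input ramp

    # c_out(T) = sum over ages of g(age) * c_in(T - age)
    c_out = np.convolve(c_in, g)[:len(t)] * dt
    print(f"predicted present-day concentration: {c_out[-1]:.2f}")
    ```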

  1. Selective CO2 adsorption in a robust and water-stable porous coordination polymer with new network topology.

    PubMed

    Nagarkar, Sanjog S; Chaudhari, Abhijeet K; Ghosh, Sujit K

    2012-01-02

    A robust and water-stable porous coordination polymer [Cd(NDC)0.5(PCA)]·Gx (1) (H2NDC = 2,6-naphthalenedicarboxylic acid, HPCA = 4-pyridinecarboxylic acid, G = guest molecules) with a new network topology has been synthesized solvothermally. The framework is a 3D porous material and forms a 1D channel along the c-axis, with channel dimensions of ~9.48 × 7.83 Å2. The compound has high selectivity for the uptake of CO2 over other gases (H2, O2, Ar, N2, and CH4). The framework is highly stable in the presence of water vapor, even at 60 °C. The high CO2 selectivity over other gases and the water stability make the compound a promising candidate for industrial post-combustion gas separation applications.

  2. Classification accuracy analysis of selected land use and land cover products in a portion of West-Central Lower Michigan

    NASA Astrophysics Data System (ADS)

    Ma, Kin Man

    2007-12-01

    Remote sensing satellites have been utilized to characterize and map land cover and its changes since the 1970s. However, uncertainties exist in almost all land use and land cover maps classified from remotely sensed images. In particular, it has been recognized that the spatial mis-registration of land cover maps can affect the true estimates of land use/land cover (LULC) changes. This dissertation addressed the following questions: what are the spatial patterns, magnitudes, and cover-dependencies of classification uncertainty associated with West-Central Lower Michigan's LULC products and how can the adverse effects of spatial misregistration on accuracy assessment be reduced? Two Michigan LULC products were chosen for comparison: 1998 Muskegon River Watershed (MRW) Michigan Resource Information Systems LULC map and a 2001 Integrated Forest Monitoring and Assessment Prescription Project (IFMAP). The 1m resolution 1998 MRW LULC map was derived from U.S. Geological Survey Digital Orthophoto Quarter Quadrangle (USGS DOQQs) color infrared imagery and was used as the reference map, since it has a thematic accuracy of 95%. The IFMAP LULC map was co-registered to a series of selected 1998 USGS DOQQs. The total combined root mean square error (rmse) distance of the georectified 2001 IFMAP was +/-12.20m. A spatial uncertainty buffer of at least 1.5 times the rmse was set at 20m so that polygon core areas would be unaffected by spatial misregistration noise. A new spatial misregistration buffer protocol (SPATIALM_ BUFFER) was developed to limit the effect of spatial misregistration on classification accuracy assessment. Spatial uncertainty buffer zones of 20m were generated around LULC polygons of both datasets. Eight-hundred seventeen (817) stratified random accuracy assessment points (AAPs) were generated across the 1998 MRW map. Classification accuracy and kappa statistics were generated for both the 817 AAPs and 604 AAPs comparisons. For the 817 AAPs comparison, the
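
    Accuracy assessment of this kind typically summarizes agreement between reference and mapped classes with overall accuracy and a kappa statistic. The sketch below computes both from an illustrative confusion matrix; the matrix values are hypothetical and not taken from the dissertation.

    ```python
    import numpy as np

    cm = np.array([[50,  4,  2],     # rows: reference class, cols: mapped class
                   [ 6, 60,  5],
                   [ 3,  2, 40]], dtype=float)

    n = cm.sum()
    p_o = np.trace(cm) / n                           # observed (overall) accuracy
    p_e = (cm.sum(0) * cm.sum(1)).sum() / n ** 2     # chance agreement from marginals
    kappa = (p_o - p_e) / (1 - p_e)
    print(f"overall accuracy={p_o:.3f}, kappa={kappa:.3f}")
    ```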

  3. Selective Adsorption of Sulfur Dioxide in a Robust Metal-Organic Framework Material.

    PubMed

    Savage, Mathew; Cheng, Yongqiang; Easun, Timothy L; Eyley, Jennifer E; Argent, Stephen P; Warren, Mark R; Lewis, William; Murray, Claire; Tang, Chiu C; Frogley, Mark D; Cinque, Gianfelice; Sun, Junliang; Rudić, Svemir; Murden, Richard T; Benham, Michael J; Fitch, Andrew N; Blake, Alexander J; Ramirez-Cuesta, Anibal J; Yang, Sihai; Schröder, Martin

    2016-10-01

    Selective adsorption of SO2 is realized in a porous metal-organic framework material, and in-depth structural and spectroscopic investigations using X-rays, infrared, and neutrons define the underlying interactions that cause SO2 to bind more strongly than CO2 and N2.

  4. The linear interplay of intrinsic and extrinsic noises ensures a high accuracy of cell fate selection in budding yeast

    PubMed Central

    Li, Yongkai; Yi, Ming; Zou, Xiufen

    2014-01-01

    To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single-cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indicator of cell fate. A strong dependence between the impulse of Cln1/2 and cell fate is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes a parallel shift of the separation ratio of Whi5P, whereas increasing extrinsic fluctuations leads to a mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximately linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292

  5. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  6. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
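
    To make the fusion step concrete, here is a minimal, hypothetical sketch of Adaboost-based feature-selection fusion: per-candidate features from the two sensors are concatenated, and the booster's feature importances indicate which ones drive the decision. The feature names, data, and parameters are invented for illustration and do not reproduce the modBMVT/RANSARC pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 600
    sar_feats = rng.normal(size=(n, 4))              # e.g., SAR contrast, size, ...
    ir_feats = rng.normal(size=(n, 4))               # e.g., IR intensity, shape, ...
    labels = (sar_feats[:, 0] + ir_feats[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)

    X = np.hstack([sar_feats, ir_feats])             # decision-level feature fusion
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100).fit(Xtr, ytr)
    print(f"fused detection accuracy: {clf.score(Xte, yte):.3f}")
    print("feature weights:", np.round(clf.feature_importances_, 3))
    ```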

  7. Robust Depth Estimation and Image Fusion Based on Optimal Area Selection

    PubMed Central

    Lee, Ik-Hyun; Mahmood, Muhammad Tariq; Choi, Tae-Sun

    2013-01-01

    Mostly, 3D cameras with depth-sensing capabilities employ active depth estimation techniques, such as stereo, the triangulation method, or time-of-flight. However, these methods are expensive. The cost can be reduced by applying passive optical methods, as they are inexpensive and efficient. In this paper, we suggest the use of one of the passive optical methods, named shape from focus (SFF), for 3D cameras. In the proposed scheme, first, an adaptive window is computed through an iterative process using a criterion. Then, the window is divided into four regions. In the next step, the best-focused area among the four regions is selected based on variation in the data. The effectiveness of the proposed scheme is validated using image sequences of synthetic and real objects. Comparative analysis based on the statistical metrics of correlation, mean square error (MSE), universal image quality index (UIQI), and structural similarity (SSIM) shows the effectiveness of the proposed scheme. PMID:24008281
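
    Shape from focus rests on scoring local sharpness across a focus stack and taking, per pixel, the frame where focus peaks. The sketch below uses a modified-Laplacian focus measure on synthetic data; it illustrates the general SFF principle, not the paper's adaptive-window and optimal-area-selection criterion.

    ```python
    import numpy as np

    def modified_laplacian(img):
        """Sum-modified-Laplacian style focus measure per pixel."""
        p = np.pad(img.astype(float), 1, mode="edge")
        mlx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
        mly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
        return mlx + mly

    # stack: frames acquired at increasing focus positions (synthetic here)
    stack = np.random.rand(10, 64, 64)
    focus = np.stack([modified_laplacian(f) for f in stack])
    depth_map = focus.argmax(axis=0)        # index of best-focused frame per pixel
    print(depth_map.shape, depth_map.min(), depth_map.max())
    ```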

  8. Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization

    NASA Astrophysics Data System (ADS)

    Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.

    2015-10-01

    The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.
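
    As an illustration of script-driven property evaluation for interatomic potentials, the sketch below uses the ASE library with its toy EMT calculator to fit an equation of state for fcc Cu. This is a generic example of the kind of automation described, not the NIST repository's actual scripts; the potential, element, and strain range are assumptions.

    ```python
    import numpy as np
    from ase.build import bulk
    from ase.calculators.emt import EMT
    from ase.eos import EquationOfState

    volumes, energies = [], []
    for scale in np.linspace(0.97, 1.03, 7):        # strain the cell around equilibrium
        atoms = bulk("Cu", "fcc", a=3.6 * scale)
        atoms.calc = EMT()
        volumes.append(atoms.get_volume())
        energies.append(atoms.get_potential_energy())

    eos = EquationOfState(volumes, energies)
    v0, e0, B = eos.fit()                           # equilibrium volume, energy, bulk modulus
    print(f"a0 = {(4 * v0) ** (1 / 3):.3f} Å, E0 = {e0:.3f} eV")  # fcc: V_primitive = a^3/4
    ```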

  9. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  10. The Role of Some Selected Psychological and Personality Traits of the Rater in the Accuracy of Self- and Peer-Assessment

    ERIC Educational Resources Information Center

    AlFallay, Ibrahim

    2004-01-01

    This paper investigates the role of some selected psychological and personality traits of learners of English as a foreign language in the accuracy of self- and peer-assessments. The selected traits were motivation types, self-esteem, anxiety, motivational intensity, and achievement. 78 students of English as a foreign language participated in…

  11. Variable selection and specification of robust QSAR models from multicollinear data: arylpiperazinyl derivatives with affinity and selectivity for alpha2-adrenoceptors.

    PubMed

    Salt, D W; Maccari, L; Botta, M; Ford, M G

    2004-01-01

    Two QSAR models have been identified that predict the affinity and selectivity of arylpiperazinyl derivatives for alpha1 and alpha2 adrenoceptors (ARs). The models have been specified and validated using 108 compounds whose structures and inhibition constants (Ki) are available in the literature [Barbaro et al., J. Med. Chem., 44 (2001) 2118; Betti et al., J. Med. Chem., 45 (2002) 3603; Barbaro et al., Bioorg. Med. Chem., 10 (2002) 361; Betti et al., J. Med. Chem., 46 (2003) 3555]. One hundred and forty-seven predictors have been calculated using the Cerius 2 software available from Accelrys. This set of variables exhibited redundancy and severe multicollinearity, which had to be identified and removed as appropriate in order to obtain robust regression models free of inflated errors for the beta estimates - so-called bouncing betas. Those predictors that contained information relevant to the alpha2 response were identified on the basis of their pairwise linear correlations with affinity (-log Ki) for alpha2 adrenoceptors; the remaining variables were discarded. Subsequent variable selection made use of Factor Analysis (FA) and Unsupervised Variable Selection (UzFS). The data was divided into test and training sets using cluster analysis. These two sets were characterised by similar and consistent distributions of compounds in a high dimensional, but relevant predictor space. Multiple regression was then used to determine a subset of predictors from which to determine QSAR models for affinity to alpha2-ARs. Two multivariate procedures, Continuum Regression (the Portsmouth formulation) and Canonical Correlation Analysis (CCA), have been used to specify models for affinity and selectivity, respectively. Reasonable predictions were obtained using these in silico screening tools.
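
    One concrete way to implement the relevance screening and multicollinearity pruning described above is sketched below: rank descriptors by absolute correlation with the response, then greedily drop any descriptor too correlated with one already kept. The data, thresholds, and greedy rule are illustrative assumptions, not the paper's FA/UzFS procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(108, 147))                 # 108 compounds x 147 descriptors
    X[:, 1] = X[:, 0] + rng.normal(0, 0.05, 108)    # inject a nearly collinear pair
    y = X[:, 0] - 0.5 * X[:, 5] + rng.normal(0, 0.5, 108)  # synthetic -log Ki

    r_y = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    keep = [j for j in np.argsort(-r_y) if r_y[j] > 0.2]    # relevance screen

    selected = []
    for j in keep:                                  # greedy collinearity pruning
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.9 for k in selected):
            selected.append(j)
    print("selected descriptors:", selected[:10])
    ```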

  12. Variable selection and specification of robust QSAR models from multicollinear data: arylpiperazinyl derivatives with affinity and selectivity for α2-adrenoceptors

    NASA Astrophysics Data System (ADS)

    Salt, D. W.; Maccari, L.; Botta, M.; Ford, M. G.

    2004-07-01

    Two QSAR models have been identified that predict the affinity and selectivity of arylpiperazinyl derivatives for α1 and α2 adrenoceptors (ARs). The models have been specified and validated using 108 compounds whose structures and inhibition constants (Ki) are available in the literature [Barbaro et al., J. Med. Chem., 44 (2001) 2118; Betti et al., J. Med. Chem., 45 (2002) 3603; Barbaro et al., Bioorg. Med. Chem., 10 (2002) 361; Betti et al., J. Med. Chem., 46 (2003) 3555]. One hundred and forty-seven predictors have been calculated using the Cerius 2 software available from Accelrys. This set of variables exhibited redundancy and severe multicollinearity, which had to be identified and removed as appropriate in order to obtain robust regression models free of inflated errors for the β estimates - so-called bouncing βs. Those predictors that contained information relevant to the α2 response were identified on the basis of their pairwise linear correlations with affinity (-log Ki) for α2 adrenoceptors; the remaining variables were discarded. Subsequent variable selection made use of Factor Analysis (FA) and Unsupervised Variable Selection (UzFS). The data was divided into test and training sets using cluster analysis. These two sets were characterised by similar and consistent distributions of compounds in a high dimensional, but relevant predictor space. Multiple regression was then used to determine a subset of predictors from which to determine QSAR models for affinity to α2-ARs. Two multivariate procedures, Continuum Regression (the Portsmouth formulation) and Canonical Correlation Analysis (CCA), have been used to specify models for affinity and selectivity, respectively. Reasonable predictions were obtained using these in silico screening tools.

  13. Mitigating arsenic crisis in the developing world: role of robust, reusable and selective hybrid anion exchanger (HAIX).

    PubMed

    German, Michael; Seingheng, Hul; SenGupta, Arup K

    2014-08-01

    To address the public health crisis caused by the lack of potable water, millions of tube wells have been installed across the world. From these tube wells, natural groundwater contamination by arsenic regularly puts the health of over 100 million people in South and Southeast Asia at risk. Although there have been many research projects, awards, and publications, appropriate treatment technology has not been matched to ground-level realities, and water solutions have not scaled to reach millions of people. For thousands of people from Nepal to India to Cambodia, hybrid anion exchange (HAIX) resins have provided arsenic-safe water for up to nine years. Synthesis of HAIX resins has been commercialized, and they are now available globally. Robust, reusable, and arsenic-selective, HAIX has been in operation in rural communities over numerous cycles of exhaustion-regeneration. All necessary testing and system maintenance is organized by community-level water staff. Removed arsenic is safely stored in a scientifically and environmentally appropriate manner to prevent future hazards to animals or people. Recent installations have shown the profitability of HAIX-based arsenic treatment, with capital payback periods of only two years in ideal locations. With an appropriate implementation model, HAIX-based treatment can rapidly scale and provide arsenic-safe water to at-risk populations.

  14. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

    Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as an understanding of the uncertainties associated with the SRM measurement system. In this paper, we use the SRM at the Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without a u-channel sample placed on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm, possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
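
    The measurement model motivating deconvolution can be sketched compactly: the recorded signal is the magnetization convolved with the sensor response plus noise, and a regularized inverse filter approximately recovers the magnetization. The response width, noise level, and regularization below are illustrative assumptions, not the KCC SRM's calibrated values.

    ```python
    import numpy as np

    dx = 1.0                                           # mm measurement spacing
    x = np.arange(0, 512, dx)
    response = np.exp(-0.5 * ((x - 256) / 20.0) ** 2)  # ~50 mm wide pick-up response
    response /= response.sum()
    m_true = (np.abs(x - 200) < 30).astype(float)      # a sharp magnetization zone

    # forward model: circular convolution with the (centered) sensor response + noise
    H = np.fft.fft(np.fft.ifftshift(response))
    measured = np.real(np.fft.ifft(np.fft.fft(m_true) * H))
    measured += np.random.default_rng(3).normal(0, 1e-3, x.size)

    eps = 1e-2                                         # regularization vs. noise blow-up
    m_rec = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)))
    print(f"max reconstruction error: {np.max(np.abs(m_rec - m_true)):.3f}")
    ```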

  15. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490

  16. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling.

    PubMed

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z; Shamma, Shihab A; Babadi, Behtash

    2016-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
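
    As a deliberately simplified stand-in for the paper's MAP/EM state-space decoder, the sketch below decodes a simulated listener's attentional state by sliding-window correlation of a synthetic neural signal with the two speech envelopes. Everything here (signals, window length, decision rule) is an illustrative assumption; the actual method estimates state dynamics with confidence intervals.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs, dur = 100, 60                             # 100 Hz envelopes, 60 s trial
    env1, env2 = rng.random(fs * dur), rng.random(fs * dur)
    state = np.arange(fs * dur) > fs * 30         # attention switches at t = 30 s
    neural = np.where(state, env2, env1) + rng.normal(0, 0.5, fs * dur)

    win = 5 * fs                                  # 5 s sliding window
    correct = []
    for start in range(0, fs * dur - win, fs):
        sl = slice(start, start + win)
        r1 = np.corrcoef(neural[sl], env1[sl])[0, 1]
        r2 = np.corrcoef(neural[sl], env2[sl])[0, 1]
        correct.append((r2 > r1) == state[start + win // 2])  # decode vs. true state
    print(f"decoding accuracy: {np.mean(correct):.2f}")
    ```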

  17. Robust prediction of B-factor profile from sequence using two-stage SVR based on random forest feature selection.

    PubMed

    Pan, Xiao-Yong; Shen, Hong-Bin

    2009-01-01

    B-factor is highly correlated with protein internal motion and is used to measure the uncertainty in the position of an atom within a crystal structure. Although the rapid progress of structural biology in recent years has made more accurate protein structures available than ever, with the avalanche of new protein sequences emerging during the post-genomic era, the gap between the known protein sequences and the known protein structures grows wider and wider. It is urgent to develop automated methods to predict the B-factor profile directly from amino acid sequences, so as to be able to utilize them for basic research in a timely manner. In this article, we propose a novel approach, called PredBF, to predict the real value of the B-factor. We first extract both global and local features from the protein sequences as well as their evolution information; then random forest feature selection is applied to rank their importance, and the most important features are input to a two-stage support vector regression (SVR) for prediction, where the initial predicted outputs from the 1st SVR are further input to the 2nd-layer SVR for final refinement. Our results reveal that a systematic analysis of the importance of different features yields deep insights into their different contributions and is very necessary for developing effective B-factor prediction tools. The two-layer SVR prediction model designed in this study further enhanced the robustness of predicting the B-factor profile. As a web server, PredBF is freely available at: http://www.csbio.sjtu.edu.cn/bioinf/PredBF for academic use.
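
    A minimal, hypothetical sketch of the two-stage idea follows: random-forest importances pick the top features, a first SVR makes an initial prediction, and a second SVR refines it using that prediction as an extra input. The data, feature counts, and hyperparameters are invented; this is not the PredBF implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 60))                         # sequence-derived features
    y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 400)  # synthetic B-factor profile

    rank = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    top = np.argsort(rank.feature_importances_)[::-1][:10]  # keep the 10 best features

    svr1 = SVR().fit(X[:, top], y)                          # stage 1: initial prediction
    stage2_in = np.column_stack([X[:, top], svr1.predict(X[:, top])])
    svr2 = SVR().fit(stage2_in, y)                          # stage 2: refinement
    print(f"fit correlation: {np.corrcoef(svr2.predict(stage2_in), y)[0, 1]:.3f}")
    # (in-sample for brevity; a real evaluation would use held-out data)
    ```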

  18. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy with respect to position and shape is quantitatively evaluated. In order to apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. The results show that the proposed method is feasible and effective and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
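
    The two indices can be computed in a few lines. The sketch below cross-correlates a simulated waveform with a reference: the peak absolute correlation serves as the MACCC-style shape index, and the peak lag, converted through an assumed propagation distance, yields a group-velocity error. The signals, distance, and arrival time are synthetic assumptions, not the paper's FEM outputs.

    ```python
    import numpy as np

    fs, dist = 1e7, 0.5                       # 10 MHz sampling; 0.5 m propagation
    t = np.arange(0, 2e-4, 1 / fs)
    ref = np.exp(-((t - 1.00e-4) / 1e-5) ** 2) * np.sin(2 * np.pi * 2e5 * t)
    sim = np.exp(-((t - 1.02e-4) / 1e-5) ** 2) * np.sin(2 * np.pi * 2e5 * (t - 2e-6))

    cc = np.correlate(sim - sim.mean(), ref - ref.mean(), mode="full")
    cc /= np.std(sim) * np.std(ref) * len(t)               # normalized cross-correlation
    maccc = np.max(np.abs(cc))                             # shape accuracy index
    lag = (np.argmax(np.abs(cc)) - (len(t) - 1)) / fs      # arrival-time shift (s)
    v_ref = dist / 1.00e-4                                 # reference group velocity
    v_sim = dist / (1.00e-4 + lag)
    gve = abs(v_sim - v_ref) / v_ref * 100                 # position accuracy (%)
    print(f"MACCC={maccc:.3f}, GVE={gve:.2f}%")
    ```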

  19. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested whether using aerial measurements of canopy temperature and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on the training and test sets, together with grain yield on the training set, were modeled as multivariate and compared with univariate models using grain yield on the training set only. Cross-validation accuracies were estimated within and across environments, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic, prediction models on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  20. A Chemically Synthesized Capture Agent Enables the Selective, Sensitive, and Robust Electrochemical Detection of Anthrax Protective Antigen

    DTIC Science & Technology

    2014-08-01

    We report on a robust and sensitive approach for detecting protective antigen (PA) exotoxin from Bacillus anthracis in complex media. A peptide-based... There exists an unmet need for rapid, sensitive, and field-stable assays for pathogen detection. Bacillus anthracis is... exotoxin from Bacillus anthracis in complex media. A peptide-based capture agent against PA was developed by improving a bacteria display-developed

  1. An Analysis of the Selected Materials Used in Step Measurements During Pre-Fits of Thermal Protection System Tiles and the Accuracy of Measurements Made Using These Selected Materials

    NASA Technical Reports Server (NTRS)

    Kranz, David William

    2010-01-01

    The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles, and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest accuracy of measurement in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences in height from one tile to another are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of .001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool rather than the Shim Pack yielded variable differences in those tests.

  2. Genome-enabled selection doubles the accuracy of predicted breeding values for bacterial cold water disease resistance compared to traditional family-based selection in rainbow trout aquaculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We have shown previously that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  3. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    PubMed

    Lu, Tao

    2016-01-01

    Gene regulatory network (GRN) analysis evaluates the interactions between genes and looks for models that describe gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time-course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of the experimental duration. We illustrate the usefulness of the model and the associated statistical methods through simulation and a real-application example.

  4. Robust estimates of divergence times and selection with a poisson random field model: a case study of comparative phylogeographic data.

    PubMed

    Amei, Amei; Smith, Brian Tilston

    2014-01-01

    Mutation frequencies can be modeled as a Poisson random field (PRF) to estimate speciation times and the degree of selection on newly arisen mutations. This approach provides a quantitative theory for comparing intraspecific polymorphism with interspecific divergence in the presence of selection and can be used to estimate population genetic parameters. Although the original PRF model has been extended to more general biological settings to make statistical inference about selection and divergence among model organisms, it has not been incorporated into phylogeographic studies that focus on estimating population genetic parameters for nonmodel organisms. Here, we modified a recently developed time-dependent PRF model to independently estimate genetic parameters from a nuclear and mitochondrial DNA data set of 22 sister pairs of birds that have diverged across a biogeographic barrier. We found that species that inhabit humid habitats had more recent divergence times and larger effective population sizes than those that inhabit drier habitats, and divergence times estimated from the PRF model were similar to estimates from a coalescent species-tree approach. Selection coefficients were higher in sister pairs that inhabited drier habitats than in those in humid habitats, but overall the mitochondrial DNA was under weak selection. Our study indicates that PRF models are useful for estimating various population genetic parameters and serve as a framework for incorporating estimates of selection into comparative phylogeographic studies.

  5. Accuracy of genomic prediction for BCWD resistance in rainbow trout using different genotyping platforms and genomic selection models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we aimed to (1) predict genomic estimated breeding value (GEBV) for bacterial cold water disease (BCWD) resistance by genotyping training (n=583) and validation samples (n=53) with two genotyping platforms (24K RAD-SNP and 49K SNP) and using different genomic selection (GS) models (Ba...

  6. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

    Repeatability and reproducibility of parts produced by selective laser melting are a standing issue, which, coupled with a lack of standardized quality control, presents a major hindrance to the maturation of selective laser melting as an industrial-scale process. Consequently, numerical process modelling has been adopted to improve the predictability of the outputs of the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially for components having overhanging structures. In this paper, a systematic approach towards establishing the reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters, but at different locations in the powder bed and in different laser scanning directions. The difference in melt track widths and depths captures the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model and subjected to uncertainty and reliability analysis. Cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with observed experimental values. The technique is subsequently extended to reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density, and thermal homogeneity.
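
    The reliability characterization described here amounts to propagating processing-parameter scatter through a model and reading probabilities off the resulting cumulative distribution. The sketch below does this with a made-up surrogate relation between beam power, scan speed, and melt width; the distributions and the surrogate are illustrative assumptions, not the calibrated thermal model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    power = rng.normal(200, 8, 10_000)            # W; location/direction scatter
    speed = rng.normal(1000, 20, 10_000)          # mm/s scan speed scatter

    width = 40.0 * np.sqrt(power / speed)         # hypothetical surrogate, in um
    width.sort()
    cdf = np.arange(1, width.size + 1) / width.size
    p_ok = np.interp(110.0, width, cdf)           # P(melt width below a 110 um limit)
    print(f"P(melt width < 110 um) = {p_ok:.3f}")
    ```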

  7. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

    Field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…

  8. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

    Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) was 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05); whereas, self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) in patients with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assess completion of 24-hour urine collection.
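
    The creatinine-index criterion mentioned above is simple to state in code: the ratio of measured 24-hour creatinine to an expected value (scaled by body weight and sex), flagged when below 0.7. The expected-excretion constants below are rough illustrative values, not the reviewed studies' exact norms.

    ```python
    def incomplete_collection(measured_cr_mg, weight_kg, male, threshold=0.7):
        """Flag a 24-h urine collection as incomplete via a creatinine index."""
        expected = (23.0 if male else 18.0) * weight_kg   # assumed mg/kg/day norms
        return measured_cr_mg / expected < threshold      # True = likely incomplete

    print(incomplete_collection(measured_cr_mg=900, weight_kg=70, male=True))
    ```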

  9. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain.

    PubMed

    Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan

    2015-01-16

    We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain.
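
    The frequency-tagging analysis reduces to inspecting the EEG amplitude spectrum at the oddball frequency and its harmonics relative to neighboring bins. The sketch below computes such an SNR on a synthetic 60-s signal; the sampling rate, response amplitude, and neighbor window are illustrative assumptions, not the study's recording parameters.

    ```python
    import numpy as np

    fs, dur, f_odd = 512.0, 60.0, 1.18            # oddball faces at 1.18 Hz
    t = np.arange(0, dur, 1 / fs)
    eeg = 0.5 * np.sin(2 * np.pi * f_odd * t) + np.random.default_rng(7).normal(0, 1, t.size)

    amp = np.abs(np.fft.rfft(eeg)) / t.size       # amplitude spectrum
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def snr_at(f, n_neighbors=20):
        i = np.argmin(np.abs(freqs - f))          # bin nearest the target frequency
        neighbors = np.r_[amp[i - n_neighbors:i - 1], amp[i + 2:i + n_neighbors + 1]]
        return amp[i] / neighbors.mean()

    for harmonic in (1, 2, 3):
        print(f"SNR at {harmonic * f_odd:.2f} Hz: {snr_at(harmonic * f_odd):.2f}")
    ```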

  10. Prospects of Genomic Prediction in the USDA Soybean Germplasm Collection: Historical Data Creates Robust Models for Enhancing Selection of Accessions

    PubMed Central

    Jarquin, Diego; Specht, James; Lorenz, Aaron

    2016-01-01

    The identification and mobilization of useful genetic variation from germplasm banks for use in breeding programs is critical for future genetic gain and protection against crop pests. Plummeting costs of next-generation sequencing and genotyping are revolutionizing the way in which researchers and breeders interface with plant germplasm collections. An example of this is the high-density genotyping of the entire USDA Soybean Germplasm Collection. We assessed the usefulness of 50K single nucleotide polymorphism data collected on 18,480 domesticated soybean (Glycine max) accessions and vast historical phenotypic data for developing genomic prediction models for protein, oil, and yield. The resulting genomic prediction models explained an appreciable amount of the variation in accession performance in independent validation trials, with correlations between predicted and observed values reaching up to 0.92 for oil and protein and 0.79 for yield. The optimization of training set design was explored using a series of cross-validation schemes. It was found that the target population and environment need to be well represented in the training set. Second, genomic prediction training sets appear to be robust to the presence of data from diverse geographical locations and genetic clusters. This finding, however, depends on the influence of shattering and lodging, and may be specific to soybean with its presence of maturity groups. The distribution of 7608 nonphenotyped accessions was examined through the application of genomic prediction models. The distribution of predictions for phenotyped accessions was representative of the distribution of predictions for nonphenotyped accessions, with no nonphenotyped accession predicted to fall far outside the range of predictions for phenotyped accessions. PMID:27247288

  11. The Impact of Learning Curve Model Selection and Criteria for Cost Estimation Accuracy in the DoD

    DTIC Science & Technology

    2016-04-30

    Department of the Army. Department of the Air Force. (2007). Air Force cost estimating handbook. Washington, DC: Author. Everest, J. D. (1988). Measuring... Thirteenth Annual Acquisition Research Symposium, Thursday Sessions, Volume II: The Impact of Learning Curve Model Selection and Criteria for Cost Estimation Accuracy in the DoD. Synergy for Informed Change. Panel 21. Methods for Improving Cost Estimates for Defense Acquisition Projects. Thursday, May 5, 2016, 3:30 p.m.
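
    For context on what such studies compare, the sketch below implements the two textbook learning-curve forms usually at issue: Crawford's unit-cost model and Wright's cumulative-average model, both built on y = a * x^b with b = ln(slope)/ln(2). The first-unit cost and the 85% slope are illustrative values, not the report's data.

    ```python
    import numpy as np

    a, slope = 100.0, 0.85            # first-unit cost and learning slope
    b = np.log(slope) / np.log(2)     # learning exponent

    units = np.arange(1, 9)
    crawford_unit = a * units ** b                   # Crawford: cost of the x-th unit
    wright_total = a * units ** (1 + b)              # Wright: total cost of first x units
    wright_unit = np.diff(np.r_[0.0, wright_total])  # implied per-unit cost under Wright
    print(np.round(crawford_unit, 1))
    print(np.round(wright_unit, 1))
    ```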

  12. Identification of selective inhibitors of RET and comparison with current clinical candidates through development and validation of a robust screening cascade

    PubMed Central

    Watson, Amanda J.; Hopkins, Gemma V.; Hitchin, Samantha; Begum, Habiba; Jones, Stuart; Jordan, Allan; Holt, Sarah; March, H. Nikki; Newton, Rebecca; Small, Helen; Stowell, Alex; Waddell, Ian D.; Waszkowycz, Bohdan; Ogilvie, Donald J.

    2016-01-01

    RET (REarranged during Transfection) is a receptor tyrosine kinase which plays pivotal roles in regulating cell survival, differentiation, proliferation, migration, and chemotaxis. Activation of RET is a mechanism of oncogenesis in medullary thyroid carcinomas, where both germline and sporadic activating somatic mutations are prevalent. At present, there are no known specific RET inhibitors in clinical development, although many potent inhibitors of RET have been opportunistically identified through selectivity profiling of compounds initially designed to target other tyrosine kinases. Vandetanib and cabozantinib, both multi-kinase inhibitors with RET activity, are approved for use in medullary thyroid carcinoma, but additional pharmacological activities, most notably inhibition of the vascular endothelial growth factor receptor VEGFR2 (KDR), lead to dose-limiting toxicity. The recent identification of RET fusions, present in ~1% of lung adenocarcinoma patients, has renewed interest in the identification and development of more selective RET inhibitors lacking the toxicities associated with the current treatments. In an earlier publication [Newton et al, 2016; 1] we reported the discovery of a series of 2-substituted phenol quinazolines as potent and selective RET kinase inhibitors. Here we describe the development of the robust screening cascade which allowed the identification and advancement of this chemical series. Furthermore, we have profiled a panel of RET-active clinical compounds both to validate the cascade and to confirm that none displays a RET-selective target profile. PMID:27429741

  13. A Robust Highly Interpenetrated Metal−Organic Framework Constructed from Pentanuclear Clusters for Selective Sorption of Gas Molecules

    SciTech Connect

    Zhang, Zhangjing; Xiang, Shengchang; Chen, Yu-Sheng; Ma, Shengqian; Lee, Yongwoo; Phely-Bobin, Thomas; Chen, Banglin

    2010-10-22

    A three-dimensional microporous metal-organic framework, Zn5(BTA)6(TDA)2·15DMF·8H2O (1; HBTA = 1,2,3-benzenetriazole; H2TDA = thiophene-2,5-dicarboxylic acid), comprising pentanuclear [Zn5] cluster units, was obtained through a one-pot solvothermal reaction of Zn(NO3)2, 1,2,3-benzenetriazole, and thiophene-2,5-dicarboxylate. The activated 1 displays type-I N2 gas sorption behavior with a Langmuir surface area of 607 m2 g-1 and exhibits interesting selective gas adsorption for C2H2/CH4 and CO2/CH4.

  14. Robustness. [in space systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    1993-01-01

    The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded, together with functional concepts, materials, and fabrication approach, against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, formulation of criteria for the required robustness, concept selection, detail design, manufacturing and verification, and operations.

  15. Compact and phase-error-robust multilayered AWG-based wavelength selective switch driven by a single LCOS.

    PubMed

    Sorimoto, Keisuke; Tanizawa, Ken; Uetsuka, Hisato; Kawashima, Hitoshi; Mori, Masahiko; Hasama, Toshifumi; Ishikawa, Hiroshi; Tsuda, Hiroyuki

    2013-07-15

    A novel liquid crystal on silicon (LCOS)-based wavelength selective switch (WSS) is proposed, fabricated, and demonstrated. It employs a multilayered arrayed waveguide grating (AWG) as a wavelength multiplexer/demultiplexer. The LCOS deflects spectrally decomposed beams channel by channel and switches them to the desired waveguide layers of the multilayered AWG. In order to obtain the multilayered AWG with high yield, phase errors of the AWG are externally compensated for by an additional phase modulation with the LCOS. This additional phase modulation is applied to the equivalent image of the facet of the AWG, which is projected by a relay lens. In our previously reported WSS configuration, a somewhat large footprint and increased cost were the drawbacks, since two LCOSs were required: one LCOS was driven for the inter-port switching operation, and the other for the phase-error compensation. In the newly proposed configuration, on the other hand, both switching and compensation operations are performed using a single LCOS. This reduction in component count is realized by introducing a folded configuration with a reflector. The volume of the WSS optics is 80 × 100 × 60 mm³, approximately 40% smaller than the previous configuration. The polarization-dependent loss and inter-channel crosstalk are less than 1.5 dB and -21.0 dB, respectively. Error-free transmission of a 40-Gbit/s NRZ-OOK signal through the WSS is successfully demonstrated.

  16. Influence of Raw Image Preprocessing and Other Selected Processes on Accuracy of Close-Range Photogrammetric Systems According to Vdi 2634

    NASA Astrophysics Data System (ADS)

    Reznicek, J.; Luhmann, T.; Jepping, C.

    2016-06-01

    This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity, and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error (LME), which characterizes the accuracy of the validated photogrammetric system. For higher reliability the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to assess the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.

  17. Selective logging in tropical forests decreases the robustness of liana–tree interaction networks to the loss of host tree species

    PubMed Central

    Magrach, Ainhoa; Senior, Rebecca A.; Rogers, Andrew; Nurdin, Deddy; Benedick, Suzan; Laurance, William F.; Santamaria, Luis; Edwards, David P.

    2016-01-01

    Selective logging is one of the major drivers of tropical forest degradation, causing important shifts in species composition. Whether such changes modify interactions between species and the networks in which they are embedded remain fundamental questions to assess the ‘health’ and ecosystem functionality of logged forests. We focus on interactions between lianas and their tree hosts within primary and selectively logged forests in the biodiversity hotspot of Malaysian Borneo. We found that lianas were more abundant, had higher species richness, and different species compositions in logged than in primary forests. Logged forests showed heavier liana loads disparately affecting slow-growing tree species, which could exacerbate the loss of timber value and carbon storage already associated with logging. Moreover, simulation scenarios of host tree local species loss indicated that logging might decrease the robustness of liana–tree interaction networks if heavily infested trees (i.e. the most connected ones) were more likely to disappear. This effect is partially mitigated in the short term by the colonization of host trees by a greater diversity of liana species within logged forests, yet this might not compensate for the loss of preferred tree hosts in the long term. As a consequence, species interaction networks may show a lagged response to disturbance, which may trigger sudden collapses in species richness and ecosystem function in response to additional disturbances, representing a new type of ‘extinction debt’. PMID:26936241

  18. Selective logging in tropical forests decreases the robustness of liana-tree interaction networks to the loss of host tree species.

    PubMed

    Magrach, Ainhoa; Senior, Rebecca A; Rogers, Andrew; Nurdin, Deddy; Benedick, Suzan; Laurance, William F; Santamaria, Luis; Edwards, David P

    2016-03-16

    Selective logging is one of the major drivers of tropical forest degradation, causing important shifts in species composition. Whether such changes modify interactions between species and the networks in which they are embedded remain fundamental questions to assess the 'health' and ecosystem functionality of logged forests. We focus on interactions between lianas and their tree hosts within primary and selectively logged forests in the biodiversity hotspot of Malaysian Borneo. We found that lianas were more abundant, had higher species richness, and different species compositions in logged than in primary forests. Logged forests showed heavier liana loads disparately affecting slow-growing tree species, which could exacerbate the loss of timber value and carbon storage already associated with logging. Moreover, simulation scenarios of host tree local species loss indicated that logging might decrease the robustness of liana-tree interaction networks if heavily infested trees (i.e. the most connected ones) were more likely to disappear. This effect is partially mitigated in the short term by the colonization of host trees by a greater diversity of liana species within logged forests, yet this might not compensate for the loss of preferred tree hosts in the long term. As a consequence, species interaction networks may show a lagged response to disturbance, which may trigger sudden collapses in species richness and ecosystem function in response to additional disturbances, representing a new type of 'extinction debt'.

  19. Transcriptomic Characterization of Innate and Acquired Immune Responses in Red-Legged Partridges (Alectoris rufa): A Resource for Immunoecology and Robustness Selection.

    PubMed

    Sevane, Natalia; Cañon, Javier; Gil, Ignacio; Dunner, Susana

    2015-01-01

    Present and future challenges for wild partridge populations include resistance against possible disease transmission after restocking with captive-reared individuals, and the need to cope with the stress prompted by new, dynamic and challenging scenarios. Selection of individuals with the best immune ability may be a good strategy to improve general immunity, and hence adaptation to stress. In this study, non-infectious challenges with phytohemagglutinin (PHA) and sheep red blood cells allowed the classification of red-legged partridges (Alectoris rufa) according to their overall immune responses (IR). Skin from the area of injection of PHA and spleen, both from animals showing extremely high and low IR, were selected to investigate the transcriptional profiles underlying the differing ability to cope with pathogens and external aggressions. RNA-seq yielded 97 million raw reads from eight sequencing libraries, and approximately 84% of the processed reads were mapped to the reference chicken genome. Differential expression analysis identified 1488 up- and 107 down-regulated loci in individuals with high IR versus low IR. Partridges displaying higher innate IR show an enhanced activation of host defence gene pathways complemented by a tightly controlled desensitization that facilitates the return to cellular homeostasis. These findings indicate that the immune system's ability to respond to aggressions (either diseases or stress produced by environmental changes) involves extensive transcriptional and post-transcriptional regulation, and they expand our understanding of the molecular mechanisms of the avian immune system, opening the possibility of improving disease resistance or robustness using genome-assisted selection (GAS) approaches for increased IR in partridges, with genes such as AVN or BF2 as markers. This study provides the first transcriptome sequencing data of the Alectoris genus, a resource for molecular ecology that enables integration of genomic tools

  20. Transcriptomic Characterization of Innate and Acquired Immune Responses in Red-Legged Partridges (Alectoris rufa): A Resource for Immunoecology and Robustness Selection

    PubMed Central

    Sevane, Natalia; Cañon, Javier; Gil, Ignacio; Dunner, Susana

    2015-01-01

    Present and future challenges for wild partridge populations include resistance against possible disease transmission after restocking with captive-reared individuals, and the need to cope with the stress prompted by new, dynamic and challenging scenarios. Selection of individuals with the best immune ability may be a good strategy to improve general immunity, and hence adaptation to stress. In this study, non-infectious challenges with phytohemagglutinin (PHA) and sheep red blood cells allowed the classification of red-legged partridges (Alectoris rufa) according to their overall immune responses (IR). Skin from the area of injection of PHA and spleen, both from animals showing extremely high and low IR, were selected to investigate the transcriptional profiles underlying the differing ability to cope with pathogens and external aggressions. RNA-seq yielded 97 million raw reads from eight sequencing libraries, and approximately 84% of the processed reads were mapped to the reference chicken genome. Differential expression analysis identified 1488 up- and 107 down-regulated loci in individuals with high IR versus low IR. Partridges displaying higher innate IR show an enhanced activation of host defence gene pathways complemented by a tightly controlled desensitization that facilitates the return to cellular homeostasis. These findings indicate that the immune system's ability to respond to aggressions (either diseases or stress produced by environmental changes) involves extensive transcriptional and post-transcriptional regulation, and they expand our understanding of the molecular mechanisms of the avian immune system, opening the possibility of improving disease resistance or robustness using genome-assisted selection (GAS) approaches for increased IR in partridges, with genes such as AVN or BF2 as markers. This study provides the first transcriptome sequencing data of the Alectoris genus, a resource for molecular ecology that enables integration of genomic tools

  1. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy.

    PubMed

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-05

    Although the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, capturing in the calibration sets the whole variability of diesel samples from diverse production origins remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that, using this methodology, the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proven in on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% of recovery, flash point and sulphur content.
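
    For readers unfamiliar with the Kennard-Stone algorithm mentioned above, the following minimal sketch illustrates its max-min selection logic; the function name, the use of PCA scores as input, and the random stand-in data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kennard_stone(X, n_select):
    """Select n_select rows of X that span the data space: start from
    the pair with the largest mutual distance, then repeatedly add the
    sample farthest from those already chosen."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = set(range(len(X))) - set(selected)
    while len(selected) < n_select:
        # For each candidate, distance to its nearest selected sample.
        rem = np.array(sorted(remaining))
        d_min = dist[np.ix_(rem, selected)].min(axis=1)
        nxt = int(rem[np.argmax(d_min)])
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

# Example: pick 10 calibration samples from stand-in PCA scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 3))
print(kennard_stone(scores, 10))
```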

  2. Identification of GDC-0810 (ARN-810), an Orally Bioavailable Selective Estrogen Receptor Degrader (SERD) that Demonstrates Robust Activity in Tamoxifen-Resistant Breast Cancer Xenografts.

    PubMed

    Lai, Andiliy; Kahraman, Mehmet; Govek, Steven; Nagasawa, Johnny; Bonnefous, Celine; Julien, Jackie; Douglas, Karensa; Sensintaffar, John; Lu, Nhin; Lee, Kyoung-Jin; Aparicio, Anna; Kaufman, Josh; Qian, Jing; Shao, Gang; Prudente, Rene; Moon, Michael J; Joseph, James D; Darimont, Beatrice; Brigham, Daniel; Grillot, Kate; Heyman, Richard; Rix, Peter J; Hager, Jeffrey H; Smith, Nicholas D

    2015-06-25

    Approximately 80% of breast cancers are estrogen receptor alpha (ER-α) positive, and although women typically respond well initially to antihormonal therapies such as tamoxifen and aromatase inhibitors, resistance often emerges. Although a variety of resistance mechanisms may be at play in this state, there is evidence that in many cases the ER still plays a central role, including mutations in the ER leading to a constitutively active receptor. Fulvestrant is a steroid-based, selective estrogen receptor degrader (SERD) that both antagonizes and degrades ER-α and is active in patients who have progressed on antihormonal agents. However, fulvestrant suffers from poor pharmaceutical properties and must be administered by intramuscular injections that limit the total amount of drug that can be administered and hence lead to the potential for incomplete receptor blockade. We describe the identification and characterization of a series of small-molecule, orally bioavailable SERDs which are potent antagonists and degraders of ER-α and in which the ER-α-degrading properties were prospectively optimized. The lead compound 11l (GDC-0810 or ARN-810) demonstrates robust activity in models of tamoxifen-sensitive and tamoxifen-resistant breast cancer, and is currently in clinical trials in women with locally advanced or metastatic estrogen receptor-positive breast cancer.

  3. Robust MOFs of 'tsg' Topology Based on Trigonal Prismatic Organic and Metal Cluster SBUs: Single Crystal-to-Single Crystal Postsynthetic Metal Exchange and Selective CO2 Capture.

    PubMed

    Moorthy, Narasimha Jarugu; Chandrasekhar, Pujari; Savitha, Govardhan

    2017-04-03

    The self-assembly of a rigid and trigonal prismatic triptycene-hexaacid H6THA with Co(NO3)2 and Mn(NO3)2 leads to isostructural MOFs that are sustained by 6-c metal cluster [M3(μ3-O)(COO)6] SBUs. The Co- and Mn-MOFs, constructed from organic and metal-cluster building blocks that are both trigonal prismatic, correspond to the heretofore unknown 'tsg' topology. Due to the rigidity and concave attributes of H6THA, the networks in Co- and Mn-MOFs are highly porous and undergo 3-fold interpenetration. The interpenetration imparts permanent microporosity and high thermal stability to the MOFs to permit postsynthetic metal exchange (PSME) and gas sorption. The PSME occurs in a SC-SC fashion when the crystals of Co- and Mn-MOFs are immersed in a solution of Cu(NO3)2 in MeOH/H2O. Further, the isostructural robust MOFs exhibit significant gas sorption and remarkable selectivity for CO2 over N2 (ca. 100 fold) at ambient conditions. In fact, the postsynthetically-engineered Cu-THA exhibits better CO2 sorption than Co-THA and Mn-THA. A composite of effects that include pore dimensions (ca. 0.7 nm), unsaturated metal centers and basic environments conferred by the quinoxaline nitrogen atoms appears to be responsible for the observed high CO2 capture and selectivity. The high-symmetry and structural attributes of the organic linker seemingly dictate adoption of the trigonal-prismatic metal cluster SBU by the metal ions in the MOFs.

  4. Short communication: The combined use of linkage disequilibrium-based haploblocks and allele frequency-based haplotype selection methods enhances genomic evaluation accuracy in dairy cattle.

    PubMed

    Jónás, Dávid; Ducrocq, Vincent; Croiseau, Pascal

    2017-04-01

    The construction and use of haploblocks [adjacent single nucleotide polymorphisms (SNP) in strong linkage disequilibrium] for genomic evaluation is advantageous, because the number of effects to be estimated can be reduced without discarding relevant genomic information. Furthermore, haplotypes (the combination of 2 or more SNP) can increase the probability of capturing the quantitative trait loci effect compared with individual SNP markers. With regard to haplotypes, the allele frequency parameter is also of interest, because as a selection criterion it allows the number of rare alleles, whose effects are usually difficult to estimate, to be reduced. We have proposed a simple pipeline that simultaneously incorporates linkage disequilibrium and allele frequency information in genomic evaluation, and here we present the first results obtained with this procedure. We used a population of 2,235 progeny-tested bulls from the Montbéliarde breed for the tests. Phenotype data were available in the form of daughter yield deviations on 5 production traits, and genotype data were available from the 50K SNP chip. We conducted a classical validation study by splitting the population into training (80% oldest animals) and validation (20% youngest animals) sets to emulate a real-life scenario in which the selection candidates had no available phenotype data. We measured all reported parameters for the validation set. Our results proved that the proposed method was indeed advantageous, and that the accuracy of genomic evaluation could be improved. Compared with results from a genomic BLUP analysis, correlations between daughter yield deviations (a proxy for true breeding values) and genomic estimated breeding values increased by an average of 2.7 percentage points for the 5 traits. Inflation of the genomic evaluation of the selection candidates was also significantly reduced. The proposed method outperformed the other SNP and haplotype-based tests we had evaluated in a

  5. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273
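
    The RR-BLUP model named above treats all marker effects as draws from a common normal distribution, which reduces to ridge regression on the marker matrix. Below is a minimal sketch on simulated data; the shrinkage parameter used here is an assumption (in practice it is estimated from variance components, e.g. via REML), and none of this is the authors' code.

```python
import numpy as np

def rrblup_predict(Z_train, y_train, Z_test, lam):
    """Ridge-regression BLUP: solve (Z'Z + lam*I) u = Z'(y - mean)
    for marker effects u, then predict GEBVs as Z_test @ u."""
    mu = y_train.mean()
    p = Z_train.shape[1]
    u = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(p),
                        Z_train.T @ (y_train - mu))
    return mu + Z_test @ u

# Toy example: 200 lines, 500 markers coded {-1, 0, 1}.
rng = np.random.default_rng(1)
Z = rng.integers(-1, 2, size=(200, 500)).astype(float)
beta = rng.normal(0, 0.1, size=500)
y = Z @ beta + rng.normal(0, 1.0, size=200)
# Shrinkage lam is an assumed value here, not an estimated one.
gebv = rrblup_predict(Z[:160], y[:160], Z[160:], lam=500.0)
print(np.corrcoef(gebv, y[160:])[0, 1])  # predictive correlation
```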

  6. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.

  7. Selective Whole-Genome Amplification Is a Robust Method That Enables Scalable Whole-Genome Sequencing of Plasmodium vivax from Unprocessed Clinical Samples.

    PubMed

    Cowell, Annie N; Loy, Dorothy E; Sundararaman, Sesh A; Valdivia, Hugo; Fisch, Kathleen; Lescano, Andres G; Baldeviano, G Christian; Durand, Salomon; Gerbasi, Vince; Sutherland, Colin J; Nolder, Debbie; Vinetz, Joseph M; Hahn, Beatrice H; Winzeler, Elizabeth A

    2017-02-07

    Whole-genome sequencing (WGS) of microbial pathogens from clinical samples is a highly sensitive tool used to gain a deeper understanding of the biology, epidemiology, and drug resistance mechanisms of many infections. However, WGS of organisms which exhibit low densities in their hosts is challenging due to high levels of host genomic DNA (gDNA), which leads to very low coverage of the microbial genome. WGS of Plasmodium vivax, the most widely distributed form of malaria, is especially difficult because of low parasite densities and the lack of an ex vivo culture system. Current techniques used to enrich P. vivax DNA from clinical samples require significant resources or are not consistently effective. Here, we demonstrate that selective whole-genome amplification (SWGA) can enrich P. vivax gDNA from unprocessed human blood samples and dried blood spots for high-quality WGS, allowing genetic characterization of isolates that would otherwise have been prohibitively expensive or impossible to sequence. We achieved an average genome coverage of 24×, with up to 95% of the P. vivax core genome covered by ≥5 reads. The single-nucleotide polymorphism (SNP) characteristics and drug resistance mutations seen were consistent with those of other P. vivax sequences from a similar region in Peru, demonstrating that SWGA produces high-quality sequences for downstream analysis. SWGA is a robust tool that will enable efficient, cost-effective WGS of P. vivax isolates from clinical samples that can be applied to other neglected microbial pathogens.

  8. Selective Whole-Genome Amplification Is a Robust Method That Enables Scalable Whole-Genome Sequencing of Plasmodium vivax from Unprocessed Clinical Samples

    PubMed Central

    Loy, Dorothy E.; Sundararaman, Sesh A.; Valdivia, Hugo; Fisch, Kathleen; Lescano, Andres G.; Baldeviano, G. Christian; Durand, Salomon; Gerbasi, Vince; Sutherland, Colin J.; Nolder, Debbie; Vinetz, Joseph M.; Hahn, Beatrice H.

    2017-01-01

    Whole-genome sequencing (WGS) of microbial pathogens from clinical samples is a highly sensitive tool used to gain a deeper understanding of the biology, epidemiology, and drug resistance mechanisms of many infections. However, WGS of organisms which exhibit low densities in their hosts is challenging due to high levels of host genomic DNA (gDNA), which leads to very low coverage of the microbial genome. WGS of Plasmodium vivax, the most widely distributed form of malaria, is especially difficult because of low parasite densities and the lack of an ex vivo culture system. Current techniques used to enrich P. vivax DNA from clinical samples require significant resources or are not consistently effective. Here, we demonstrate that selective whole-genome amplification (SWGA) can enrich P. vivax gDNA from unprocessed human blood samples and dried blood spots for high-quality WGS, allowing genetic characterization of isolates that would otherwise have been prohibitively expensive or impossible to sequence. We achieved an average genome coverage of 24×, with up to 95% of the P. vivax core genome covered by ≥5 reads. The single-nucleotide polymorphism (SNP) characteristics and drug resistance mutations seen were consistent with those of other P. vivax sequences from a similar region in Peru, demonstrating that SWGA produces high-quality sequences for downstream analysis. SWGA is a robust tool that will enable efficient, cost-effective WGS of P. vivax isolates from clinical samples that can be applied to other neglected microbial pathogens. PMID:28174312

  9. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions designed for robust operation and worst-case situations, such as day/night cameras or rain and snow solutions. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures, and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  10. The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000

    USGS Publications Warehouse

    Fitzpatrick-Lins, Katherine

    1980-01-01

    Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
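
    The map test described above reduces to a binomial proportion with a one-sided lower confidence limit. A minimal sketch of that computation follows; the counts are illustrative rather than the paper's data, and the exact interval method used by the Survey is not stated here, so an exact Clopper-Pearson bound is assumed.

```python
from scipy.stats import beta

def accuracy_lower_bound(correct, sampled, conf=0.95):
    """Point-sampling map accuracy with an exact (Clopper-Pearson)
    one-sided lower confidence limit on the true proportion."""
    p_hat = correct / sampled
    lower = beta.ppf(1 - conf, correct, sampled - correct + 1)
    return p_hat, lower

# Illustrative counts, not the paper's data:
p, lo = accuracy_lower_bound(correct=178, sampled=200)
print(f"observed {p:.3f}, 95% lower limit {lo:.3f}, "
      f"meets 85% criterion: {lo >= 0.85}")
```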

  11. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
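
    The basic quantity behind such verification studies is the observed order of convergence computed from successive mesh levels. The sketch below shows only that classical computation with a median taken across mesh triplets, echoing the median-based robust statistics described above; it is not the authors' constrained-optimization framework, and the values and refinement ratio are illustrative.

```python
import numpy as np

def observed_orders(values, r):
    """Observed convergence orders from successive mesh triplets;
    values are ordered coarse-to-fine with constant refinement ratio r."""
    e = np.abs(np.diff(np.asarray(values, dtype=float)))
    return np.log(e[:-1] / e[1:]) / np.log(r)

# Five-grid sequence (illustrative); the median over triplets resists a
# single anomalous level, in the spirit of median-based robust statistics.
vals = [1.500, 1.260, 1.130, 1.062, 1.033]
orders = observed_orders(vals, r=2.0)
print(orders, "robust estimate:", np.median(orders))
```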

  12. Robust verification analysis

    SciTech Connect

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-15

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.

  13. The Role of Selected Lexical Factors on Confrontation Naming Accuracy, Speed, and Fluency in Adults Who Do and Do Not Stutter

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; Ratner, Nan Bernstein

    2007-01-01

    Purpose: The purpose of this study was to investigate whether lexical access in adults who stutter (AWS) differs from that in people who do not stutter. Specifically, the authors examined the role of 3 lexical factors on naming speed, accuracy, and fluency: word frequency, neighborhood density, and neighborhood frequency. If stuttering results…

  14. Robust GPS autonomous signal quality monitoring

    NASA Astrophysics Data System (ADS)

    Ndili, Awele Nnaemeka

    The Global Positioning System (GPS), introduced by the U.S. Department of Defense in 1973, provides unprecedented world-wide navigation capabilities through a constellation of 24 satellites in global orbit, each emitting a low-power radio-frequency signal for ranging. GPS receivers track these transmitted signals, computing position to within 30 meters from range measurements made to four satellites. GPS has a wide range of applications, including aircraft, marine and land vehicle navigation. Each application places demands on GPS for various levels of accuracy, integrity, system availability and continuity of service. Radio frequency interference (RFI), which results from sources such as TV/FM harmonics, radar or Mobile Satellite Systems (MSS), presents a challenge in the use of GPS by posing a threat to the accuracy, integrity and availability of the GPS navigation solution. In order to use GPS for integrity-sensitive applications, it is therefore necessary to monitor the quality of the received signal, with the objective of promptly detecting the presence of RFI and thus providing a timely warning of degradation of system accuracy. This presents a challenge, since the myriad kinds of RFI affect the GPS receiver in different ways. What is required, then, is a robust method of detecting GPS accuracy degradation that is effective regardless of the origin of the threat. This dissertation presents a new method of robust signal quality monitoring for GPS. Algorithms for receiver autonomous interference detection and integrity monitoring are demonstrated. Candidate test statistics are derived from fundamental receiver measurements of in-phase and quadrature correlation outputs, and the gain of the automatic gain control (AGC). The performance of selected test statistics is evaluated in the presence of RFI: broadband interference, pulsed and non-pulsed interference, coherent CW at different frequencies; and non-RFI: GPS signal fading due to physical blockage and

  15. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  16. Effect of optical digitizer selection on the application accuracy of a surgical localization system: a quantitative comparison between the OPTOTRAK and FlashPoint tracking systems

    NASA Technical Reports Server (NTRS)

    Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are among the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems and that, from a quality standpoint, they can all be used for image-guided surgery procedures.
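
    The RMS figure reported above is simply the root-mean-square of the Euclidean deviations between digitized and reference points; a minimal sketch with made-up phantom data (not the study's measurements):

```python
import numpy as np

def application_rms(measured, reference):
    """Root-mean-square of the Euclidean deviations between digitized
    target points and their known reference positions."""
    d = np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)
    return np.sqrt(np.mean(d ** 2))

# Illustrative phantom data (mm); not the study's measurements.
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
meas = ref + np.random.default_rng(2).normal(0, 0.5, ref.shape)
print(f"RMS = {application_rms(meas, ref):.2f} mm")
```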

  17. Functional Characterization of a Robust Marine Microbial Esterase and Its Utilization in the Stereo-Selective Preparation of Ethyl (S)-3-Hydroxybutyrate.

    PubMed

    Wang, Yilong; Zhang, Yun; Hu, Yunfeng

    2016-11-01

    One novel microbial esterase, PHE21, was cloned from the genome of Pseudomonas oryzihabitans HUP022, identified from the deep sea of the Western Pacific. PHE21 was heterologously expressed and functionally characterized as a robust esterase exhibiting high resistance to various metal ions, organic solvents, surfactants, and NaCl. Although the two enantiomers of ethyl 3-hydroxybutyrate had previously been difficult to resolve enzymatically, we successfully resolved racemic ethyl 3-hydroxybutyrate through direct hydrolysis reactions and generated chiral ethyl (S)-3-hydroxybutyrate using esterase PHE21. After process optimization, the enantiomeric excess, the conversion rate, and the yield of the desired product ethyl (S)-3-hydroxybutyrate reached 99%, 65%, and 87%, respectively. PHE21 is a novel marine microbial esterase with great potential in asymmetric synthesis as well as in other industries.
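
    The three figures quoted above are linked by the standard kinetic-resolution relations; a minimal sketch of those formulas follows (the numbers are illustrative, not the paper's raw data):

```python
def enantiomeric_excess(major, minor):
    """ee = (major - minor) / (major + minor), as a fraction."""
    return (major - minor) / (major + minor)

def conversion_from_ee(ee_substrate, ee_product):
    """Conversion of a kinetic resolution from the two ee values:
    c = ee_s / (ee_s + ee_p)."""
    return ee_substrate / (ee_substrate + ee_product)

# Illustrative values only (not the paper's raw data):
ee_p = enantiomeric_excess(major=99.5, minor=0.5)   # product ee ~ 0.99
ee_s = enantiomeric_excess(major=92.0, minor=8.0)   # substrate ee ~ 0.84
print(f"conversion = {100 * conversion_from_ee(ee_s, ee_p):.0f} %")
```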

  18. Redshift-space distortions of galaxies, clusters, and AGN. Testing how the accuracy of growth rate measurements depends on scales and sample selections

    NASA Astrophysics Data System (ADS)

    Marulli, Federico; Veropalumbo, Alfonso; Moscardini, Lauro; Cimatti, Andrea; Dolag, Klaus

    2017-03-01

    Aims: Redshift-space clustering anisotropies caused by cosmic peculiar velocities provide a powerful probe to test the gravity theory on large scales. However, to extract unbiased physical constraints, the clustering pattern has to be modelled accurately, taking into account the effects of non-linear dynamics at small scales, and properly describing the link between the selected cosmic tracers and the underlying dark matter field. Methods: We used a large hydrodynamic simulation to investigate how the systematic error on the linear growth rate, f, caused by model uncertainties, depends on sample selections and co-moving scales. Specifically, we measured the redshift-space two-point correlation function of mock samples of galaxies, galaxy clusters and active galactic nuclei, extracted from the Magneticum simulation, in the redshift range 0.2 ≤ z ≤ 2, and adopting different sample selections. We estimated fσ8 by modelling both the monopole and the full two-dimensional anisotropic clustering, using the dispersion model. Results: We find that the systematic error on fσ8 depends significantly on the range of scales considered for the fit. If the latter is kept fixed, the error depends on both redshift and sample selection due to the scale-dependent impact of non-linearities if not properly modelled. Concurrently, we show that it is possible to achieve almost unbiased constraints on fσ8 provided that the analysis is restricted to a proper range of scales that depends non-trivially on the properties of the sample. This can have a strong impact on multiple tracer analyses, and when combining catalogues selected at different redshifts.
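
    For reference, one common form of the dispersion model invoked above multiplies the linear Kaiser term by a Lorentzian damping from random pairwise velocities; this particular parameterization is an assumption on our part, and the paper's exact damping choice may differ.

```latex
% Kaiser boost times Lorentzian pairwise-velocity damping (sigma_12):
P^{s}(k,\mu) \;=\; \left(b + f\mu^{2}\right)^{2} P_{\delta}(k)
\left[\,1 + \frac{k^{2}\mu^{2}\sigma_{12}^{2}}{2}\,\right]^{-1}
```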

  19. A robust and luminescent covalent organic framework as a highly sensitive and selective sensor for the detection of Cu(2+) ions.

    PubMed

    Li, Zhongping; Zhang, Yuwei; Xia, Hong; Mu, Ying; Liu, Xiaoming

    2016-05-05

    A hydrogen-bond-assisted azine-linked covalent organic framework, COF-JLU3, was synthesized under solvothermal conditions. Combining excellent crystallinity, porosity, stability and luminescence, it serves as the first COF fluorescent sensor for toxic metal ions, exhibiting high sensitivity and selectivity toward Cu(2+).

  20. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    PubMed Central

    Chao, Lidia S.

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis, namely that the members of a robust classifier ensemble should not only be accurate but also differ from one another. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed by computing the harmonic mean of accuracy and diversity, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms (a genetic algorithm and a forward hill-climbing algorithm) in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases. PMID:24672402
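
    A minimal sketch of a weighted harmonic mean of accuracy and diversity, in the spirit of the measure described above; the exact parameterization of WAD in the paper may differ, so treat this as a generic illustration rather than the published formula.

```python
def wad(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity;
    larger weights shift the balance toward the corresponding term."""
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# Favouring accuracy over diversity when scoring a candidate ensemble:
print(wad(accuracy=0.90, diversity=0.40, w_acc=0.7, w_div=0.3))
```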

  1. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

    Endoscopy is widely used in clinical practice, and a surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to determine the positional relationship between the camera and the tracking marker precisely. The problem can be solved by the hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain incomplete motion samples, which make the algorithm unstable and inaccurate. An improved selection rule for sample motions is proposed in this paper to increase the stability and accuracy of methods based on dual quaternions. By setting a motion filter to discard the incomplete motion samples, a highly precise and robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
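
    A minimal sketch of the kind of motion filter described above: rejecting motion pairs whose rotation is too small to constrain a dual-quaternion hand-eye solution. The threshold, the function names, and the rejection rule itself are assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def filter_motion_pairs(camera_motions, marker_motions, min_angle=0.1):
    """Keep only 4x4 homogeneous motion pairs whose rotation is large
    enough to constrain a hand-eye solution; near-pure translations
    carry no rotational information and destabilize the estimate."""
    kept = []
    for A, B in zip(camera_motions, marker_motions):
        if (rotation_angle(A[:3, :3]) >= min_angle
                and rotation_angle(B[:3, :3]) >= min_angle):
            kept.append((A, B))
    return kept

# A motionless (identity) pair is rejected:
print(len(filter_motion_pairs([np.eye(4)], [np.eye(4)])))  # -> 0
```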

  2. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Az + K(y − Cz), respectively called the open loop and closed loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open loop realization is never robust, that robustness requires a closed loop implementation, and that the closed loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.

  3. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  4. Robust Weak Measurements

    NASA Astrophysics Data System (ADS)

    Tollaksen, Jeff; Aharonov, Yakir

    2006-03-01

    We introduce a new type of weak measurement which yields a quantum average of weak values that is robust, lies outside the range of eigenvalues, extends the valid regime for weak measurements, and for which the probability of obtaining the pre- and post-selected ensemble is not exponentially rare. This result extends the applicability of weak values, shifts the statistical interpretation previously attributed to weak values, and suggests that the weak value is a property of every pre- and post-selected ensemble. We then apply this new weak measurement to Hardy's paradox. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics characterized by weak values (Aharonov Y, Botero A, Popescu S, Reznik B, Tollaksen J, Physics Letters A, 301 (3-4): 130-138, 2002).

  5. An ant-plant by-product mutualism is robust to selective logging of rain forest and conversion to oil palm plantation.

    PubMed

    Fayle, Tom M; Edwards, David P; Foster, William A; Yusah, Kalsum M; Turner, Edgar C

    2015-06-01

    Anthropogenic disturbance and the spread of non-native species disrupt natural communities, but also create novel interactions between species. By-product mutualisms, in which benefits accrue as side effects of partner behaviour or morphology, are often non-specific and hence may persist in novel ecosystems. We tested this hypothesis for a two-way by-product mutualism between epiphytic ferns and their ant inhabitants in the Bornean rain forest, in which ants gain housing in root-masses while ferns gain protection from herbivores. Specifically, we assessed how the specificity (overlap between fern and ground-dwelling ants) and the benefits of this interaction are altered by selective logging and conversion to an oil palm plantation habitat. We found that despite the high turnover of ant species, ant protection against herbivores persisted in modified habitats. However, in ferns growing in the oil palm plantation, ant occupancy, abundance and species richness declined, potentially due to the harsher microclimate. The specificity of the fern-ant interactions was also lower in the oil palm plantation habitat than in the forest habitats. We found no correlations between colony size and fern size in modified habitats, and hence no evidence for partner fidelity feedbacks, in which ants are incentivised to protect fern hosts. Per species, non-native ant species in the oil palm plantation habitat (18 % of occurrences) were as important as native ones in terms of fern protection and contributed to an increase in ant abundance and species richness with fern size. We conclude that this by-product mutualism persists in logged forest and oil palm plantation habitats, with no detectable shift in partner benefits. Such persistence of generalist interactions in novel ecosystems may be important for driving ecosystem functioning.

  6. Shaping robust system through evolution

    NASA Astrophysics Data System (ADS)

    Kaneko, Kunihiko

    2008-06-01

    Biological functions are generated as a result of developmental dynamics that form phenotypes governed by genotypes. The dynamical system for development is shaped through genetic evolution following natural selection based on the fitness of the phenotype. Here we study how this dynamical system is robust to noise during development and to genetic change by mutation. We adopt a simplified transcription regulation network model to govern gene expression, which gives a fitness function. Through simulations of the network that undergoes mutation and selection, we show that a certain level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during development shapes any network's robustness, not only to noise but also to mutations. We also establish a relationship between developmental and mutational robustness through phenotypic variances caused by genetic variation and epigenetic noise. A universal relationship between the two variances is derived, akin to the fluctuation-dissipation relationship known in physics.

  7. Genomic selection & association mapping in rice: effect of trait genetic architecture, training population composition, marker number & statistical model on accuracy of rice genomic selection in elite, tropical rice breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its ef...

  8. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed, organized in three parts. The first part surveys existing algorithms; different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research are suggested that combine different approaches currently known.

  9. Performance and Accuracy of LAPACK's Symmetric Tridiagonal Eigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel, Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a set of carefully selected test problems, our study spans a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n·ε). The accuracy of BI and MR is generally O(n·ε). (5) MR is preferable to BI for subset computations.
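
    The MR and QR-type drivers compared above are accessible through SciPy's LAPACK wrappers; the following minimal sketch compares two of them on a random tridiagonal matrix. Note it uses SciPy's eigh_tridiagonal rather than LAPACK 3.1 directly, so the driver names map only approximately onto the paper's four algorithms.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Random symmetric tridiagonal matrix: diagonal d, off-diagonal e.
rng = np.random.default_rng(3)
n = 500
d, e = rng.normal(size=n), rng.normal(size=n - 1)

# MRRR ('stemr', the MR algorithm above) vs. the QR-type 'stev' driver.
w_mr, v_mr = eigh_tridiagonal(d, e, lapack_driver='stemr')
w_qr, v_qr = eigh_tridiagonal(d, e, lapack_driver='stev')
print("max eigenvalue difference:", np.max(np.abs(w_mr - w_qr)))

# Residual ||T v - w v|| as a simple accuracy check for one eigenpair.
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
r = np.linalg.norm(T @ v_mr[:, 0] - w_mr[0] * v_mr[:, 0])
print("MRRR residual for first eigenpair:", r)
```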

  10. Robust modular product family design

    NASA Astrophysics Data System (ADS)

    Jiang, Lan; Allada, Venkat

    2001-10-01

    This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How can a product family (PF) be designed that is robust enough to accommodate future customer requirements? (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes in customer requirements are selected as noise factors; an index called the quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.

  11. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  12. Understanding the Delayed-Keyword Effect on Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Dunlosky, John; Griffin, Thomas D.; Wiley, Jennifer

    2005-01-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M.…

  13. Robust Critical Point Detection

    SciTech Connect

    Bhatia, Harsh

    2016-07-28

    Robust Critical Point Detection is software to compute critical points in a 2D or 3D vector field robustly. The software was developed as part of the author's work at the lab as a PhD student in the Livermore Scholar Program (now called the Livermore Graduate Scholar Program).
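
    For readers unfamiliar with the task, a common first step in critical point detection is flagging grid cells where every vector component changes sign. The snippet below is a generic illustration of that test, not the released software.

```python
# Generic sketch: candidate critical-point cells of a 2D vector field.
import numpy as np

x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
u, v = y, -x + 0.3 * y           # toy field with a critical point at the origin

def sign_change(a):
    """True for cells whose four corner values do not share one sign."""
    c = np.stack([a[:-1, :-1], a[1:, :-1], a[:-1, 1:], a[1:, 1:]])
    return ~(np.all(c > 0, axis=0) | np.all(c < 0, axis=0))

candidates = sign_change(u) & sign_change(v)
print("cells that may contain critical points:", np.argwhere(candidates))
```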

  14. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  15. Enhancing and evaluating diagnostic accuracy.

    PubMed

    Swets, J A; Getty, D J; Pickett, R M; D'Orsi, C J; Seltzer, S E; McNeil, B J

    1991-01-01

    Techniques that may enhance diagnostic accuracy in clinical settings were tested in the context of mammography. Statistical information about the relevant features among those visible in a mammogram and about their relative importances in the diagnosis of breast cancer was the basis of two decision aids for radiologists: a checklist that guides the radiologist in assigning a scale value to each significant feature of the images of a particular case, and a computer program that merges those scale values optimally to estimate a probability of malignancy. A test set of approximately 150 proven cases (including normals and benign and malignant lesions) was interpreted by six radiologists, first in their usual manner and later with the decision aids. The enhancing effect of these feature-analytic techniques was analyzed across subsets of cases that were restricted progressively to more and more difficult cases, where difficulty was defined in terms of the radiologists' judgements in the standard reading condition. Accuracy in both standard and enhanced conditions decreased regularly and substantially as case difficulty increased, but differentially, such that the enhancement effect grew regularly and substantially. For the most difficult case sets, the observed increases in accuracy translated into an increase of about 0.15 in sensitivity (true-positive proportion) for a selected specificity (true-negative proportion) of 0.85 or a similar increase in specificity for a selected sensitivity of 0.85. That measured accuracy can depend on case-set difficulty to different degrees for two diagnostic approaches has general implications for evaluation in clinical medicine. Comparative, as well as absolute, assessments of diagnostic performances--for example, of alternative imaging techniques--may be distorted by inadequate treatments of this experimental variable. Subset analysis, as defined and illustrated here, can be useful in alleviating the problem.

  16. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  17. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - a combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
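
    The combinatorial explosion driving "the problem of size" is easy to make concrete: the run count of a full-factorial experiment grows exponentially with the number of design variables. A quick count (our illustration, not the paper's):

```python
# Full-factorial runs at 3 levels per variable grow as 3**k.
for k in (2, 5, 10, 20, 30):
    print(f"{k:2d} design variables -> {3 ** k:,} runs")
```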

  18. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly on the basis of random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include the systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  19. Manifold learning for robust classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Kim, Wonkook

    of manifold learning algorithms for large scale remote sensing data. Approximation approaches such as the Nystrom methods are employed to mitigate the computation burden, where a set of landmark samples is first selected for the construction of the approximate manifolds, and the remaining samples are then linearly embedded in the manifold. While various landmark selection schemes are possible (e.g. random selection, clustering based approaches), spatially representative samples that are potentially relevant to data on grids can be obtained if the spatial context is considered in the selection scheme. A framework for representing the spatial coherence of samples is proposed using the kernel feature extraction framework. The proposed method produces a set of new features in which a unique spatial coherence pattern for homogeneous regions is captured in the individual features, which yield high classification accuracies and qualitatively superior results. Finally, an adaptive classification framework that exploits manifolds is proposed to obtain robust classification results for hyperspectral data. Spectral signatures can vary significantly across extended areas, often resulting in poor classification of land cover. The proposed adaptive framework employs a manifold regularization classifier, where the classifier is trained with labeled samples in one location and adapted to samples in spatially disjoint areas that exhibit significantly different distributions. In experimental studies, classification accuracies were higher for the proposed approach than for other kernel based semi-supervised classification methods.
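
    The Nystrom step described above has a compact linear-algebra core: eigendecompose the landmark kernel, then extend the embedding to all samples. The sketch below uses random landmarks as a stand-in for the spatially aware selection scheme proposed here; the kernel and data sizes are arbitrary.

```python
# Hedged sketch of a Nystrom embedding with random landmarks.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 50))          # stand-in for hyperspectral samples
m = 100                                      # number of landmarks
L = X[rng.choice(len(X), m, replace=False)]

def rbf(A, B, gamma=0.02):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K_mm = rbf(L, L)                             # landmark-landmark kernel
K_nm = rbf(X, L)                             # all samples vs landmarks
w, U = np.linalg.eigh(K_mm)
w, U = w[::-1][:10], U[:, ::-1][:, :10]      # top 10 components
embedding = K_nm @ U / np.sqrt(np.maximum(w, 1e-12))   # Nystrom extension
print(embedding.shape)                       # (1000, 10)
```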

  20. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional event in which more precise vascular extraction is desired or the automatic method fails, an alternate semi-automatic fail-safe method is also provided. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  1. Robust gene signatures from microarray data using genetic algorithms enriched with biological pathway keywords.

    PubMed

    Luque-Baena, R M; Urda, D; Gonzalo Claros, M; Franco, L; Jerez, J M

    2014-06-01

    Genetic algorithms are widely used in the estimation of expression profiles from microarray data. However, these techniques are unable to produce stable and robust solutions suitable for use in clinical and biomedical studies. This paper presents a novel two-stage evolutionary strategy for gene feature selection combining the genetic algorithm with biological information extracted from the KEGG database. A comparative study is carried out over public data from three different types of cancer (leukemia, lung cancer and prostate cancer). Even though the analyses only use features having KEGG information, the results demonstrate that this two-stage evolutionary strategy increases the consistency, robustness and accuracy of a blind discrimination between relapsed and healthy individuals. Therefore, this approach could facilitate the definition of gene signatures for the clinical prognosis and diagnosis of cancer diseases in the near future. Additionally, it could also be used for biological knowledge discovery about the studied disease.

  2. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the greatest extent. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
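
    The DCT embedding step can be illustrated in isolation with a quantization-index trick on a single coefficient. This toy (the coefficient index, step size, and the plain 1-D residual vector are all our placeholders) omits the H.264 motion-vector pipeline and drift compensation that the scheme actually relies on.

```python
# Toy: hide one bit in a DCT coefficient of a residual group.
import numpy as np
from scipy.fft import dct, idct

def embed_bit(block, bit, k=4, delta=0.5):
    c = dct(block, norm="ortho")
    # Snap coefficient k onto an even/odd lattice encoding the bit.
    c[k] = (2 * np.round(c[k] / (2 * delta)) + bit) * delta
    return idct(c, norm="ortho")

def extract_bit(block, k=4, delta=0.5):
    c = dct(block, norm="ortho")
    return int(np.round(c[k] / delta)) % 2

residuals = np.random.default_rng(2).standard_normal(8)
print("recovered bit:", extract_bit(embed_bit(residuals, bit=1)))
```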

  3. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the greatest extent. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  4. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, hence making a comparison of the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
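
    For orientation, the classical (non-robust) CPCR/PLSR pipeline is easy to reproduce with scikit-learn; the robust variants compared in the paper (RPCR, RSIMPLS) are not available there, so the sketch below only shows the baseline that the robust methods improve upon.

```python
# Baseline PCR vs PLS under multicollinearity (non-robust versions only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(100)   # near-collinear pair
y = X[:, 0] - 2 * X[:, 2] + 0.1 * rng.standard_normal(100)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pls = PLSRegression(n_components=5)
for name, model in (("PCR", pcr), ("PLS", pls)):
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated RMSE = {rmse:.3f}")
```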

  5. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  6. [Effect of algorithms for calibration set selection on quantitatively determining asiaticoside content in Centella total glucosides by near infrared spectroscopy].

    PubMed

    Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang

    2014-12-01

    An appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. There are different algorithms for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distance (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established, for which seven indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with a calibration set selected by the SPXY algorithm were significantly different from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides, and has no significant effect on the robustness of the models, which provides a reference for determining the appropriate algorithm for calibration set selection when NIR quantitative models are established for solid systems of traditional Chinese medicine.
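
    Of the algorithms compared, Kennard-Stone is the easiest to sketch: greedily add the sample farthest (in max-min distance) from those already chosen. The numpy sketch below is generic; SPXY follows the same greedy logic but augments the x-distance with a normalized y-distance, which is omitted here.

```python
# Minimal Kennard-Stone calibration-set selection.
import numpy as np

def kennard_stone(X, n_select):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    chosen = list(np.unravel_index(np.argmax(D), D.shape))  # two farthest samples
    while len(chosen) < n_select:
        rest = [i for i in range(len(X)) if i not in chosen]
        # Pick the remaining sample farthest from the chosen set (max-min).
        nxt = rest[int(np.argmax(D[np.ix_(rest, chosen)].min(axis=1)))]
        chosen.append(nxt)
    return chosen

X = np.random.default_rng(4).standard_normal((60, 8))  # stand-in spectra features
print(kennard_stone(X, 10))
```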

  7. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
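
    The percolation model is simple to emulate, as in the hedged networkx sketch below: draw a small geometric graph, fail each link with probability proportional to its length (our scale constant is arbitrary), and test connectivity.

```python
# Length-dependent link failures on a small spatial network.
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
G = nx.random_geometric_graph(50, radius=0.3, seed=5)
pos = nx.get_node_attributes(G, "pos")

H = G.copy()
for u, v in list(H.edges()):
    length = np.hypot(pos[u][0] - pos[v][0], pos[u][1] - pos[v][1])
    if rng.random() < min(1.0, 1.5 * length):   # longer links fail more often
        H.remove_edge(u, v)

print("connected after failures:", nx.is_connected(H),
      "| surviving edges:", H.number_of_edges(), "of", G.number_of_edges())
```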

  8. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

    ...operation is preserved. Although some papers (Refs 6 and 13) deal with robustness only in regard to parameter variations within the basic controlled... since these can often be neglected in actual implementation, a constant-gain time-invariant solution with...

  9. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.

  10. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.
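
    A first tool for such evaluations is the Allan deviation of simulated frequency data. The sketch below generates white FM noise only; the paper's five-component noise model and calibration-algorithm uncertainty analysis go well beyond this toy.

```python
# Non-overlapping Allan deviation from fractional-frequency data.
import numpy as np

def allan_deviation(y, m):
    """sigma_y at averaging factor m (tau = m * tau0) from frequency data y."""
    ybar = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(6)
y = 1e-13 * rng.standard_normal(100_000)        # white FM noise, tau0 = 1 s
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s: sigma_y = {allan_deviation(y, m):.2e}")
# White FM should fall off roughly as 1/sqrt(tau).
```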

  11. Robustness, generality and efficiency of optimization algorithms in practical applications

    NASA Technical Reports Server (NTRS)

    Thanedar, P. B.; Arora, J. S.; Li, G. Y.; Lin, T. C.

    1990-01-01

    The theoretical foundations of two approaches, sequential quadratic programming (SQP) and optimality criteria (OC), are analyzed and compared, with emphasis on the critical importance of parameters such as accuracy, generality, robustness, efficiency, and ease of use in large scale structural optimization. A simplified fighter wing and active control of space structures are considered with other example problems. When applied to general system identification problems, the OC methods are shown to lose simplicity and demonstrate lack of generality, accuracy and robustness. It is concluded that the SQP method with a potential constraint strategy is a better choice as compared to the currently prevalent mathematical programming and OC approaches.
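
    SciPy ships SLSQP, a sequential least-squares QP method in the SQP family the authors favor; the miniature constrained problem below (a standard textbook form, not one of the paper's structural cases) shows the call pattern.

```python
# Tiny inequality-constrained minimization with SLSQP.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
constraints = [{"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
               {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6}]

res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=constraints)
print("x* =", res.x, "f(x*) =", round(res.fun, 4))
```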

  12. A robust multi-frame image blind deconvolution algorithm via total variation

    NASA Astrophysics Data System (ADS)

    Zhou, Haiyang; Xia, Guo; Liu, Qianshun; Yu, Feihong

    2015-10-01

    Image blind deconvolution is a practical inverse problem in modern imaging sciences, including consumer photography, astronomical imaging, medical imaging, and microscopy imaging. Among the latest blind deconvolution algorithms, total variation based methods offer advantages for large blur kernels. However, their computation cost is heavy and they do not handle the estimated kernel error properly. Moreover, using the whole image to estimate the blur kernel is inaccurate, because regions with insufficient edge information harm the accuracy of the estimation. Here, we propose a robust multi-frame image blind deconvolution algorithm to handle this complicated imaging model and apply it to the engineering community. In our proposed method, we introduce a patch and kernel selection scheme that selects effective patches for kernel estimation without using the whole image; a total variation based kernel estimation algorithm is then proposed to estimate the kernel; after the estimation of the blur kernels, a new kernel refinement scheme is applied to refine the pre-estimated multi-frame kernels; finally, a robust non-blind deconvolution method is implemented to recover the final latent sharp image with the refined blur kernel. Objective experiments on both synthesized and real images evaluate the efficiency and robustness of our algorithm and illustrate that this approach not only converges rapidly but also effectively recovers a high quality latent image from multiple blurry images.
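
    The final non-blind step can be approximated with off-the-shelf pieces, assuming scikit-image is installed: Richardson-Lucy deconvolution with a known kernel plus total-variation denoising. Kernel estimation, patch selection, and the multi-frame refinement from the paper are not reproduced here.

```python
# Stand-in for the non-blind recovery step only.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy, denoise_tv_chambolle

rng = np.random.default_rng(7)
sharp = np.zeros((64, 64)); sharp[24:40, 24:40] = 1.0   # toy latent image
psf = np.ones((5, 5)) / 25                              # box blur kernel
blurred = fftconvolve(sharp, psf, mode="same")
blurred += 0.01 * rng.standard_normal(blurred.shape)

latent = richardson_lucy(np.clip(blurred, 0, 1), psf, 30)  # 30 RL iterations
latent = denoise_tv_chambolle(latent, weight=0.05)         # TV regularization
print("reconstruction MSE:", float(np.mean((latent - sharp) ** 2)))
```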

  13. Comparing dependent robust correlations.

    PubMed

    Wilcox, Rand R

    2016-11-01

    Let r1 and r2 be two dependent estimates of Pearson's correlation. There is a substantial literature on testing H0 : ρ1 = ρ2, the hypothesis that the population correlation coefficients are equal. However, it is well known that Pearson's correlation is not robust. Even a single outlier can have a substantial impact on Pearson's correlation, resulting in a misleading understanding about the strength of the association among the bulk of the points. A way of mitigating this concern is to use a correlation coefficient that guards against outliers, many of which have been proposed. But apparently there are no results on how to compare dependent robust correlation coefficients when there is heteroscedasticity. Extant results suggest that a basic percentile bootstrap will perform reasonably well. This paper reports simulation results indicating the extent to which this is true when using Spearman's rho, a Winsorized correlation or a skipped correlation.
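
    The basic percentile bootstrap mentioned above resamples cases jointly, so the dependence between the two correlations is preserved. A minimal sketch with Spearman's rho on synthetic data (an overlapping-variable design is assumed here):

```python
# Percentile bootstrap for H0: rho1 = rho2 with dependent Spearman correlations.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n = 200
x = rng.standard_normal(n)
y1 = 0.6 * x + 0.8 * rng.standard_normal(n)
y2 = 0.4 * x + 0.9 * rng.standard_normal(n)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases jointly
    diffs.append(spearmanr(x[idx], y1[idx])[0] -
                 spearmanr(x[idx], y2[idx])[0])

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for rho1 - rho2: [{lo:.3f}, {hi:.3f}]  (reject H0 if 0 outside)")
```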

  14. Robustness in bacterial chemotaxis

    NASA Astrophysics Data System (ADS)

    Alon, U.; Surette, M. G.; Barkai, N.; Leibler, S.

    1999-01-01

    Networks of interacting proteins orchestrate the responses of living cells to a variety of external stimuli, but how sensitive is the functioning of these protein networks to variations in their biochemical parameters? One possibility is that to achieve appropriate function, the reaction rate constants and enzyme concentrations need to be adjusted in a precise manner, and any deviation from these 'fine-tuned' values ruins the network's performance. An alternative possibility is that key properties of biochemical networks are robust; that is, they are insensitive to the precise values of the biochemical parameters. Here we address this issue in experiments using chemotaxis of Escherichia coli, one of the best-characterized sensory systems. We focus on how response and adaptation to attractant signals vary with systematic changes in the intracellular concentration of the components of the chemotaxis network. We find that some properties, such as steady-state behaviour and adaptation time, show strong variations in response to varying protein concentrations. In contrast, the precision of adaptation is robust and does not vary with the protein concentrations. This is consistent with a recently proposed molecular mechanism for exact adaptation, where robustness is a direct consequence of the network's architecture.

  15. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness within the "metabolite" framework by using the novel concept of the "flux-sum," which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of metabolites essential to cell survival, and then "acclimator" metabolites that can control cell growth were discovered. Furthermore, this concept of "metabolite essentiality" should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and designing new drugs that can fight against multi-antibiotic resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on dynamic properties of cellular metabolism.

  16. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.

  17. Towards designing robust coupled networks.

    PubMed

    Schneider, Christian M; Yazdani, Nuri; Araújo, Nuno A M; Havlin, Shlomo; Herrmann, Hans J

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy.
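
    The selection rule itself is a one-liner once betweenness is computed; the hedged networkx sketch below marks the top-k nodes of a synthetic graph as autonomous (the paper's coupled-network cascade dynamics are not simulated).

```python
# Betweenness-based choice of autonomous nodes.
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=9)
bc = nx.betweenness_centrality(G)
k = 10                                           # autonomy budget
autonomous = sorted(bc, key=bc.get, reverse=True)[:k]
print("autonomous nodes:", autonomous)
```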

  18. Towards designing robust coupled networks

    PubMed Central

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy. PMID:23752705

  19. Towards designing robust coupled networks

    NASA Astrophysics Data System (ADS)

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-06-01

    Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy.

  20. Guidance accuracy considerations for realtime GPS interferometry

    NASA Technical Reports Server (NTRS)

    Braasch, Michael S.; Van Graas, Frank

    1991-01-01

    During April and May of 1991, the Avionics Engineering Center at Ohio University completed the first set of realtime flight tests of a GPS interferometric attitude and heading determination system. This technique has myriad applications for aircraft and spacecraft guidance and control. However, before these applications can be further developed, a number of guidance accuracy issues must be considered. Among these are: signal derogation due to multipath and shadowing, effects of structural flexures, and system robustness during loss of phase lock. This paper addresses these issues with special emphasis on the information content of the GPS signal, and characterization and mitigation of multipath encountered while in flight.

  1. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and were further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  2. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  3. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks

    PubMed Central

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-01-01

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods. PMID:28335537

  4. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.

    PubMed

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-03-20

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods.
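
    The CH selection step can be caricatured as scoring candidates on remaining energy and distance-to-target; the weights below are our placeholders, not the paper's exact objective.

```python
# Toy cluster-head (CH) selection among task-cluster camera nodes.
import numpy as np

rng = np.random.default_rng(10)
positions = rng.uniform(0, 100, size=(12, 2))    # candidate node positions
energy = rng.uniform(0.2, 1.0, size=12)          # normalized remaining energy
target = np.array([55.0, 40.0])

dist = np.linalg.norm(positions - target, axis=1)
score = 0.6 * energy - 0.4 * dist / dist.max()   # hypothetical trade-off
ch = int(np.argmax(score))
print(f"selected CH: node {ch} (energy {energy[ch]:.2f}, distance {dist[ch]:.1f})")
```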

  5. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  6. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.

  7. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims these are the most important features of complexity and are not accidents of evolution or artifices of engineering design but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  8. Robustness of Cantor diffractals.

    PubMed

    Verma, Rupesh; Sharma, Manoj Kumar; Banerjee, Varsha; Senthilkumaran, Paramasivam

    2013-04-08

    Diffractals are electromagnetic waves diffracted by a fractal aperture. In an earlier paper, we reported an important property of Cantor diffractals, that of redundancy [R. Verma et al., Opt. Express 20, 8250 (2012)]. In this paper, we report another important property, that of robustness. The question we address is: how much disorder in the Cantor grating can be accommodated by diffractals while continuing to yield faithfully its fractal dimension and generator? The answer is of consequence in a number of physical problems involving fractal architecture.
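
    Since a Fraunhofer diffraction pattern is essentially the Fourier transform of the aperture, the robustness question can be probed numerically: build a Cantor grating, knock out a few percent of the aperture at random, and compare far fields. The defect rate and sizes below are arbitrary choices of ours.

```python
# Far-field pattern of a (possibly defective) Cantor grating via FFT.
import numpy as np

def cantor(level, n=3**7):
    a = np.ones(n)
    def carve(lo, hi, depth):
        if depth == 0:
            return
        third = (hi - lo) // 3
        a[lo + third: lo + 2 * third] = 0     # remove the middle third
        carve(lo, lo + third, depth - 1)
        carve(hi - third, hi, depth - 1)
    carve(0, n, level)
    return a

aperture = cantor(4)
noisy = aperture * (np.random.default_rng(11).random(aperture.size) > 0.05)
for name, ap in (("clean", aperture), ("5% defects", noisy)):
    far_field = np.abs(np.fft.fftshift(np.fft.fft(ap))) ** 2
    print(name, "- total diffracted power:", round(float(far_field.sum()), 1))
```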

  9. Neutral evolution of robustness in Drosophila microRNA precursors.

    PubMed

    Price, Nicholas; Cartwright, Reed A; Sabath, Niv; Graur, Dan; Azevedo, Ricardo B R

    2011-07-01

    Mutational robustness describes the extent to which a phenotype remains unchanged in the face of mutations. Theory predicts that the strength of direct selection for mutational robustness is at most the magnitude of the rate of deleterious mutation. As far as nucleic acid sequences are concerned, only long sequences in organisms with high deleterious mutation rates and large population sizes are expected to evolve mutational robustness. Surprisingly, recent studies have concluded that molecules that meet none of these conditions--the microRNA precursors (pre-miRNAs) of multicellular eukaryotes--show signs of selection for mutational and/or environmental robustness. To resolve the apparent disagreement between theory and these studies, we have reconstructed the evolutionary history of Drosophila pre-miRNAs and compared the robustness of each sequence to that of its reconstructed ancestor. In addition, we "replayed the tape" of pre-miRNA evolution via simulation under different evolutionary assumptions and compared these alternative histories with the actual one. We found that Drosophila pre-miRNAs have evolved under strong purifying selection against changes in secondary structure. Contrary to earlier claims, there is no evidence that these RNAs have been shaped by either direct or congruent selection for any kind of robustness. Instead, the high robustness of Drosophila pre-miRNAs appears to be mostly intrinsic and likely a consequence of selection for functional structures.

  10. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R

  11. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  12. Swarm intelligence based wavelet coefficient feature selection for mass spectral classification: an application to proteomics data.

    PubMed

    Zhao, Weixiang; Davis, Cristina E

    2009-09-28

    This paper introduces the ant colony algorithm, a novel swarm intelligence based optimization method, to select appropriate wavelet coefficients from mass spectral data as a new feature selection method for ovarian cancer diagnostics. After determining the proper parameters for the ant colony algorithm (ACA) based search, we perform the feature searching process 100 times with the number of selected features fixed at 5. The results of this study show: (1) the classification accuracy based on the five selected wavelet coefficients can reach up to 100% for all of the training, validating and independent testing sets; (2) the eight most popular selected wavelet coefficients of the 100 runs can provide 100% accuracy for the training set, 100% accuracy for the validating set, and 98.8% accuracy for the independent testing set, which suggests the robustness and accuracy of the proposed feature selection method; and (3) the mass spectral data corresponding to the eight popular wavelet coefficients can be located by reverse wavelet transformation, and these located mass spectral data still maintain high classification accuracies (100% for the training set, 97.6% for the validating set, and 98.8% for the testing set) and also provide sufficient physical and medical meaning for future ovarian cancer mechanism studies. Furthermore, the corresponding mass spectral data (potential biomarkers) are in good agreement with other studies which have used the same sample set. Together these results suggest this feature extraction strategy will benefit the development of intelligent, real-time spectroscopy instrumentation based diagnosis and monitoring systems.
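
    The wavelet-coefficient feature space is easy to reconstruct with PyWavelets; in the hedged sketch below a simple univariate class-separation score stands in for the ant colony search, and the data are synthetic.

```python
# Rank wavelet coefficients of synthetic "spectra" by class separation.
import numpy as np
import pywt

rng = np.random.default_rng(12)
n, length = 40, 256
spectra = rng.standard_normal((n, length))
labels = np.repeat([0, 1], n // 2)
spectra[labels == 1, 60:70] += 1.5            # synthetic disease signature

coeffs = np.array([np.concatenate(pywt.wavedec(s, "db4", level=4))
                   for s in spectra])
mu0, mu1 = coeffs[labels == 0].mean(0), coeffs[labels == 1].mean(0)
score = np.abs(mu0 - mu1) / (coeffs.std(0) + 1e-9)   # crude Fisher-style score
print("top-5 wavelet coefficient indices:", np.argsort(score)[-5:][::-1])
```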

  13. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2), where k is the number of iterations. This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
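
    The acceleration behind such optimal gradient methods comes from the momentum (extrapolation) sequence; the FISTA-style sketch below applies it to l1-regularized least squares, which shares the l1 structure but is of course not the paper's RCRF trainer.

```python
# Accelerated proximal gradient (FISTA-style) on l1-regularized least squares.
import numpy as np

rng = np.random.default_rng(13)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[:5] = rng.standard_normal(5)   # sparse truth
b = A @ x_true + 0.01 * rng.standard_normal(200)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)

x = z = np.zeros(50); t = 1.0
for _ in range(300):
    x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)   # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + (t - 1) / t_new * (x_new - x)          # momentum: buys O(1/k^2)
    x, t = x_new, t_new

print("nonzeros recovered:", int(np.sum(np.abs(x) > 1e-3)), "of 5")
```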

  14. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  15. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The automatic designing capability of GRN topology that can exhibit robust behavior can dramatically change the current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Subsequently, this work presents an evolutionary algorithm that simulates natural evolution in silico, for identifying network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and designed a fitness approximation approach for efficient calculation of topological robustness which is computationally very intensive. The proposed framework was verified using two classic GRN behaviors: oscillation and bistability, although the framework is generalized for evolving other types of responses. The algorithm identified robust GRN architectures which were verified using different analysis and comparison. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055

  16. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and had performance exceeding that of the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in errors in the response of less than 10% compared to the full nonlinear error.

  17. DT-CWT Robust Filtering Algorithm for The Extraction of Reference and Waviness from 3-D Nano Scalar Surfaces

    NASA Astrophysics Data System (ADS)

    Ren, Zhi Ying; Gao, ChengHui; Han, GuoQiang; Ding, Shen; Lin, JianXing

    2014-04-01

    Dual tree complex wavelet transform (DT-CWT) exhibits shift invariance, directional selectivity, perfect reconstruction (PR), and limited redundancy, and can effectively separate various surface components. However, at the nanoscale the morphology contains pits and convexities and is more complex to characterize. This paper presents an improved approach which can simultaneously separate reference and waviness and remains robust against abnormal signals. We included a bilateral filtering (BF) stage in DT-CWT to solve imaging problems. To verify the feasibility of the new method and to test its performance, we used a computer simulation based on three generations of wavelets and the improved DT-CWT, and we conducted two case studies. Our results show that the improved DT-CWT not only enhances filtering robustness under abnormal interference, but also accurately and reliably extracts the reference and waviness from 3-D nano scalar surfaces.
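
    The structure of such a pipeline can be sketched with the open-source dtcwt and OpenCV packages: decompose the surface, smooth the lowpass (reference) band with an edge-preserving bilateral filter so abnormal spikes are suppressed, and reconstruct. The level count, filter parameters, and the choice to filter only the lowpass band are illustrative assumptions, not the authors' exact algorithm.

      import numpy as np
      import cv2
      import dtcwt  # pip install dtcwt

      # Stand-in for a measured 3-D surface height map.
      surface = np.random.rand(256, 256).astype(np.float32)

      transform = dtcwt.Transform2d()
      pyramid = transform.forward(surface, nlevels=4)

      # Bilateral filtering of the coarse band: robust to abnormal signals
      # (outlier pits/convexities) while preserving genuine steps.
      smoothed_lowpass = cv2.bilateralFilter(
          pyramid.lowpass.astype(np.float32), d=9, sigmaColor=0.1, sigmaSpace=5)

      # Reconstruct the reference from the smoothed coarse band alone.
      reference = transform.inverse(dtcwt.Pyramid(
          smoothed_lowpass, tuple(np.zeros_like(h) for h in pyramid.highpasses)))
      waviness = surface - reference  # remaining mid-scale component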

  18. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  19. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  20. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  1. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument is selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument is satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector is obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure are calculated with the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among plate edge length, plate orthogonality, and plate curvature errors. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
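
    For a rough sense of the final step, the peak RCS of an ideal triangular trihedral corner reflector has the closed form sigma = 4*pi*a^4 / (3*lambda^2), with edge length a and wavelength lambda; the sketch below compares the ideal edge length with a deformed one recovered by the 3-D instrument and reports the difference as the accuracy figure. Using this closed form in place of a full RCS simulation, and the numbers themselves, are simplifying assumptions.

      import math

      def trihedral_peak_rcs(edge_m, wavelength_m):
          # Peak RCS of an ideal triangular trihedral corner reflector, in m^2.
          return 4 * math.pi * edge_m**4 / (3 * wavelength_m**2)

      wavelength = 0.056        # ~5.4 GHz (C band)
      ideal_edge = 0.500        # design edge length, m
      measured_edge = 0.497     # edge length from the 3-D measuring instrument, m

      rcs_ideal = trihedral_peak_rcs(ideal_edge, wavelength)
      rcs_measured = trihedral_peak_rcs(measured_edge, wavelength)
      delta_db = 10 * math.log10(rcs_measured / rcs_ideal)
      print(f"ideal {rcs_ideal:.2f} m^2, deformed {rcs_measured:.2f} m^2, "
            f"error {delta_db:+.3f} dB")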

  2. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on data collected from their activity tracking devices. There is therefore an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually perform. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals.
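
    The retraining loop generalizes beyond this study and can be sketched with scikit-learn: whenever a subject's deceptive recordings fool the classifier, append them with the activity actually performed as the label and refit. The feature windows, classifier choice, and stopping rule below are placeholders, not the authors' pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)

      # Hypothetical accelerometer feature windows and activity labels (0-3).
      X_train = rng.normal(size=(500, 12))
      y_train = rng.integers(0, 4, 500)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

      for round_no in range(5):
          # One round of a subject trying to cheat; y_true is what they really did.
          X_trick = rng.normal(size=(50, 12))
          y_true = rng.integers(0, 4, 50)
          fooled = clf.predict(X_trick) != y_true
          if not fooled.any():
              break  # the subject can no longer cheat
          # Retrain on the deceptive windows, labeled with the true activity.
          X_train = np.vstack([X_train, X_trick[fooled]])
          y_train = np.concatenate([y_train, y_true[fooled]])
          clf.fit(X_train, y_train)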

  3. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.
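
    One way to read the LSB-Witness scheme, sketched below under that interpretation: the cover's LSB plane is left untouched, and the second LSB is rewritten as a flag saying whether the cover LSB already equals the message bit, so the receiver recovers each bit from the LSB and its witness. The bit ordering and flag convention are assumptions, not the paper's specification.

      import numpy as np

      def embed_witness(cover, bits):
          # Keep the LSB plane intact; write into the 2nd LSB a witness bit:
          # 1 where the cover LSB already equals the message bit, else 0.
          stego = cover.copy()
          flat = stego.ravel()
          lsb = flat[:len(bits)] & 1
          witness = (lsb == bits).astype(np.uint8)
          flat[:len(bits)] = (flat[:len(bits)] & ~np.uint8(2)) | (witness << 1)
          return stego

      def extract_witness(stego, n_bits):
          flat = stego.ravel()[:n_bits]
          lsb = flat & 1
          witness = (flat >> 1) & 1
          # Where the witness is 0, the message bit is the complement of the LSB.
          return np.where(witness == 1, lsb, 1 - lsb)

      cover = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
      bits = np.random.default_rng(3).integers(0, 2, 100, dtype=np.uint8)
      assert np.array_equal(extract_witness(embed_witness(cover, bits), 100), bits)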

  5. A robust meta-classification strategy for cancer diagnosis from gene expression data.

    PubMed

    Alexe, Gabriela; Bhanot, Gyan; Venkataraghavan, Babu; Ramaswamy, Ramakrishna; Lepre, Jorge; Levine, Arnold J; Stolovitzky, Gustavo

    2005-01-01

    One of the major challenges in cancer diagnosis from microarray data is to develop robust classification models which are independent of the analysis techniques used and can combine data from different laboratories. We propose a meta-classification scheme which uses a robust multivariate gene selection procedure and integrates the results of several machine learning tools trained on raw and pattern data. We validate our method by applying it to distinguish diffuse large B-cell lymphoma (DLBCL) from follicular lymphoma (FL) on two independent datasets: the HuGeneFL Affymetrix dataset of Shipp et al. (www.genome.wi.mit.edu/MPR/lymphoma) and the Hu95Av2 Affymetrix dataset (Dalla-Favera's laboratory, Columbia University). Our meta-classification technique achieves higher predictive accuracy than each of the individual classifiers trained on the same dataset and is robust against various data perturbations. We also find that combinations of p53-responsive genes (e.g., p53, PLK1 and CDK2) are highly predictive of the phenotype.
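
    The flavor of the scheme, gene pre-selection followed by a combination of heterogeneous learners, can be imitated with scikit-learn's feature-selection and soft-voting utilities. This is an analogy only: the study used its own multivariate gene selection and pattern-based classifiers, and the data below are synthetic.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Stand-in for microarray data: 200 samples x 2000 genes, two phenotypes.
      X, y = make_classification(n_samples=200, n_features=2000,
                                 n_informative=30, random_state=0)

      meta = make_pipeline(
          SelectKBest(f_classif, k=50),   # simplified multivariate gene selection
          VotingClassifier(               # integrate several learners
              estimators=[("lr", LogisticRegression(max_iter=1000)),
                          ("svm", SVC(probability=True)),
                          ("rf", RandomForestClassifier(n_estimators=200))],
              voting="soft"))

      # Selection happens inside each CV fold, avoiding selection bias.
      print(cross_val_score(meta, X, y, cv=5).mean())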

  6. Robust Detection of Impaired Resting State Functional Connectivity Networks in Alzheimer's Disease Using Elastic Net Regularized Regression

    PubMed Central

    Teipel, Stefan J.; Grothe, Michel J.; Metzger, Coraline D.; Grimmer, Timo; Sorg, Christian; Ewers, Michael; Franzmeier, Nicolai; Meisenzahl, Eva; Klöppel, Stefan; Borchardt, Viola; Walter, Martin; Dyrba, Martin

    2017-01-01

    The large number of multicollinear regional features that are provided by resting state (rs) fMRI data requires robust feature selection to uncover consistent networks of functional disconnection in Alzheimer's disease (AD). Here, we compared elastic net regularized and classical stepwise logistic regression with respect to consistency of feature selection and diagnostic accuracy using rs-fMRI data from four centers of the “German resting-state initiative for diagnostic biomarkers” (psymri.org), comprising 53 AD patients and 118 age- and sex-matched healthy controls. Using all possible pairs of correlations between the time series of rs-fMRI signal from 84 functionally defined brain regions as the initial set of predictor variables, we calculated accuracy of group discrimination and consistency of feature selection with bootstrap cross-validation. Mean areas under the receiver operating characteristic curves, as a measure of diagnostic accuracy, were 0.70 for unregularized and 0.80 for regularized regression. Elastic net regression was insensitive to scanner effects and recovered a consistent network of functional connectivity decline in AD that encompassed parts of the dorsal default mode network as well as brain regions involved in attention, executive control, and language processing. Stepwise logistic regression found no consistent network of AD-related functional connectivity decline. Regularized regression has high potential to increase diagnostic accuracy and consistency of feature selection from multicollinear functional neuroimaging data in AD. Our findings suggest an extended network of functional alterations in AD, but the diagnostic accuracy of rs-fMRI in this multicenter setting did not reach the benchmark defined for a useful biomarker of AD. PMID:28101051
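
    In scikit-learn terms, the regularized model corresponds to logistic regression with an elastic-net penalty; refitting on bootstrap resamples then measures how consistently each connectivity feature is selected. The shapes, hyperparameters, and 80% consistency cut-off below are placeholders.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.utils import resample

      rng = np.random.default_rng(0)
      X = rng.normal(size=(171, 84 * 83 // 2))   # pairwise correlations, 171 subjects
      y = rng.integers(0, 2, 171)                # 1 = AD, 0 = control

      n_boot = 25
      selected = np.zeros(X.shape[1])
      for b in range(n_boot):
          Xb, yb = resample(X, y, random_state=b)
          enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                    l1_ratio=0.5, C=0.1, max_iter=2000).fit(Xb, yb)
          selected += (enet.coef_.ravel() != 0)

      consistent = np.flatnonzero(selected >= 0.8 * n_boot)
      print(len(consistent), "consistently selected connections")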

  7. Dynamics robustness of cascading systems.

    PubMed

    Young, Jonathan T; Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko

    2017-03-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) a constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; and 2) constraints on the initial conditions: the kinase activity needs to be fast enough that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discuss the relevance of such robustness to several biological examples and the validity of the above conditions therein.
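
    A minimal numerical illustration of the setting: a three-stage push-pull (kinase/phosphatase) cascade integrated with SciPy, where perturbing an upstream kinase rate barely changes the downstream time course when the stated conditions hold (the perturbed module's phosphatase is not rate-limiting, kinase activity saturates each stage). All rate values are arbitrary.

      import numpy as np
      from scipy.integrate import solve_ivp

      def cascade(t, x, k, p, stim_end=5.0):
          u = 1.0 if t < stim_end else 0.0      # square-pulse stimulus
          dx1 = k[0] * u * (1 - x[0]) - p[0] * x[0]
          dx2 = k[1] * x[0] * (1 - x[1]) - p[1] * x[1]
          dx3 = k[2] * x[1] * (1 - x[2]) - p[2] * x[2]
          return [dx1, dx2, dx3]

      t_eval = np.linspace(0, 20, 400)
      base = solve_ivp(cascade, (0, 20), [0, 0, 0], t_eval=t_eval,
                       args=([10, 10, 10], [1, 1, 0.2]))
      pert = solve_ivp(cascade, (0, 20), [0, 0, 0], t_eval=t_eval,
                       args=([3, 10, 10], [1, 1, 0.2]))  # weaker upstream kinase

      # The output stage x3 is nearly unchanged: the slow downstream
      # phosphatase (0.2) sets the response duration and masks the change.
      print(np.max(np.abs(base.y[2] - pert.y[2])))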

  9. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  10. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  11. Robust neuronal dynamics in premotor cortex during motor planning

    PubMed Central

    Li, Nuo; Daie, Kayvon; Svoboda, Karel; Druckmann, Shaul

    2016-01-01

    Neural activity maintains representations that bridge past and future events, often over many seconds. Network models can produce persistent and ramping activity, but the positive feedback that is critical for these slow dynamics can cause sensitivity to perturbations. Here we use electrophysiology and optogenetic perturbations in mouse premotor cortex to probe robustness of persistent neural representations during motor planning. Preparatory activity is remarkably robust to large-scale unilateral silencing: detailed neural dynamics that drive specific future movements were quickly and selectively restored by the network. Selectivity did not recover after bilateral silencing of premotor cortex. Perturbations to one hemisphere are thus corrected by information from the other hemisphere. Corpus callosum bisections demonstrated that premotor cortex hemispheres can maintain preparatory activity independently. Redundancy across selectively coupled modules, as we observed in premotor cortex, is a hallmark of robust control systems. Network models incorporating these principles show robustness that is consistent with data. PMID:27074502

  12. Feature Selection Based on High Dimensional Model Representation for Hyperspectral Images.

    PubMed

    Taskin Kaya, Gulsen; Kaya, Huseyin; Bruzzone, Lorenzo

    2017-03-24

    In hyperspectral image analysis, the classification task has generally been addressed jointly with dimensionality reduction, due to both the high correlation between spectral features and the noise present in spectral bands, which might significantly degrade classification performance. In supervised classification, limited training instances in proportion to the number of spectral features have a negative impact on classification accuracy, which is known as the Hughes effect or the curse of dimensionality in the literature. In this paper, we focus on the dimensionality reduction problem and propose a novel feature-selection algorithm based on High Dimensional Model Representation. The proposed algorithm is tested on some toy examples and hyperspectral datasets in comparison to conventional feature-selection algorithms in terms of classification accuracy, stability of the selected features, and computational time. The results show that the proposed approach provides both high classification accuracy and robust features with satisfactory computational time.

  13. Robust relativistic bit commitment

    NASA Astrophysics Data System (ADS)

    Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony

    2016-12-01

    Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.

  14. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  15. High accuracy fuel flowmeter

    NASA Technical Reports Server (NTRS)

    1986-01-01

    All three flowmeter concepts (vortex, dual turbine, and angular momentum) were subjected to experimental and analytical investigation to determine their potential prototype performance. The three concepts were then given a comprehensive rating: eight performance parameters were evaluated on a zero-to-ten scale, weighted, and summed. The relative ratings of the vortex, dual turbine, and angular momentum flowmeters are 0.71, 1.00, and 0.95, respectively. The dual turbine flowmeter concept was selected as the primary candidate and the angular momentum flowmeter as the secondary candidate for prototype development and evaluation.
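
    The rating reduces to a weighted score: rate each performance parameter on a zero-to-ten scale, multiply by its weight, sum, and normalize to the best concept. The parameter names, weights, and scores below are invented for illustration; only the scoring recipe and the final relative ratings come from the report.

      # Hypothetical weights (sum to 1) and 0-10 scores for eight parameters.
      weights = {"accuracy": 0.20, "rangeability": 0.15, "durability": 0.15,
                 "response_time": 0.10, "pressure_drop": 0.10, "reliability": 0.10,
                 "weight": 0.10, "cost": 0.10}
      scores = {
          "vortex":           {"accuracy": 6, "rangeability": 5, "durability": 8,
                               "response_time": 7, "pressure_drop": 7,
                               "reliability": 7, "weight": 8, "cost": 7},
          "dual_turbine":     {"accuracy": 9, "rangeability": 9, "durability": 7,
                               "response_time": 9, "pressure_drop": 8,
                               "reliability": 8, "weight": 8, "cost": 8},
          "angular_momentum": {"accuracy": 8, "rangeability": 8, "durability": 9,
                               "response_time": 8, "pressure_drop": 7,
                               "reliability": 9, "weight": 7, "cost": 9},
      }

      totals = {c: sum(weights[p] * s[p] for p in weights) for c, s in scores.items()}
      best = max(totals.values())
      for concept, total in totals.items():
          print(f"{concept:16s} relative rating {total / best:.2f}")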

  16. Step Detection Robust against the Dynamics of Smartphones

    PubMed Central

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
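
    The peak-valley core of such a detector can be sketched with SciPy: find candidate peaks and valleys in the acceleration magnitude, prune peaks with a magnitude threshold built from the step average and step deviation, and enforce a minimum time gap. The fixed thresholds below stand in for the paper's adaptive ones.

      import numpy as np
      from scipy.signal import find_peaks

      def count_steps(acc_mag, fs=50.0):
          min_gap = int(0.25 * fs)  # simplified temporal threshold (samples)
          peaks, _ = find_peaks(acc_mag, distance=min_gap)
          valleys, _ = find_peaks(-acc_mag, distance=min_gap)

          # Magnitude threshold from step average and step deviation:
          # suppress pseudo peaks from mode/pose transitions.
          amp = acc_mag[peaks]
          peaks = peaks[amp > amp.mean() - amp.std()]

          # A step is a kept peak paired with a following valley.
          return int(sum(1 for p in peaks if np.any(valleys > p)))

      t = np.arange(0, 10, 1 / 50.0)
      rng = np.random.default_rng(4)
      acc = 9.8 + 2.5 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 0.3, t.size)
      print(count_steps(acc))  # roughly 20 steps: 2 Hz cadence over 10 s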

  17. Robust Control Feedback and Learning

    DTIC Science & Technology

    2002-11-30

    Final report for AFOSR Grant F49620-98-1-0026, "Robust Control, Feedback and Learning" (principal investigator: Michael G. Safonov). Only Standard Form 298 documentation fragments and reference-list entries are recoverable from this record, e.g., M. G. Safonov, "Recent advances in robust control, feedback and learning," in S. O. R. Moheimani, editor, Perspectives in Robust Control.

  18. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Secondly, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
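
    A compact sketch of that pipeline: simulate a failure scenario at several failure percentages, compute a set of robustness metrics for each realization, and let PCA pick the most informative metric per failure level, collecting the values into a surface. The graph model and the three networkx metrics are illustrative stand-ins.

      import numpy as np
      import networkx as nx
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      G0 = nx.barabasi_albert_graph(200, 3, seed=1)

      def metrics(G):
          giant = max(nx.connected_components(G), key=len)
          mean_deg = np.mean([d for _, d in G.degree()])
          return [len(giant) / G0.number_of_nodes(), nx.density(G), mean_deg]

      surface = []
      for frac in np.linspace(0.05, 0.5, 10):       # percentage of failed nodes
          rows = []
          for rep in range(20):                     # failure-process realizations
              G = G0.copy()
              failed = rng.choice(list(G0), size=int(frac * 200), replace=False)
              G.remove_nodes_from(failed)
              rows.append(metrics(G))
          rows = np.array(rows)
          # Most informative metric: largest loading on the first component.
          best = int(np.argmax(np.abs(PCA(n_components=1).fit(rows).components_[0])))
          surface.append(rows[:, best])

      surface = np.array(surface)   # failure level x realization
      print(surface.shape)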

  19. Robust Face Sketch Style Synthesis.

    PubMed

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li

    2016-01-01

    Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to apply artistic effects to photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs. This requirement limits the ability of these methods to generalize to arbitrarily stylistic sketches. To address this drawback, we propose a robust face sketch style synthesis method which can convert photos to arbitrarily stylistic sketches based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. To further refine the obtained candidate image patches, a multi-feature-based optimization model is introduced. Finally, by assembling the refined candidate image patches, the completed face sketch is obtained. To further enhance the quality of synthesized sketches, a cascaded regression strategy is adopted. Compared with state-of-the-art face sketch synthesis methods, experimental results on several commonly used face sketch databases and celebrity photos demonstrate the effectiveness of the proposed method.

  20. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review

    PubMed Central

    van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M

    2015-01-01

    Objective: Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods: Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results: 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations, including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions: Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended.

  1. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG that achieves high output performance without compromising device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spotlights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also of acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG offers an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing.

  2. Robust control algorithms for Mars aerobraking

    NASA Astrophysics Data System (ADS)

    Shipley, Buford W., Jr.; Ward, Donald T.

    Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are also tested against atmospheres where the inbound and outbound density functions are different: square wave density pulses are again used, but only for the outbound leg of the trajectory, and sine waves are additionally used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust control algorithm examined.

  3. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  4. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  5. Robust reflective pupil slicing technology

    NASA Astrophysics Data System (ADS)

    Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.

    2014-07-01

    Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes, but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications, performing the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation, this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices.

  6. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head, compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³).
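
    SimpleITK makes the flavor of this approach easy to prototype; the block-wise convergence test below, based on the mean change in the displacement field between short runs, mimics the paper's criterion, but the filter variant, block size, tolerance, and file names are all assumptions.

      import SimpleITK as sitk

      fixed = sitk.ReadImage("cbct_ref.nii", sitk.sitkFloat32)    # hypothetical files
      moving = sitk.ReadImage("cbct_new.nii", sitk.sitkFloat32)

      demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
      demons.SetNumberOfIterations(10)      # run in short blocks
      demons.SetStandardDeviations(1.5)     # Gaussian smoothing of the field

      field = None
      for block in range(20):
          new_field = (demons.Execute(fixed, moving) if field is None
                       else demons.Execute(fixed, moving, field))
          if field is not None:
              change = abs(sitk.GetArrayFromImage(new_field)
                           - sitk.GetArrayFromImage(field)).mean()
              if change < 0.01:             # field barely changing: converged
                  field = new_field
                  break
          field = new_field

      warped = sitk.Warp(moving, field)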

  7. Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image

    NASA Astrophysics Data System (ADS)

    Liang, Chao; Jia, Ling-yao; Shi, Dong-cheng

    2017-01-01

    This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. Kernel Fisher analysis combined with a kernel matrix is proposed to select dominant features. A nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.
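
    The classification stage is small enough to sketch directly: PCA-whiten the (here synthetic) Gabor features of gait energy images, then assign each probe the label of its nearest gallery sample under cosine distance in the whitened space. Feature extraction and the kernel Fisher step are omitted.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(6)
      gallery = rng.normal(size=(100, 512))   # stand-in Gabor features of GEIs
      labels = rng.integers(0, 10, 100)
      probe = rng.normal(size=(20, 512))

      pca = PCA(n_components=40, whiten=True).fit(gallery)
      G, P = pca.transform(gallery), pca.transform(probe)

      def cosine_dist(a, B):
          # 1 - cosine similarity between vector a and each row of B.
          return 1 - B @ a / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))

      pred = np.array([labels[np.argmin(cosine_dist(p, G))] for p in P])
      print(pred)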

  8. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  9. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  10. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, the face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas in the images using a new robust correlation coefficient, which down-weights regions in the face violating the symmetry. Raw images of faces without usual preliminary transformations are considered. The robust correlation coefficient based on the least weighted squares regression yields very promising results also in the localization of such faces, which are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis can be applicable to other problems of forensic anthropology.
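
    The down-weighting idea can be illustrated as a weighted correlation between a patch and its mirrored neighbor, with weights shrinking for pixels that violate symmetry. The weight function below is a generic robust choice for illustration, not the paper's least weighted squares estimator.

      import numpy as np

      def robust_symmetry_corr(left, right, n_iter=5):
          # Compare 'left' with the horizontally mirrored 'right' patch.
          x = left.ravel().astype(float)
          y = np.fliplr(right).ravel().astype(float)
          w = np.ones_like(x)
          for _ in range(n_iter):
              mx, my = np.average(x, weights=w), np.average(y, weights=w)
              resid = (x - mx) - (y - my)          # per-pixel symmetry violation
              scale = np.median(np.abs(resid)) + 1e-9
              w = 1.0 / (1.0 + (resid / (3 * scale)) ** 2)  # down-weight violators
          cov = np.average((x - mx) * (y - my), weights=w)
          sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
          sy = np.sqrt(np.average((y - my) ** 2, weights=w))
          return cov / (sx * sy)

      patch = np.random.default_rng(7).random((32, 32))
      print(robust_symmetry_corr(patch, np.fliplr(patch)))  # ~1.0: perfect symmetry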

  11. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  12. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5-b.y.-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymer and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  13. Robust Nonnegative Patch Alignment for Dimensionality Reduction.

    PubMed

    You, Xinge; Ou, Weihua; Chen, Chun Lung Philip; Li, Qiang; Zhu, Ziqi; Tang, Yuanyan

    2015-11-01

    Dimensionality reduction is an important method to analyze high-dimensional data and has many applications in pattern recognition and computer vision. In this paper, we propose a robust nonnegative patch alignment for dimensionality reduction, which includes a reconstruction error term and a whole alignment term. We use a correntropy-induced metric to measure the reconstruction error, in which the weight is learned adaptively for each entry. For the whole alignment, we propose locality-preserving robust nonnegative patch alignment (LP-RNA) and sparsity-preserving robust nonnegative patch alignment (SP-RNA), which are unsupervised and supervised, respectively. In the LP-RNA, we propose a locally sparse graph to encode the local geometric structure of the manifold embedded in high-dimensional space. In particular, we select large p-nearest neighborhoods for each sample, then obtain the sparse representation with respect to these neighbors. The sparse representation is used to build a graph, which simultaneously enjoys locality, sparseness, and robustness. In the SP-RNA, we simultaneously use local geometric structure and discriminative information, in which the sparse reconstruction coefficient is used to characterize the local geometric structure and weighted distance is used to measure the separability of different classes. For the induced nonconvex objective function, we formulate it as a weighted nonnegative matrix factorization based on half-quadratic optimization. We propose a multiplicative update rule to solve this function and show that the objective function converges to a local optimum. Several experimental results on synthetic and real data sets demonstrate that the learned representation is more discriminative and robust than those of most existing dimensionality reduction methods.

  14. Inorganic Adhesives for Robust Superwetting Surfaces.

    PubMed

    Liu, Mingming; Li, Jing; Hou, Yuanyuan; Guo, Zhiguang

    2017-01-24

    Superwetting surfaces require micro-/nanohierarchical structures but are mechanically weak. Moreover, such surfaces are easily polluted by amphiphiles. In this work, inorganic adhesives are presented as a building block for the construction of superwetting surfaces and to promote robustness. Nanomaterials can be selected as fillers to provide the desired functions. We adopted a simple procedure to fabricate underwater superoleophobic surfaces by spraying a titanium dioxide suspension combined with an aluminum phosphate binder on stainless steel meshes. The surfaces maintained their excellent performance in regard to oil repellency under water, oil/water separation, and self-cleaning properties even after 100 abrasion cycles with sandpaper. Robust superwetting surfaces favored by inorganic adhesives can be extended to other nanoparticles and substrates, which is potentially advantageous in practical applications.

  15. Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair

    NASA Astrophysics Data System (ADS)

    Sasou, Akira; Kojima, Hiroaki

    2009-12-01

    Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is also well known that speech recognition accuracy drastically degrades when the microphone is placed far from the user. In this paper, we develop a noise robust speech recognition system for a voice-driven wheelchair that does not require the user to wear sensors. We verified the effectiveness of our system in experiments in different environments, and confirmed that it achieves almost the same recognition accuracy as a headset microphone.

  16. Robust multi-objective optimization of state feedback controllers for heat exchanger system with probabilistic uncertainty

    NASA Astrophysics Data System (ADS)

    Lotfi, Babak; Wang, Qiuwang

    2013-07-01

    The performance of thermal control systems has, in recent years, improved in numerous ways due to developments in control theory and information technology. The shell-and-tube heat exchanger (STHX) is a medium in which the heat transfer process occurs. The accuracy of the heat exchanger depends on the performance of both elements; therefore, both components need to be controlled in order to achieve a substantial result in the process. For this purpose, the actual dynamics of both the shell and the tube of the heat exchanger are crucial. In this paper, an optimal reliability-based multi-objective Pareto design of robust state feedback controllers is presented for an STHX having parameters with probabilistic uncertainties. Accordingly, the probabilities of failure of the objective functions are also considered in the reliability-based design optimization (RBDO) approach. A new multi-objective uniform-diversity genetic algorithm (MUGA) is presented and used for Pareto optimum design of linear state feedback controllers for the STHX problem. In this way, the Pareto front of optimum controllers is first obtained for the nominal deterministic STHX using the conflicting objective functions in the time domain. Such a Pareto front is then obtained for the STHX having probabilistic uncertainties in its parameters, using the statistical moments of those objective functions through a Hammersley Sequence Sampling (HSS) approach. It is shown that multi-objective reliability-based Pareto optimization of the robust state feedback controllers using MUGA includes solutions that may be obtained with various crisp threshold values of probability of failure and thus removes the difficulty of selecting suitable crisp values. Besides, the multi-objective Pareto optimization of such robust feedback controllers using MUGA unveils some very important and informative trade-offs among those objective functions. Consequently, some optimum robust state feedback controllers can be chosen as a compromise from the Pareto frontiers.

  17. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  18. Robust Registration of Dynamic Facial Sequences.

    PubMed

    Sariyanidi, Evangelos; Gunes, Hatice; Cavallaro, Andrea

    2017-04-01

    Accurate face registration is a key step for several image analysis applications. However, existing registration methods are prone to temporal drift errors or jitter among consecutive frames. In this paper, we propose an iterative rigid registration framework that estimates the misalignment with trained regressors. The input of the regressors is a robust motion representation that encodes the motion between a misaligned frame and the reference frame(s), and enables reliable performance under non-uniform illumination variations. Drift errors are reduced when the motion representation is computed from multiple reference frames. Furthermore, we use the L2 norm of the representation as a cue for performing coarse-to-fine registration efficiently. Importantly, the framework can identify registration failures and correct them. Experiments show that the proposed approach achieves significantly higher registration accuracy than the state-of-the-art techniques in challenging sequences.

  19. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  20. Simple Robust Fixed Lag Smoothing

    DTIC Science & Technology

    1988-12-02

    Simple Robust Fixed Lag Smoothing, with Application to Radar Glint Noise. N. D. Le and R. D. Martin, Technical Report No. 149, December 1988, Department of Statistics, GN-22. The emphasis here is on fixed-lag smoothing, as opposed to the use of existing robust fixed-interval smoothers (e.g., as in Martin, 1979

  1. Speed and Accuracy in Shallow and Deep Stochastic Parsing

    DTIC Science & Technology

    2004-01-01

    This paper reports some experiments that compare the accuracy and performance of two stochastic parsing systems. The currently popular... deep linguistic grammars are too difficult to produce, lack coverage and robustness, and also have poor run-time performance. The Collins parser is... Performing organization: Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304.

  2. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only for the lowest angles of incidence was performance better with either the ellipse or lead-in method. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement.
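
    The ellipse method mentioned above rests on a simple geometric relation: a round bullet leaves an elliptical defect whose minor-to-major axis ratio equals the sine of the impact angle measured from the target surface. A minimal sketch of that relation follows; the function name and measurements are hypothetical.

    ```python
    import math

    def ellipse_impact_angle(minor_axis_mm, major_axis_mm):
        """Estimate the angle of incidence (degrees, measured from the
        target surface) from the axes of an elliptical bullet defect.
        Classic ellipse method: sin(angle) = minor / major."""
        ratio = minor_axis_mm / major_axis_mm
        return math.degrees(math.asin(min(ratio, 1.0)))

    # A 9 mm wide, 18 mm long defect suggests roughly a 30 degree impact.
    print(ellipse_impact_angle(9.0, 18.0))  # ~= 30 degrees
    ```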

  3. Building robust conservation plans.

    PubMed

    Visconti, Piero; Joppa, Lucas

    2015-04-01

    Systematic conservation planning optimizes trade-offs between biodiversity conservation and human activities by accounting for socioeconomic costs while aiming to achieve prescribed conservation objectives. However, the most cost-efficient conservation plan can be very dissimilar to any other plan achieving the same set of conservation objectives. This is problematic under conditions of implementation uncertainty (e.g., if all or part of the plan becomes unattainable). Through simulations of parallel implementation of conservation plans and habitat loss, we determined the conditions under which optimal plans have limited chances of implementation and where implementation attempts would fail to meet objectives. We then devised a new, flexible method for identifying conservation priorities and scheduling conservation actions. This method entails generating a number of alternative plans, calculating the similarity in site composition among all plans, and selecting the plan with the highest density of neighboring plans in similarity space. We compared our method with the classic method that maximizes cost efficiency, using synthetic and real data sets. When implementation was uncertain--a common reality--our method provided a higher likelihood of achieving conservation targets. We found that χ, a measure of the shortfall in objectives achieved by a conservation plan if the plan could not be implemented entirely, was the main factor determining the relative performance of a flexibility-enhanced approach to conservation prioritization. Our findings should help planning authorities prioritize conservation efforts in the face of uncertainty about the future condition and availability of sites.
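
    A minimal sketch of the selection idea, under the assumptions that plans are sets of site ids, similarity is Jaccard similarity, and the "density of neighboring plans" is approximated by counting plans above a similarity threshold. The threshold and helper names are hypothetical, not the paper's exact procedure.

    ```python
    import numpy as np

    def jaccard(a, b):
        """Similarity in site composition between two plans (sets of site ids)."""
        return len(a & b) / len(a | b)

    def most_central_plan(plans, neighbour_threshold=0.6):
        """Pick the plan with the highest density of neighbouring plans in
        similarity space, i.e. the plan that is easiest to substitute."""
        counts = [sum(1 for j, q in enumerate(plans)
                      if i != j and jaccard(p, q) >= neighbour_threshold)
                  for i, p in enumerate(plans)]
        return plans[int(np.argmax(counts))]

    plans = [{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 3, 6}, {7, 8, 9, 10}]
    print(most_central_plan(plans))  # one of the three mutually similar plans
    ```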

  4. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
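
    For readers unfamiliar with decision rules, the affine building block that ILDR refines looks like the sketch below; the coefficients and units are hypothetical, and the actual ILDR adds piecewise-linear segments per uncertainty.

    ```python
    import numpy as np

    def ldr_release(coeffs, observed_inflows):
        """Linear decision rule: the release decision is an affine function
        of the inflows observed so far."""
        return coeffs[0] + float(np.dot(coeffs[1:], observed_inflows))

    # Hypothetical rule: base release of 100 units plus 30% of last inflow.
    print(ldr_release(np.array([100.0, 0.3]), np.array([250.0])))  # 175.0
    ```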

  5. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115

  6. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
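
    A hedged sketch of the plug-in idea: replace the classical covariance with a robust estimate before computing minimum-variance weights. The Minimum Covariance Determinant estimator from scikit-learn is used here purely as a stand-in for the paper's minimum-pseudodistance estimators; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.covariance import MinCovDet

    def robust_min_variance_weights(returns):
        """Minimum-variance portfolio weights from a robust covariance
        estimate (MCD used as a stand-in robust estimator)."""
        cov = MinCovDet().fit(returns).covariance_
        inv = np.linalg.inv(cov)
        ones = np.ones(cov.shape[0])
        w = inv @ ones
        return w / w.sum()

    rng = np.random.default_rng(0)
    r = rng.normal(0.001, 0.02, size=(500, 4))
    r[::50] += 0.5                      # inject outlier returns
    print(robust_min_variance_weights(r))
    ```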

  7. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Tradeoff on Phenotype Robustness in Biological Networks Part II: Ecological Networks.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    In ecological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, as well as environmental robustness for resisting environmental disturbances, so that the phenotype stability of ecological networks can be maintained, thus guaranteeing phenotype robustness. However, it is difficult to analyze the network robustness of ecological systems because they are complex nonlinear partial differential stochastic systems. This paper develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance sensitivity in ecological networks. We found that the phenotype robustness criterion for ecological networks is: if intrinsic robustness + environmental robustness ≤ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations and environmental disturbances. These results for robust ecological networks are similar to those for robust gene regulatory networks and evolutionary networks, even though they operate on different spatio-temporal scales.

  8. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; together with its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications, it now constitutes an extensive and coherent theory of robust parametric stability, summarized in the results contained here.

  9. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
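
    A simplified sketch of one robustness evaluation per epoch: remove the k highest-flow arcs and measure the fraction of node pairs that remain mutually reachable. Pairwise connectivity is one common robustness measure; the exact bounds in the paper come from a mathematical program, which this sketch does not reproduce.

    ```python
    import networkx as nx

    def robustness_after_arc_deletion(G, flows, k):
        """Fraction of ordered node pairs still connected after removing
        the k arcs carrying the highest observed flow in one epoch."""
        H = G.copy()
        worst = sorted(flows, key=flows.get, reverse=True)[:k]
        H.remove_edges_from(worst)
        n = H.number_of_nodes()
        reachable = sum(len(c) * (len(c) - 1)
                        for c in nx.connected_components(H))
        return reachable / (n * (n - 1))

    G = nx.cycle_graph(8)
    flows = {e: i for i, e in enumerate(G.edges)}   # hypothetical flows
    print(robustness_after_arc_deletion(G, flows, 2))
    ```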

  10. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  11. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well over a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found; the relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front; this set includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan is based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve this problem using several alternative robustness metrics, including both satisficing and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the

  12. Accuracy of genotype imputation in sheep breeds.

    PubMed

    Hayes, B J; Bowman, P J; Daetwyler, H D; Kijas, J W; van der Werf, J H J

    2012-02-01

    Although genomic selection offers the prospect of improving the rate of genetic gain in meat, wool and dairy sheep breeding programs, the key constraint is likely to be the cost of genotyping. Potentially, this constraint can be overcome by genotyping selection candidates for a low density (low cost) panel of SNPs with sparse genotype coverage, imputing a much higher density of SNP genotypes using a densely genotyped reference population. These imputed genotypes would then be used with a prediction equation to produce genomic estimated breeding values. In the future, it may also be desirable to impute very dense marker genotypes or even whole genome re-sequence data from moderate density SNP panels. Such a strategy could lead to an accurate prediction of genomic estimated breeding values across breeds, for example. We used genotypes from 48 640 (50K) SNPs genotyped in four sheep breeds to investigate both the accuracy of imputation of the 50K SNPs from low density SNP panels, as well as prospects for imputing very dense or whole genome re-sequence data from the 50K SNPs (by leaving out a small number of the 50K SNPs at random). Accuracy of imputation was low if the sparse panel had less than 5000 (5K) markers. Across breeds, it was clear that the accuracy of imputing from sparse marker panels to 50K was higher if the genetic diversity within a breed was lower, such that relationships among animals in that breed were higher. The accuracy of imputation from sparse genotypes to 50K genotypes was higher when the imputation was performed within breed rather than when pooling all the data, despite the fact that the pooled reference set was much larger. For Border Leicesters, Poll Dorsets and White Suffolks, 5K sparse genotypes were sufficient to impute 50K with 80% accuracy. For Merinos, the accuracy of imputing 50K from 5K was lower at 71%, despite a large number of animals with full genotypes (2215) being used as a reference. For all breeds, the relationship of

  13. Highly Fluorinated Ir(III)-2,2':6',2″-Terpyridine-Phenylpyridine-X Complexes via Selective C-F Activation: Robust Photocatalysts for Solar Fuel Generation and Photoredox Catalysis.

    PubMed

    Porras, Jonathan A; Mills, Isaac N; Transue, Wesley J; Bernhard, Stefan

    2016-08-03

    A series of fluorinated Ir(III)-terpyridine-phenylpyridine-X (X = anionic monodentate ligand) complexes were synthesized by selective C-F activation, whereby perfluorinated phenylpyridines were readily complexed. The combination of fluorinated phenylpyridine ligands with an electron-rich tri-tert-butyl terpyridine ligand generates a "push-pull" force on the electrons upon excitation, imparting significant enhancements to the stability, electrochemical, and photophysical properties of the complexes. Application of the complexes as photosensitizers for photocatalytic generation of hydrogen from water and as redox photocatalysts for decarboxylative fluorination of several carboxylic acids showcases the performance of the complexes in highly coordinating solvents, in some cases exceeding that of the leading photosensitizers. Changes in the photophysical properties and the nature of the excited states are observed as the compounds increase in fluorination as well as upon exchange of the ancillary chloride ligand to a cyanide. These changes in the excited states have been corroborated using density functional theory modeling.

  14. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    PubMed

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2016-08-29

    Objective: The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods: The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results: In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed, and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions: Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data is essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology. Robust clinical costing data has the

  15. Robust electrocardiogram (ECG) beat classification using discrete wavelet transform.

    PubMed

    Minhas, Fayyaz-ul-Amir Afsar; Arif, Muhammad

    2008-05-01

    This paper presents a robust technique for the classification of six types of heartbeats through an electrocardiogram (ECG). Features extracted from the QRS complex of the ECG using a wavelet transform, along with the instantaneous RR-interval, are used for beat classification. The wavelet transform utilized for feature extraction can also be employed for QRS delineation, reducing overall system complexity since no separate feature extraction stage is required in a practical implementation. Only 11 features are used for beat classification, with a classification accuracy of approximately 99.5% using a KNN classifier. Another main advantage of this method is its robustness to noise, which is illustrated through experimental results. Furthermore, principal component analysis (PCA) has been used for feature reduction, which reduces the number of features from 11 to 6 while retaining high beat classification accuracy. Due to its reduced computational complexity (with six features, approximately 4 ms per beat), simple classifier, and noise robustness (95% accuracy at a 10 dB signal-to-noise ratio), this method offers substantial advantages over previous techniques for implementation in a practical ECG analyzer.
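
    A hedged sketch of such a feature pipeline using PyWavelets and scikit-learn; the wavelet family, decomposition level, summary statistics, and synthetic stand-in data are illustrative assumptions, not the paper's exact recipe.

    ```python
    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def beat_features(qrs_window, rr_interval):
        """Wavelet-based features of a QRS window plus the instantaneous
        RR-interval, loosely following the abstract's description."""
        coeffs = pywt.wavedec(qrs_window, "db4", level=2)
        stats = [f(c) for c in coeffs for f in (np.mean, np.std)]
        return np.array(stats + [rr_interval])

    # Synthetic stand-in data: random windows and labels for illustration.
    rng = np.random.default_rng(1)
    X = np.array([beat_features(rng.normal(size=64), rr)
                  for rr in rng.uniform(0.6, 1.0, 100)])
    y = rng.integers(0, 2, 100)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    ```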

  16. Accuracy of distance measurements in biplane angiography

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Oishi, Satoru; Koster, David; Schroth, Gerhard

    1997-05-01

    Distance measurements of the vascular system of the brain can be derived from biplanar digital subtraction angiography (2p-DSA). The measurements are used for planning minimally invasive surgical procedures. Our 90-degree fixed-angle G-ring angiography system has the potential to acquire pairs of such images with high geometric accuracy. The sizes of vessels and aneurysms are estimated with a fast and accurate extraction method in order to select an appropriate surgical strategy. Distance computation from 2p-DSA is carried out in three steps. First, the boundary of the structure to be measured is detected based on zero-crossings and closeness to user-specified end points. Subsequently, the 3D location of the center of the structure is computed from the centers of gravity of its two projections. This location is used to reverse the magnification caused by the cone-shaped projection of the X-rays. Since exact measurements of possibly very small structures are crucial to the usefulness in surgical planning, we identified mechanical and computational influences on the geometry which may have an impact on measurement accuracy. A study with phantoms is presented that distinguishes between the different effects and enables computation of an optimal overall accuracy. Comparing this optimum with results of distance measurements on phantoms whose exact size and shape are known, we found that the measurement error for structures of size 20 mm was less than 0.05 mm on average and 0.50 mm at maximum. The maximum achievable accuracy of 0.15 mm was in most cases exceeded by less than 0.15 mm. This accuracy surpasses by far the requirements of the surgical application mentioned above. The mechanical accuracy of the fixed-angle biplanar system meets the requirements for computing a 3D reconstruction of the small vessels of the brain. It also indicates that simple measurements will be possible on less accurate systems.
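
    The magnification reversal described above follows from cone-beam geometry: an object at source-to-object distance (SOD) projects onto the detector magnified by SID/SOD, where SID is the source-to-detector distance. A minimal sketch with hypothetical distances:

    ```python
    def demagnify(measured_mm, source_to_object_mm, source_to_detector_mm):
        """Reverse the cone-beam magnification of an X-ray projection:
        an object at depth SOD appears magnified by SID / SOD."""
        magnification = source_to_detector_mm / source_to_object_mm
        return measured_mm / magnification

    # A 26.7 mm projected vessel with SID = 1200 mm and SOD = 900 mm
    print(demagnify(26.7, 900.0, 1200.0))  # ~20.0 mm true size
    ```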

  17. Three-dimensional robust diving guidance for hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Zhu, Jianwen; Liu, Luhua; Tang, Guojian; Bao, Weimin

    2016-01-01

    A novel three-dimensional robust guidance law based on H∞ filter and H∞ control is proposed to meet the constraints of the impact accuracy and the flight direction under process disturbances for the dive phase of hypersonic vehicle. Complete three-dimensional coupling relative motion equations are established and decoupled into linear ones by feedback linearization to simplify the design process of the further guidance law. Based on the linearized equations, H∞ filter is introduced to eliminate the measurement noises of line-of-sight angles and estimate the angular rates. Furthermore, H∞ robust control is well employed to design guidance law, and the filtered information is used to generate guidance commands to meet the guidance goal accurately and robustly. The simulation results of CAV-H indicate that the proposed three-dimensional equations can describe the coupling character more clearly than the traditional decoupling guidance, and the proposed guidance strategy can guide the vehicle to satisfy different multiple constraints with high accuracy and robustness.

  18. Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis

    PubMed Central

    Dijkstra, Annereinou R.; Alkema, Wynand; Starrenburg, Marjo J. C.; van Hijum, Sacha A. F. T.; Bron, Peter A.

    2016-01-01

    E and genes encoding transport proteins. The transcript levels of these genes can function as indicators of robustness and could aid in selection of fermentation parameters, potentially resulting in more optimal robustness during spray drying. PMID:27973578

  19. Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis.

    PubMed

    Dijkstra, Annereinou R; Alkema, Wynand; Starrenburg, Marjo J C; Hugenholtz, Jeroen; van Hijum, Sacha A F T; Bron, Peter A

    2016-01-01

    E and genes encoding transport proteins. The transcript levels of these genes can function as indicators of robustness and could aid in selection of fermentation parameters, potentially resulting in more optimal robustness during spray drying.

  20. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
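
    The chance-correction idea behind IQS can be seen from Cohen's kappa itself: for a rare variant, naive concordance is high even when the minor allele is imputed poorly. A self-contained sketch with toy genotypes coded as 0/1/2 allele counts (illustrative data, not IQS's full genotype-probability form):

    ```python
    import numpy as np

    def cohens_kappa(true_geno, imputed_geno, n_classes=3):
        """Chance-corrected agreement between true and imputed genotypes;
        the IQS statistic is built on this idea."""
        cm = np.zeros((n_classes, n_classes))
        for t, i in zip(true_geno, imputed_geno):
            cm[t, i] += 1
        n = cm.sum()
        po = np.trace(cm) / n                    # observed agreement
        pe = (cm.sum(0) @ cm.sum(1)) / n ** 2    # agreement expected by chance
        return (po - pe) / (1 - pe)

    true_g    = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]
    imputed_g = [0, 0, 0, 0, 0, 0, 0, 0, 0, 2]
    # Concordance is 0.9, but kappa is only ~0.63 for this rare variant.
    print(cohens_kappa(true_g, imputed_g))
    ```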

  1. High Accuracy Time Transfer Synchronization

    DTIC Science & Technology

    1994-12-01

    High Accuracy Time Transfer Synchronization. Paul Wheeler, Paul Koppang, David Chalmers, Angela Davis, Anthony Kubik and William Powell, U.S. Naval Observatory, Washington, DC 20392. Abstract: In July 1994, the US Naval Observatory (USNO) Time Service System Engineering Division conducted a... field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance

  2. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    Process Analysis Via Accuracy Control, February 1982. The National Shipbuilding Research Program, U.S. Department of Transportation, Maritime... Examples are contained in Appendix C, including examples of how "A/C" process analysis leads to design improvement and how a change in sequence can

  3. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
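
    As a worked example of the statistical performance measure mentioned above, here is the nominal-the-best form of Taguchi's signal-to-noise ratio for one design point replicated across noise-factor settings. This is one of several standard SNR forms, and the data are hypothetical.

    ```python
    import numpy as np

    def snr_nominal_the_best(y):
        """Taguchi signal-to-noise ratio (nominal-the-best form):
        10 * log10(mean^2 / variance); larger means more robust to noise."""
        y = np.asarray(y, dtype=float)
        return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

    # Responses of one design point replicated across noise-factor settings.
    print(snr_nominal_the_best([9.8, 10.1, 10.0, 9.9]))  # ~37.7 dB
    ```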

  4. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
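
    For readers unfamiliar with the metrics, overall and user's accuracies fall directly out of a confusion matrix. A minimal sketch with made-up counts, assuming rows are the mapped class and columns the reference class:

    ```python
    import numpy as np

    def overall_and_users_accuracy(cm):
        """Overall accuracy and per-class user's accuracy from a confusion
        matrix (rows = mapped class, columns = reference class)."""
        cm = np.asarray(cm, dtype=float)
        overall = np.trace(cm) / cm.sum()
        users = np.diag(cm) / cm.sum(axis=1)   # correct / total mapped per class
        return overall, users

    cm = [[50, 3],    # e.g. mapped water: 50 correct, 3 actually urban
          [7, 40]]    # mapped urban: 7 actually water, 40 correct
    print(overall_and_users_accuracy(cm))  # (0.90, [0.943, 0.851])
    ```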

  5. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589

  6. A robust linear feature-based procedure for automated registration of point clouds.

    PubMed

    Poreba, Martyna; Goulette, François

    2015-01-14

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%.
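
    RLMR's refinement operates on line correspondences; as a simpler point-based analogue of the same rigid-fit step, the classic Kabsch/SVD solution for the least-squares rotation and translation is sketched below. This is a stand-in illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rigid transform (Kabsch/SVD) mapping points P
        onto Q once correspondences are fixed: Q ~ R @ P + t."""
        cp, cq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # avoid a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        return R, t

    rng = np.random.default_rng(0)
    P = rng.normal(size=(10, 3))
    th = 0.3
    Rz = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
    Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
    R, t = rigid_fit(P, Q)             # recovers Rz and the translation
    ```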

  7. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images.

    PubMed

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-07-05

    Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, since blurred images severely reduce estimation accuracy. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework that is robust to blur. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images. The experimental results show that our approach achieves superior performance compared to state-of-the-art methods on blurred images, while adding little computational cost to the original VO algorithms.
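
    A crude blur cue in the spirit of SIGD — blurred frames concentrate their gradient magnitudes near zero — can be sketched as follows. The thresholds are hypothetical, and the actual SIGD measure and classifier in the paper are more elaborate.

    ```python
    import numpy as np

    def small_gradient_ratio(image, tau=4.0):
        """Fraction of pixels with gradient magnitude below tau; a high
        ratio suggests blur (tau assumes 0-255 intensity range)."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        return (mag < tau).mean()

    def is_blurred(image, threshold=0.9):
        """Flag a frame as blurred when most gradients are small."""
        return small_gradient_ratio(image) > threshold
    ```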

  8. Spaceborne SAR data for global urban mapping at 30 m resolution using a robust urban extractor

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Jacob, Alexander; Gamba, Paolo

    2015-05-01

    With more than half of the world population now living in cities and 1.4 billion more people expected to move into cities by 2030, urban areas pose significant challenges to the local, regional and global environment. Timely and accurate information on the spatial distributions and temporal changes of urban areas is therefore needed to support sustainable development and environmental change research. The objective of this research is to evaluate spaceborne SAR data for improved global urban mapping using a robust processing chain, the KTH-Pavia Urban Extractor. The proposed processing chain includes urban extraction based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, an existing method, and several improvements, i.e., SAR data preprocessing, enhancement, and post-processing. ENVISAT Advanced Synthetic Aperture Radar (ASAR) C-VV data at 30 m resolution were selected over 10 global cities and a rural area from six continents to demonstrate the robustness of the improved method. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from ENVISAT ASAR data, and that built-up areas can be mapped at 30 m resolution with very good accuracy using only one or two SAR images. These findings indicate that operational global urban mapping is possible with spaceborne SAR data, especially with the launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.
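
    A hedged sketch of the GLCM texture step using scikit-image; the quantization, offsets, and chosen properties here are illustrative assumptions, not the KTH-Pavia settings.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture(patch):
        """GLCM contrast and homogeneity for a quantized backscatter
        patch; high-contrast texture is a common built-up-area cue."""
        q = np.uint8(np.clip(patch, 0, 255) // 16)     # 16 grey levels
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=16, symmetric=True, normed=True)
        return (graycoprops(glcm, "contrast").mean(),
                graycoprops(glcm, "homogeneity").mean())

    rng = np.random.default_rng(0)
    print(glcm_texture(rng.integers(0, 256, (64, 64))))
    ```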

  9. Demographic corrections appear to compromise classification accuracy for severely skewed cognitive tests.

    PubMed

    O'Connell, Megan E; Tuokko, Holly; Kadlec, Helena

    2011-04-01

    Demographic corrections for cognitive tests should improve classification accuracy by reducing age or education biases, but empirical support has been equivocal. Using a simulation procedure, we show that moderate or extreme skewness in cognitive tests compromises the classification accuracy of demographic corrections, findings that appear to be replicated in clinical data for the few neuropsychological test scores with an extreme degree of skew. For most neuropsychological tests, the dementia classification accuracy of raw and demographically corrected scores was equivalent. These findings suggest that the dementia classification accuracy of demographic corrections is robust to slight degrees of skew (i.e., skewness < 1.5).

  10. Robust Hitting with Dynamics Shaping

    NASA Astrophysics Data System (ADS)

    Yashima, Masahito; Yamawaki, Tasuku

    The present paper proposes trajectory planning based on "dynamics shaping" for a redundant robotic arm to hit a target robustly in the desired direction; the concept is to shape the robot dynamics appropriately by changing its posture in order to achieve robust motion. The positional error of the end-effector caused by unknown disturbances converges near the singular vector corresponding to the maximum singular value of the output controllability matrix of the robotic arm. Therefore, if we can control the direction of this singular vector by applying dynamics shaping, we can control the direction of the positional error of the end-effector caused by unknown disturbances. We propose a novel trajectory planning method based on dynamics shaping and verify numerically and experimentally that the robotic arm can robustly hit the target in the desired direction with a simple open-loop control system even when disturbances are applied.

  11. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators

    PubMed Central

    Kim, Pil-Jong; Kim, Hong-Gee

    2015-01-01

    Objectives: The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to establish the ratio as a contributing factor to the accuracy of electronic apex locators (EALs). Materials and Methods: The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All acquired data were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes in impedance ratios across frequencies were evaluated for a variety of file positions. Results: Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones differed significantly in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Conclusions: Using the ratio method, the APC was located within a linear interval. Therefore, taking the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC. PMID:25984472

  12. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the Small Satellite Technology Initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. The position accuracy is expected to improve to 2 m if corrections are provided by the GPS wide area augmentation system.

  13. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region, and an artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted by the size difference ratio between manual and automated segmentation, and a testing dataset of 250 malignant and 250 benign mass regions with larger size difference ratios was selected. Using the area under the ROC curve (AZ value) as the performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme; CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combines topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm, which iteratively searches for the optimal region boundaries. A CAD likelihood score of the grown region being a true-positive mass is computed in each iteration, and the region growth is automatically terminated once the first maximum of the CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between automatically and manually segmented areas to within ±15% for all testing regions, and the testing AZ value increases from 0.63 to 0.90. The results indicate that CAD performance depends heavily on the accuracy of mass segmentation; to achieve robust CAD performance, reducing lesion segmentation error is important.
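
    The stopping rule — grow until the CAD likelihood score reaches its first maximum — can be sketched generically. Here `evolve` and `cad_score` are hypothetical callables standing in for the active-contour update and the ANN classifier; the toy demo uses a scalar "radius" in place of a contour.

    ```python
    def grow_until_first_max(contour, evolve, cad_score):
        """Iterative region growth terminated at the first maximum of the
        CAD likelihood score, mirroring the rule in the abstract."""
        best = cad_score(contour)
        while True:
            nxt = evolve(contour)
            s = cad_score(nxt)
            if s <= best:
                return contour        # score stopped improving: first max
            contour, best = nxt, s

    # Toy stand-in: the 'contour' is a region radius, score peaks at 7.
    print(grow_until_first_max(1, lambda r: r + 1, lambda r: -(r - 7) ** 2))
    ```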

  14. Increase in error threshold for quasispecies by heterogeneous replication accuracy

    NASA Astrophysics Data System (ADS)

    Aoki, Kazuhiro; Furusawa, Mitsuru

    2003-09-01

    In this paper we investigate the error threshold for quasispecies with heterogeneous replication accuracy. We show that the coexistence of error-free and error-prone polymerases can greatly increase the error threshold without a catastrophic loss of genetic information. We also show that the error threshold is influenced by the number of replicores. Our research suggests that quasispecies with heterogeneous replication accuracy can reduce the genetic cost of selective evolution while still producing a variety of mutants.
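
    For orientation, the classic single-polymerase error threshold can be computed directly, and a toy two-polymerase extension illustrates how an error-free fraction raises the threshold. The mixed-fidelity model below is an illustrative assumption consistent with the abstract's claim, not the paper's exact model.

    ```python
    import numpy as np

    def max_error_rate(sigma, L):
        """Classic Eigen threshold: the master sequence survives while the
        per-site error rate u satisfies (1 - u)^L * sigma > 1, i.e.
        approximately u < ln(sigma) / L."""
        return np.log(sigma) / L

    def max_error_rate_mixed(sigma, L, f_error_free):
        """Toy extension (assumption): a fraction f of copies are made by
        an error-free polymerase, so the effective copy fidelity is
        Q = f + (1 - f)(1 - u)^L; solve sigma * Q = 1 for u."""
        q_needed = (1.0 / sigma - f_error_free) / (1.0 - f_error_free)
        if q_needed <= 0:
            return 1.0    # the error-free fraction alone sustains the master
        return 1.0 - q_needed ** (1.0 / L)

    print(max_error_rate(10, 100))               # ~0.023
    print(max_error_rate_mixed(10, 100, 0.05))   # ~0.029: threshold raised
    ```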

  15. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has received great research advances in recent years, and numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  16. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  17. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes such as clean streams in a river network that may become contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting in a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness is only accounting for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building-up the IN. This approach is motivated by concrete applied problems, since, for example, if we study the dynamics of contamination in river systems, it is necessary to know both the connectivity of the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
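
    A minimal sketch of the dual (Active/Idle) bookkeeping using networkx, tracking only the largest connected component of each side; this is one simple proxy for the efficiency measures discussed above, with hypothetical function names.

    ```python
    import networkx as nx

    def lcc(H):
        """Size of the largest connected component (0 for an empty graph)."""
        return max((len(c) for c in nx.connected_components(H)), default=0)

    def dual_robustness(G, attack_order):
        """Track the largest component of both the Active Network
        (surviving nodes) and the Idle Network (removed nodes)."""
        removed, history = set(), []
        for v in attack_order:
            removed.add(v)
            active = G.subgraph(n for n in G if n not in removed)
            idle = G.subgraph(removed)
            history.append((lcc(active), lcc(idle)))
        return history

    G = nx.path_graph(6)                   # 0-1-2-3-4-5
    print(dual_robustness(G, [2, 3, 4]))   # [(3, 1), (2, 2), (2, 3)]
    ```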

  18. Starfish: Robust spectroscopic inference tools

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-05-01

    Starfish is a set of tools used for spectroscopic inference. It robustly determines stellar parameters using high resolution spectral models and uses Markov Chain Monte Carlo (MCMC) to explore the full posterior probability distribution of the stellar parameters. Additional potential applications include other types of spectra, such as unresolved stellar clusters or supernovae spectra.

  19. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  20. Pedometer accuracy in slow walking older adults

    PubMed Central

    Martin, Jessica B.; Krč, Katarina M.; Mitchell, Emily A.; Eng, Janice J.; Noble, Jeremy W.

    2013-01-01

    The purpose of this study was to determine pedometer accuracy during slow overground walking in older adults (mean age = 63.6 years). A total of 18 participants (6 males, 12 females) wore 5 different brands of pedometers over 3 pre-set cadences that elicited walking speeds between 0.3 and 0.9 m/s, and one self-selected cadence, over 80 meters of indoor track. Pedometer accuracy decreased with slower walking speeds, with mean percent errors across all devices combined of 56%, 40%, 19% and 9% at cadences of 50, 66, and 80 steps/min and the self-selected cadence, respectively. Percent error ranged from 45.3% for the Omron HJ105 to 66.9% for the Yamax Digiwalker 200. Due to the high level of error at the slowest cadences for all 5 devices, the use of pedometers to monitor step counts in healthy older adults with slower gait speeds is problematic. Further research is required to develop pedometer mechanisms that accurately measure steps at slower walking speeds. PMID:24795762
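
    The error figures quoted above are plain relative errors against manually counted steps, e.g.:

    ```python
    def percent_error(device_steps, counted_steps):
        """Pedometer error as a percentage of manually counted steps."""
        return abs(device_steps - counted_steps) / counted_steps * 100

    # A device recording 44 of 100 observed steps (hypothetical counts)
    print(percent_error(44, 100))   # 56.0, like the slowest-cadence mean
    ```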

  1. Camera Calibration Accuracy at Different Uav Flying Heights

    NASA Astrophysics Data System (ADS)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, and low-cost digital cameras are commonly used in UAV mapping. Camera calibration is therefore considered important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and in the field, and a UAV image mapping accuracy assessment using calibration parameters from different camera distances. The camera distances used for the calibration image acquisition and mapping accuracy assessment were 1.5 metres in the laboratory and 15 and 25 metres in the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for camera calibration and for checking the accuracy of the measurements at different camera distances. The bundle adjustment concept was applied in Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as it yielded the best accuracy in both laboratory and outdoor mapping. In conclusion, camera calibration should be carried out at several camera distances to achieve better mapping accuracy, and the best camera parameters should be selected for highly accurate UAV image mapping.
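
    The study performed calibration by bundle adjustment in Australis; as a generic stand-in, the sketch below calibrates from chessboard views with OpenCV and reports the reprojection RMS, a usual proxy for calibration accuracy. File names and pattern size are placeholders.

    ```python
    import cv2
    import numpy as np

    pattern = (9, 6)                       # inner corners of the target
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in ["view1.jpg", "view2.jpg"]:   # placeholder image names
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("reprojection RMS (px):", rms)   # lower = better calibration
    ```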

  2. Astronomic Position Accuracy Capability Study.

    DTIC Science & Technology

    1979-10-01

    portion of F. E. Warren AFB, Wyoming. The three points were called THEODORE ECC, TRACY, and JIM and consisted of metal tribrachs plastered to cinder...sets were computed as a deviation from the standard. Accuracy figures were determined from these residuals. Homogeneity of variances was tested using

  3. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  4. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  5. Inventory accuracy in 60 days!

    PubMed

    Miller, G J

    1997-08-01

    Despite great advances in manufacturing technology and management science, thousands of organizations still don't have a handle on basic inventory accuracy. Many companies don't even measure it properly, or at all, and lack corrective action programs to improve it. This article offers an approach that has proven successful a number of times when companies were quite serious about making improvements. Not only can it be implemented, it can likely be implemented within 60 days per area if properly managed. The hardest part is selling people on the need to improve and then keeping them motivated. The net cost of such a program? Probably less than nothing, since the benefits gained usually far exceed the costs. Improved inventory accuracy can aid in enhancing customer service, determining purchasing and manufacturing priorities, reducing operating costs, and increasing the accuracy of financial records. This article also addresses the gap in contemporary literature regarding accuracy program features for repetitive, JIT, cellular, and process- and project-oriented environments.

  6. Are genetically robust regulatory networks dynamically different from random ones?

    NASA Astrophysics Data System (ADS)

    Sevim, Volkan; Rikvold, Per Arne

    We study a genetic regulatory network model developed to demonstrate that genetic robustness can evolve through stabilizing selection for optimal phenotypes. We report preliminary results on whether such selection could result in a reorganization of the state space of the system. For the chosen parameters, the evolution moves the system slightly toward the more ordered part of the phase diagram. We also find that strong memory effects cause the Derrida annealed approximation to give erroneous predictions about the model's phase diagram.
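
    The kind of damage-spreading measurement behind a Derrida plot can be sketched for a Wagner-style threshold network; the network size, interaction matrix and trial counts below are arbitrary choices, not the parameters of the study:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 10
      W = rng.normal(size=(N, N))          # random gene-interaction matrix

      def step(x):
          return np.sign(W @ x)            # threshold dynamics, states in {-1, +1}

      def one_step_damage(d0, trials=2000):
          # mean Hamming distance after one update, starting d0 genes apart
          total = 0.0
          for _ in range(trials):
              x = rng.choice([-1, 1], N)
              y = x.copy()
              y[rng.choice(N, size=d0, replace=False)] *= -1
              total += np.mean(step(x) != step(y))
          return total / trials

      derrida_curve = [one_step_damage(d) for d in range(1, N + 1)]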

  7. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking of the performance with respect to the ground truth, real data make it possible to assess the robustness of the methodology in real contexts and to determine the suitability of using synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic data. Results show that the proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  8. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
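
    The quoted angular accuracies convert to distances at geosynchronous altitude by simple arc length, s = r * theta; a quick check of the abstract's figures:

      import math

      ARCSEC = math.pi / (180 * 3600)      # radians per arc second
      r_geo = 35_786_000                   # geosynchronous altitude, metres

      for arcsec in (0.5, 0.05, 0.03):
          print(arcsec, "arcsec ->", round(arcsec * ARCSEC * r_geo, 1), "m")
      # 0.5 arcsec -> ~87 m and 0.05 arcsec -> ~8.7 m, consistent with the
      # quoted "100 m" and "10 m" at synchronous altitude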

  9. Robust expertise effects in right FFA

    PubMed Central

    McGugin, Rankin Williams; Newton, Allen T; Gore, John C; Gauthier, Isabel

    2015-01-01

    The fusiform face area (FFA) is one of several areas in occipito-temporal cortex whose activity is correlated with perceptual expertise for objects. Here, we investigate the robustness of expertise effects in FFA and other areas to a strong task manipulation that increases both perceptual and attentional demands. With high-resolution fMRI at 7 Tesla, we measured responses to images of cars, faces and a category globally visually similar to cars (sofas) in 26 subjects who varied in expertise with cars, in (a) a low load 1-back task with a single object category and (b) a high load task in which objects from two categories rapidly alternated and attention was required to both categories. The low load condition revealed several areas more active as a function of expertise, including both posterior and anterior portions of FFA bilaterally (FFA1/FFA2, respectively). Under high load, fewer areas were positively correlated with expertise and several areas were even negatively correlated, but the expertise effect in face-selective voxels in the anterior portion of FFA (FFA2) remained robust. Finally, we found that behavioral car expertise also predicted increased responses to sofa images but no behavioral advantages in sofa discrimination, suggesting that global shape similarity to a category of expertise is enough to elicit a response in FFA and other areas sensitive to experience, even when the category itself is not of special interest. The robustness of expertise effects in right FFA2 and the expertise effects driven by visual similarity both argue against attention being the sole determinant of expertise effects in extrastriate areas. PMID:25192631

  10. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC), to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to protect against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic the empirically observed error probabilities for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
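
    As an illustration of the idea (not the paper's algorithm), the sketch below replaces the PARAFAC decomposition with simpler per-mode SVD projections of key-selected sub-cubes and quantizes the leading singular vectors to bits; cube sizes, rank and counts are arbitrary:

      import numpy as np

      def video_hash(video, key, n_cubes=8, cube=(16, 32, 32), rank=4):
          # video: order-3 tensor (frames x height x width), grayscale
          rng = np.random.default_rng(key)          # secret-key-driven selection
          bits = []
          for _ in range(n_cubes):
              t = rng.integers(0, video.shape[0] - cube[0])
              y = rng.integers(0, video.shape[1] - cube[1])
              x = rng.integers(0, video.shape[2] - cube[2])
              sub = video[t:t+cube[0], y:y+cube[1], x:x+cube[2]]
              for mode in range(3):                 # multilinear projections
                  unfold = np.moveaxis(sub, mode, 0).reshape(sub.shape[mode], -1)
                  u = np.linalg.svd(unfold, full_matrices=False)[0][:, :rank]
                  u = u * np.sign(u[:1, :])         # fix SVD sign ambiguity
                  bits.append((u.ravel() > 0).astype(np.uint8))
          return np.concatenate(bits)               # binary fingerprint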

  12. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    SciTech Connect

    Carrington, David Bradley; Waters, Jiajia

    2016-10-25

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and the robustness of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or may alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.

  13. High accuracy in situ radiometric mapping.

    PubMed

    Tyler, Andrew N

    2004-01-01

    In situ and airborne gamma ray spectrometry have been shown to provide rapid and spatially representative estimates of environmental radioactivity across a range of landscapes. However, one of the principal limitations of this technique has been the influence of changes in the vertical distribution of the source (e.g. 137Cs) on the observed photon fluence, resulting in a significant reduction in the accuracy of the in situ activity measurement. A flexible approach for single gamma photon emitting radionuclides is presented, which relies on the quantification of forward scattering (the valley region between the full energy peak and the Compton edge) within the gamma ray spectrum to compensate for changes in the 137Cs vertical activity distribution. This novel in situ method lends itself to the mapping of activity concentrations in environments that exhibit systematic changes in the vertical activity distribution. The robustness of this approach has been demonstrated in a salt marsh environment on the Solway coast, SW Scotland, with both a 7.6 cm x 7.6 cm NaI(Tl) detector and a 35% n-type HPGe detector. Application to ploughed field environments has also been demonstrated using an HPGe detector, including the estimation of field moist bulk density and soil erosion measurement. Ongoing research work is also outlined.
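
    The forward-scatter correction rests on a simple spectral ratio; a sketch with hypothetical channel windows for a 137Cs spectrum (the real windows depend on the detector and its energy calibration):

      import numpy as np

      def peak_to_valley(spectrum, peak=(640, 680), valley=(480, 600)):
          # windows bracket the 662 keV full-energy peak and the scatter valley
          p = np.asarray(spectrum)[slice(*peak)].sum()
          v = np.asarray(spectrum)[slice(*valley)].sum()
          return p / v      # falls as the source lies deeper in the soil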

  14. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  15. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  16. Robust on-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi

    1993-01-01

    The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as orbital transfer vehicles, orbital maneuvering vehicles, and the space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is applied to a three-mass spring model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for feedback sliding mode controller design with bounded control inputs, and robustness to parameter variations is investigated.

  17. Towards robust compressed-domain video watermarking for H.264

    NASA Astrophysics Data System (ADS)

    Noorkami, Maneli; Mersereau, Russell M.

    2006-02-01

    As H.264 digital video becomes more prevalent, the industry needs copyright protection and authentication methods that are appropriate for this standard. The goal of this paper is to propose a robust watermarking algorithm for H.264. To achieve this goal, we employ a human visual model adapted for a 4x4 DCT block to obtain a larger payload and greater robustness while minimizing visual distortion. We use a key-dependent algorithm to select a subset of the coefficients with visual watermarking capacity for watermark embedding to obtain robustness to malicious attacks. Furthermore, we spread the watermark over frequencies and within blocks to avoid error pooling. The error pooling effect, introduced by Watson, has not been considered in previous perceptual watermarking algorithms. Our simulation results show that we can increase the payload and robustness without a noticeable change in perceptual quality by reducing this effect. We embed the watermark in the residuals to avoid decompressing the video and to reduce the complexity of the watermarking algorithm. However, we extract the watermark from the decoded video sequence to make the algorithm robust to intraprediction mode changes. Our simulation results show that we obtain robustness to filtering, 50% cropping, and requantization attacks.
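
    The key-dependent selection and embedding step can be illustrated with a quantization index modulation (QIM) rule, which is a stand-in for the paper's perceptually shaped embedding; the step size and inputs are hypothetical:

      import numpy as np

      def embed_bits(coeffs, bits, key, delta=8.0):
          # coeffs: flattened residual DCT coefficients (hypothetical input)
          rng = np.random.default_rng(key)           # key-dependent positions
          idx = rng.choice(coeffs.size, size=len(bits), replace=False)
          marked = coeffs.astype(float).copy()
          for i, b in zip(idx, bits):
              # QIM: snap to the even (b=0) or odd (b=1) quantizer lattice
              q = np.round(marked[i] / delta - b / 2.0)
              marked[i] = (q + b / 2.0) * delta
          return marked

      def extract_bits(coeffs, n_bits, key, delta=8.0):
          rng = np.random.default_rng(key)           # same key, same positions
          idx = rng.choice(coeffs.size, size=n_bits, replace=False)
          return np.array([int(round(coeffs[i] / (delta / 2.0))) & 1 for i in idx])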

  18. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell's Hybrid method presently used in HVACSIM+.
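
    Both solver families are exposed in SciPy, where method='hybr' wraps MINPACK's Powell hybrid routine and method='lm' wraps its Levenberg-Marquardt routine, so the comparison can be sketched on a toy residual system (the equations below are placeholders, not HVACSIM+ component models):

      import numpy as np
      from scipy.optimize import root

      def residuals(x):
          # toy stand-in for a component network's nonlinear balance equations
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.exp(x[0]) + x[1] - 1.0])

      x0 = np.array([1.0, 1.0])
      hybr = root(residuals, x0, method='hybr')   # Powell's hybrid (MINPACK hybrd)
      lm = root(residuals, x0, method='lm')       # Levenberg-Marquardt (MINPACK)
      print(hybr.success, hybr.x)
      print(lm.success, lm.x)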

  19. Origin of robustness in generating drug-resistant malaria parasites.

    PubMed

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W; Kamchonwongpaisan, Sumalee; Fidock, David A; Kirkman, Laura A; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-07-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs.

  20. Investigation of Adaptive Robust Kalman Filtering Algorithms for GPS/DR Navigation System Filters

    NASA Astrophysics Data System (ADS)

    Elzoghby, Mostafa; Arif, Usman; Li, Fu; Zhi Yu, Xi

    2017-03-01

    The conventional Kalman filter (KF) algorithm is suitable when the noise covariances for the states as well as the measurements are readily known, but in most cases they are unknown. Similarly, robustness is required instead of smoothing when states change abruptly. An adaptive as well as robust Kalman filter is therefore vital for many real-time applications, like target tracking and navigating aerial vehicles. A number of adaptive and robust Kalman filtering methods are available in the literature. In order to investigate the performance of some of these methods, we selected three different Kalman filters, namely the Sage-Husa KF, the Modified Adaptive Robust KF and the Adaptively Robust KF, which are easy to simulate as well as to implement for real-time applications. These methods are simulated for a land-based vehicle and the results are compared with the conventional Kalman filter. Results show that the Modified Adaptive Robust KF is the best among the selected methods and can be used for navigation applications.
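
    A minimal scalar sketch of the Sage-Husa idea, with the measurement-noise variance re-estimated from the innovations through a fading-memory weight; the random-walk model, forgetting factor and initial values are illustrative assumptions:

      import numpy as np

      def sage_husa_kf(zs, q=1e-4, r0=1.0, b=0.97):
          # scalar random-walk state (F = H = 1), adaptive measurement noise R
          x, p, r = zs[0], 1.0, r0
          estimates = [x]
          for k, z in enumerate(zs[1:]):
              p_pred = p + q                        # time update
              e = z - x                             # innovation
              d = (1 - b) / (1 - b ** (k + 1))      # Sage-Husa fading weight
              r = max((1 - d) * r + d * (e * e - p_pred), 1e-8)  # keep R positive
              gain = p_pred / (p_pred + r)
              x = x + gain * e
              p = (1 - gain) * p_pred
              estimates.append(x)
          return np.array(estimates)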

  1. Robust tumor morphometry in multispectral fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2009-02-01

    Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
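
    The MST part of the feature set is easy to reproduce with SciPy once nuclei centroids are available; mst_features below is a hypothetical helper operating on made-up coordinates, not the paper's full pipeline:

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.sparse.csgraph import minimum_spanning_tree

      def mst_features(centroids):
          # centroids: (n, 2) array of epithelial-nuclei positions
          dist = squareform(pdist(centroids))
          mst = minimum_spanning_tree(dist)       # sparse matrix of MST edges
          edges = mst.data
          return {"mean_edge": edges.mean(),
                  "std_edge": edges.std(),
                  "max_edge": edges.max()}

      rng = np.random.default_rng(0)
      print(mst_features(rng.uniform(0, 100, size=(50, 2))))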

  2. Evaluation of selection index: application to the choice of an indirect multitrait selection index for soybean breeding.

    PubMed

    Bouchez, A; Goffinet, B

    1990-02-01

    Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information.

  3. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    release and sale; its distribution is unlimited. ...such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for

  4. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs, algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks, which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
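
    The relationship is easy to probe numerically with NetworkX, for instance on a mesh graph, where the Fiedler value sits well below the true node-connectivity:

      import networkx as nx

      G = nx.grid_2d_graph(6, 6)              # a 6x6 mesh graph
      lam2 = nx.algebraic_connectivity(G)     # second-smallest Laplacian eigenvalue
      kappa = nx.node_connectivity(G)         # exact node-connectivity (here 2)
      print(lam2, kappa)                      # lam2 ~ 0.27 < kappa = 2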

  5. Robust background modelling in DIALS

    PubMed Central

    Parkhurst, James M.; Winter, Graeme; Waterman, David G.; Fuentes-Montero, Luis; Gildea, Richard J.; Murshudov, Garib N.; Evans, Gwyndaf

    2016-01-01

    A method for estimating the background under each reflection during integration that is robust in the presence of pixel outliers is presented. The method uses a generalized linear model approach that is more appropriate for use with Poisson distributed data than traditional approaches to pixel outlier handling in integration programs. The algorithm is most applicable to data with a very low background level where assumptions of a normal distribution are no longer valid as an approximation to the Poisson distribution. It is shown that traditional methods can result in the systematic underestimation of background values. This then results in the reflection intensities being overestimated and gives rise to a change in the overall distribution of reflection intensities in a dataset such that too few weak reflections appear to be recorded. Statistical tests performed during data reduction may mistakenly attribute this to merohedral twinning in the crystal. Application of the robust generalized linear model algorithm is shown to correct for this bias. PMID:27980508
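
    A simplified sketch of the idea, robustly estimating a constant Poisson background by iteratively reweighted estimation with Huber-type weights on Pearson residuals (the tuning constant and data are illustrative; the DIALS implementation is a full generalized linear model):

      import numpy as np

      def robust_poisson_mean(pixels, c=1.345, iters=20):
          # down-weight pixels whose Pearson residual exceeds c
          mu = pixels.mean()
          for _ in range(iters):
              r = (pixels - mu) / np.sqrt(max(mu, 1e-12))   # Pearson residuals
              w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
              mu = np.sum(w * pixels) / np.sum(w)
          return mu

      rng = np.random.default_rng(0)
      bg = rng.poisson(2.0, size=400).astype(float)
      bg[:5] += 50.0                                  # zingers / outlier pixels
      print(bg.mean(), robust_poisson_mean(bg))       # plain mean vs robust mean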

  6. A Robust Streaming Media System

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    Presently, application layer multicast (ALM) protocols are proposed as a substitute for IP multicast and have made extraordinary achievements. Integrated with multi-data-stream modes such as Multiple Description Coding (MDC), ALM becomes more scalable and robust in the highly dynamic Internet environment than single-data-stream approaches. Although MDC provides a flexible data transmission style, synchronizing the different descriptions encoded from one video source has proved difficult because of the differing delays on diverse transmission paths. In this paper, an ALM system called HMDC is proposed to improve the accepted video quality of streaming media: hosts can join separate overlay trees in different layers simultaneously, and the maximum number of synchronized descriptions in the same layer is then worked out to acquire the best video quality. Simulations implemented on an Internet-like topology indicate that HMDC achieves better video quality, lower link stress, higher robustness and comparable latency compared with traditional ALM protocols.

  7. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  8. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

    otherwise. Remark. Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding... Keywords: signal processing, image compression, independent component analysis, sparse

  9. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes, specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness in all but the most catastrophic situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61, and the military's Black Hawk UH-60A, also produced by Sikorsky. The S-61 model was chosen because its details are readily available and because it can be limited to pitch and roll motion, reducing the number of degrees of freedom while retaining the two degrees of freedom that are the minimum requirement for proving the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk was included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  10. Accuracy of implant impression techniques.

    PubMed

    Assif, D; Marshak, B; Schmidt, A

    1996-01-01

    Three impression techniques were assessed for accuracy in a laboratory cast that simulated clinical practice. The first technique used autopolymerizing acrylic resin to splint the transfer copings. The second involved splinting of the transfer copings directly to an acrylic resin custom tray. In the third, only impression material was used to orient the transfer copings. The accuracy of stone casts with implant analogs was measured against a master framework. The fit of the framework on the casts was tested using strain gauges. The technique using acrylic resin to splint transfer copings in the impression material was significantly more accurate than the two other techniques. Stresses observed in the framework are described and discussed with suggestions to improve clinical and laboratory techniques.

  11. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.

  12. A Gossip-based Energy Efficient Protocol for Robust In-network Aggregation in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Fauji, Shantanu

    We consider the problem of energy-efficient and fault-tolerant in-network aggregation for wireless sensor networks (WSNs). In-network aggregation is the process of aggregating data as they are collected from the sensors and forwarded to the base station. This process should be energy efficient due to the limited energy at the sensors, and tolerant to the high failure rates common in sensor networks. Tree-based in-network aggregation protocols, although energy efficient, are not robust to network failures. Multipath routing protocols are robust to failures to a certain degree but are not energy efficient due to the overhead of maintaining multiple paths. We propose a new protocol for in-network aggregation in WSNs that is energy efficient, achieves high lifetime, and is robust to changes in the network topology. Our protocol, the gossip-based protocol for in-network aggregation (GPIA), is based on the spreading of information via gossip. GPIA is not only adaptive to failures and changes in the network topology, but is also energy efficient. The energy efficiency of GPIA comes from all nodes being capable of selective message reception and from detecting convergence of the aggregation early. We experimentally show that GPIA provides significant improvement over competitors such as Ridesharing, Synopsis Diffusion and the pure version of gossip. GPIA shows tenfold, fivefold and twofold improvements in network lifetime over the pure gossip, Synopsis Diffusion and Ridesharing protocols, respectively. Further, GPIA retains gossip's robustness to failures and improves upon the accuracy of Synopsis Diffusion and Ridesharing.
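
    GPIA itself is not specified in the abstract, but the gossip primitive underneath this style of aggregation can be sketched with push-sum averaging, where nodes repeatedly split their (sum, weight) mass and push half to a random peer; the topology and round count below are arbitrary:

      import numpy as np

      def push_sum_average(readings, rounds=50, seed=0):
          rng = np.random.default_rng(seed)
          n = len(readings)
          s = np.array(readings, dtype=float)   # running sums
          w = np.ones(n)                        # running weights
          for _ in range(rounds):
              targets = rng.integers(0, n, size=n)
              s_half, w_half = s / 2.0, w / 2.0
              s, w = s_half.copy(), w_half.copy()
              np.add.at(s, targets, s_half)     # mass pushed to random peers
              np.add.at(w, targets, w_half)
          return s / w                          # each node's estimate of the mean

      print(push_sum_average([20.0, 22.0, 19.5, 21.0, 23.5]))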

  13. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head and the operation of the sensor electronics. Tests on a development model show that the main aim of 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor's sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.

  14. A comparison of various optimization algorithms of protein-ligand docking programs by fitness accuracy.

    PubMed

    Guo, Liyong; Yan, Zhiqiang; Zheng, Xiliang; Hu, Liang; Yang, Yongliang; Wang, Jin

    2014-07-01

    In protein-ligand docking, an optimization algorithm is used to find the best binding pose of a ligand against a protein target. This algorithm plays a vital role in determining the docking accuracy. To evaluate the relative performance of different optimization algorithms and provide guidance for real applications, we performed a comparative study of six efficient optimization algorithms, comprising two evolutionary algorithm (EA)-based optimizers (LGA, DockDE) and four particle swarm optimization (PSO)-based optimizers (SODock, varCPSO, varCPSO-ls, FIPSDock), all implemented in the protein-ligand docking program AutoDock. We unified the objective functions by applying the same scoring function, and built a new fitness accuracy measure as the evaluation criterion that incorporates optimization accuracy, robustness, and efficiency. The varCPSO and varCPSO-ls algorithms show high efficiency with fast convergence speed. However, their accuracy is not optimal, as they cannot reach very low energies. SODock has the highest accuracy and robustness. In addition, SODock shows good performance in efficiency when optimizing drug-like ligands with fewer than ten rotatable bonds. FIPSDock shows excellent robustness and is close to SODock in accuracy and efficiency. In general, the four PSO-based algorithms show superior performance to the two EA-based algorithms, especially for highly flexible ligands. Our method can be regarded as a reference for the validation of new optimization algorithms in protein-ligand docking.

  15. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
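
    The two descriptive metrics named above are simple to compute; one common form of each (Theil statistics exist in several variants) is:

      import numpy as np

      def rmse(actual, forecast):
          return np.sqrt(np.mean((actual - forecast) ** 2))

      def theil_u1(actual, forecast):
          # inequality coefficient in [0, 1]; 0 indicates a perfect forecast
          return rmse(actual, forecast) / (np.sqrt(np.mean(actual ** 2)) +
                                           np.sqrt(np.mean(forecast ** 2)))

      def theil_u2(actual, forecast):
          # RMSE ratio against a no-change (random walk) benchmark; < 1 beats it
          num = np.sum((forecast[1:] - actual[1:]) ** 2)
          den = np.sum((actual[1:] - actual[:-1]) ** 2)
          return np.sqrt(num / den)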

  16. Robustness of Massively Parallel Sequencing Platforms

    PubMed Central

    Kavak, Pınar; Yüksel, Bayram; Aksu, Soner; Kulekci, M. Oguzhan; Güngör, Tunga; Hach, Faraz; Şahinalp, S. Cenk; Alkan, Can; Sağıroğlu, Mahmut Şamil

    2015-01-01

    The improvements in high throughput sequencing (HTS) technologies made clinical sequencing projects such as ClinSeq and Genomics England feasible. Although there are significant improvements in the accuracy and reproducibility of HTS based analyses, the usability of these types of data for diagnostic and prognostic applications necessitates near-perfect data generation. To assess the usability of a widely used HTS platform for accurate and reproducible clinical applications in terms of robustness, we generated whole genome shotgun (WGS) sequence data from the genomes of two human individuals in two different genome sequencing centers. After analyzing the data to characterize SNPs and indels using the same tools (BWA, SAMtools, and GATK), we observed a significant number of discrepancies between the call sets. As expected, most of the disagreements between the call sets were found within genomic regions containing common repeats and segmental duplications, although only a small fraction of the discordant variants were within exons and other functionally relevant regions such as promoters. We conclude that although HTS platforms are sufficiently powerful for providing data for first-pass clinical tests, the variant predictions still need to be confirmed using orthogonal methods before use in clinical applications. PMID:26382624

  17. Robust defect segmentation in woven fabrics

    SciTech Connect

    Sari-Sarraf, H.; Goddard, J.S. Jr.

    1997-12-01

    This paper describes a robust segmentation algorithm for the detection and localization of woven fabric defects. The essence of the presented segmentation algorithm is the localization of those events (i.e., defects) in the input images that disrupt the global homogeneity of the background texture. To this end, preprocessing modules, based on the wavelet transform and edge fusion, are employed with the objective of attenuating the background texture and accentuating the defects. Then, texture features are utilized to measure the global homogeneity of the output images. If these images are deemed to be globally nonhomogeneous (i.e., defects are present), a local roughness measure is used to localize the defects. The utility of this algorithm can be extended beyond the specific application in this work, that is, defect segmentation in woven fabrics. Indeed, in a general sense, this algorithm can be used to detect and to localize anomalies that reside in images characterized by ordered texture. The efficacy of this algorithm has been tested thoroughly under realistic conditions and as a part of an on-line fabric inspection system. Using over 3700 images of fabrics, containing 26 different types of defects, the overall detection rate of this approach was 89% with a localization accuracy of less than 0.2 inches and a false alarm rate of 2.5%.

  18. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution.

  20. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the plantings and the labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value needs to be balanced against the cost of the program. Field trial designs for advanced apple candidates with reduced numbers of locations, years, and harvests per year were modeled to investigate their effect on cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in the accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs carried greater risk than others. Balancing cost and accuracy against risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  1. Genotype by environment interaction and breeding for robustness in livestock

    PubMed Central

    Rauw, Wendy M.; Gomez-Raya, Luis

    2015-01-01

    The increasing size of the human population is projected to result in an increase in meat consumption. However, at the same time, the dominant position of meat as the center of meals is on the decline. Modern objections to the consumption of meat include public concerns with animal welfare in livestock production systems. Animal breeding practices have become part of the debate since it became recognized that animals in a population that have been selected for high production efficiency are more at risk for behavioral, physiological and immunological problems. As a solution, animal breeding practices need to include selection for robustness traits, which can be implemented through the use of reaction norms analysis, or through the direct inclusion of robustness traits in the breeding objective and in the selection index. This review gives an overview of genotype × environment interactions (the influence of the environment, reaction norms, phenotypic plasticity, canalization, and genetic homeostasis), reaction norms analysis in livestock production, options for selection for increased levels of production and against environmental sensitivity, and direct inclusion of robustness traits in the selection index. Ethical considerations of breeding for improved animal welfare are discussed. The discussion on animal breeding practices has been initiated and is very alive today. This positive trend is part of the sustainable food production movement that aims at feeding 9.15 billion people not just in the near future but also beyond. PMID:26539207

  2. Accuracy of flow hoods in residential applications

    SciTech Connect

    Wray, Craig P.; Walker, Iain S.; Sherman, Max H.

    2002-05-01

    To assess whether houses can meet performance expectations, the new practice of residential commissioning will likely use flow hoods to measure supply and return grille airflows in HVAC systems. Depending on hood accuracy, these measurements can be used to determine if individual rooms receive adequate airflow for heating and cooling, to determine flow imbalances between different building spaces, to estimate total air handler flow and supply/return imbalances, and to assess duct air leakage. This paper discusses these flow hood applications and the accuracy requirements in each case. Laboratory tests of several residential flow hoods showed that these hoods can be inadequate to measure flows in residential systems. Potential errors are about 20% to 30% of measured flow, due to poor calibrations, sensitivity to grille flow non-uniformities, and flow changes from added flow resistance. Active flow hoods equipped with measurement devices that are insensitive to grille airflow patterns have an order of magnitude less error, and are more reliable and consistent in most cases. Our tests also show that current calibration procedures for flow hoods do not account for field application problems. As a result, a new standard for flow hood calibration needs to be developed, along with a new measurement standard to address field use of flow hoods. Lastly, field evaluation of a selection of flow hoods showed that it is possible to obtain reasonable results using some flow hoods if the field tests are carefully done, the grilles are appropriate, and grille location does not restrict flow hood placement.

  3. Accuracy of an earpiece face-bow.

    PubMed

    Palik, J F; Nelson, D R; White, J T

    1985-06-01

    The validity of the Hanau ear-bow to transfer an arbitrary hinge axis to a Hanau articulator was clinically compared with a Hanau kinematic face-bow. The study was conducted with 18 randomly selected patients. This investigation demonstrated a significant statistical difference between the arbitrary axis located with an ear-bow and the terminal hinge axis. This discrepancy was significant in the anteroposterior direction but not in the superior-inferior direction. Only 50% of the arbitrary hinge axes were within a 5 mm radius of the terminal hinge axis, while 89% were within a 6 mm radius. Furthermore, the ear-bow method was not repeatable statistically. Additional study is needed to determine the practical value of the arbitrary face-bow and to pursue modifications to improve its accuracy.

  4. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper gives a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian-type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations; altogether, 10 different thermal-hydraulic codes and code versions were used. The results of applying the methodology to nine selected measured quantities are summarized.
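
    The core accuracy measure of the Pisa FFT-based method can be sketched as the average amplitude of the error spectrum relative to the experimental spectrum (a simplified form; the full methodology also prescribes interpolation to 2^m points and weighted combinations across quantities):

      import numpy as np

      def fft_average_amplitude(exp, calc):
          # AA = sum|FFT(calc - exp)| / sum|FFT(exp)|; smaller is more accurate
          err_spec = np.abs(np.fft.rfft(np.asarray(calc) - np.asarray(exp)))
          exp_spec = np.abs(np.fft.rfft(np.asarray(exp)))
          return err_spec.sum() / exp_spec.sum()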

  5. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  6. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure (EE) and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity each day of the study. On day four, participants came to the lab to undergo basal metabolic rate (BMR) measurement and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to the differences in inclination. The vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity.

  7. MicroRNA precursors are not structurally robust but plastic.

    PubMed

    Rodrigo, Guillermo; Elena, Santiago F

    2013-01-01

    Robustness is considered a ubiquitous property of living systems at all levels of organization, and small noncoding RNA (sncRNA) is a genuine model for its study at the molecular level. In this communication, we question whether microRNA precursors (pre-miRNAs) are actually structurally robust, as previously suggested. We found that natural pre-miRNAs are not more robust than expected under an appropriate null model. On the contrary, we found that eukaryotic pre-miRNAs show a significant enrichment in conformational flexibility at the thermal equilibrium of the molecule, that is, in their plasticity. Our results further support the selection for functional diversification and evolvability in sncRNAs.

  8. Inductive robust principal component analysis.

    PubMed

    Bao, Bing-Kun; Liu, Guangcan; Xu, Changsheng; Yan, Shuicheng

    2012-08-01

    In this paper we address the error correction problem, that is, to uncover the low-dimensional subspace structure from high-dimensional observations that are possibly corrupted by errors. When the errors follow a Gaussian distribution, Principal Component Analysis (PCA) can find the optimal (in the least-square-error sense) low-rank approximation to high-dimensional data. However, the canonical PCA method is known to be extremely fragile in the presence of gross corruptions. Recently, Wright et al. established the so-called Robust Principal Component Analysis (RPCA) method, which handles grossly corrupted data well [14]. However, RPCA is a transductive method and does not handle well new samples that are not involved in the training procedure. Given a new datum, RPCA essentially needs to recalculate over all the data, resulting in high computational cost. RPCA is therefore inappropriate for applications that require fast online computation. To overcome this limitation, in this paper we propose an Inductive Robust Principal Component Analysis (IRPCA) method. Given a set of training data, unlike RPCA, which aims to recover the original data matrix, IRPCA learns the underlying projection matrix, which can be used to efficiently remove possible corruptions from any datum. The learning is done by solving a nuclear norm regularized minimization problem, which is convex and can be solved in polynomial time. Extensive experiments on a benchmark human face dataset and two video surveillance datasets show that IRPCA is not only robust to gross corruptions but also handles new data efficiently.
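
    A minimal numerical sketch of the inductive idea follows: learn a low-rank projection P from training data, then apply it to a new, grossly corrupted sample. For simplicity it substitutes a squared-error data term for the paper's l1 term and solves the problem by proximal gradient with singular value thresholding, so it illustrates the mechanism rather than reproducing the authors' algorithm.

      import numpy as np

      def svt(M, tau):
          """Singular value thresholding: the prox operator of the nuclear norm."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def learn_projection(X, lam=0.1, iters=200):
          """Toy stand-in for IRPCA: minimize ||P||_* + (lam/2)||X - P X||_F^2."""
          d = X.shape[0]
          eta = 1.0 / (lam * np.linalg.norm(X @ X.T, 2))  # 1 / Lipschitz constant
          P = np.zeros((d, d))
          for _ in range(iters):
              grad = -lam * (X - P @ X) @ X.T
              P = svt(P - eta * grad, eta)
          return P

      rng = np.random.default_rng(0)
      basis = rng.standard_normal((20, 3))            # a rank-3 subspace
      X = basis @ rng.standard_normal((3, 100))       # clean training data
      P = learn_projection(X)
      x_new = basis @ rng.standard_normal(3)
      x_corrupt = x_new.copy()
      x_corrupt[0] += 10.0                            # one gross corruption
      print("error before projection:", np.linalg.norm(x_corrupt - x_new))
      print("error after projection: ", np.linalg.norm(P @ x_corrupt - x_new))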

  9. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing industrial hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or overestimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in

  10. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  11. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  12. Robust Soldier Crab Ball Gate

    NASA Astrophysics Data System (ADS)

    Gunji, Yukio-Pegio; Nishiyama, Yuta; Adamatzky, Andrew

    2011-09-01

    Based on field observations of soldier crabs, we previously proposed a model for a swarm of soldier crabs. Here, we describe the interaction of coherent swarms in the simulation model, which is implemented as a logical gate. Because a swarm is generated by inherent perturbation, it can be generated and maintained under highly perturbed conditions. Thus, the model reveals a robust logical gate rather than a stable one. In addition, we show that the logical gate of swarms is also implemented by real soldier crabs (Mictyris guinotae).

  13. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  14. A rapid algorithm for robust and automatic extraction of the midsagittal plane of the human cerebrum from neuroimages based on local symmetry and outlier removal.

    PubMed

    Hu, Qingmao; Nowinski, Wieslaw L

    2003-12-01

    A rapid algorithm for robust, accurate, and automatic extraction of the midsagittal plane (MSP) of the human cerebrum from normal and pathological neuroimages is proposed. The MSP is defined as the plane formed from the interhemispheric fissure line segments having the dominant orientation. The algorithm extracts the MSP in four steps: (1) determine suitable axial slices for processing, (2) localize the fissure line segments on them, (3) select inliers from the extracted fissure line segments through histogram-based outlier removal, and (4) calculate the equation of the MSP from the selected inliers. The fissure line segments are localized by minimizing a local symmetry index characterizing anatomical properties of images in the vicinity of the interhemispheric fissure. A two-stage angular and distance outlier removal is introduced to handle abnormalities. The algorithm has been validated quantitatively with 125 structural MRI and CT cases from 10 centers on three continents by studying its accuracy; tolerance to rotation, noise, asymmetry, and bias field; sensitivity to parameters; and performance. A statistical relationship between algorithm accuracy and the data's adherence to planarity is also determined. The algorithm extracts the MSP in under 6 s on a Pentium 4 (2.4 GHz), with average angular and distance errors of (0.40 degrees; 0.63 mm) for normal and (0.59 degrees; 0.73 mm) for pathological cases. The robustness to noise, asymmetry, rotation, and bias field is achieved by extracting the MSP based on the dominant orientation and the local symmetry index. A low computational cost results from applying simple operations capturing intrinsic anatomic features, constraining the search space to the local vicinity of the interhemispheric fissure, and formulating a noniterative algorithm with a coarse and fine fixed-step search. In comparison to existing methods, our algorithm is much faster and performs accurately and robustly for a wide range of diversified data.
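
    Steps (3) and (4) can be pictured with a short sketch: discard outlier fissure points, then fit the near-sagittal plane x = a*y + b*z + c by least squares. The median-residual rule below merely stands in for the paper's two-stage angular and distance histogram criterion, and the numbers are hypothetical.

      import numpy as np

      def fit_msp(points, tol=2.0):
          """Fit the midsagittal plane x = a*y + b*z + c after a crude,
          median-based outlier removal over the fissure points."""
          pts = np.asarray(points, dtype=float)
          A = np.c_[pts[:, 1], pts[:, 2], np.ones(len(pts))]
          coef, *_ = np.linalg.lstsq(A, pts[:, 0], rcond=None)
          resid = np.abs(A @ coef - pts[:, 0])
          inliers = pts[resid < tol * np.median(resid) + 1e-9]
          A_in = np.c_[inliers[:, 1], inliers[:, 2], np.ones(len(inliers))]
          coef, *_ = np.linalg.lstsq(A_in, inliers[:, 0], rcond=None)
          return coef

      pts = np.array([[0.1, y, z] for y in range(10) for z in range(10)], float)
      pts[0, 0] = 8.0                      # one grossly displaced detection
      print(fit_msp(pts))                  # approximately [0, 0, 0.1]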

  15. Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text

    ERIC Educational Resources Information Center

    Reid, Alan J.; Morrison, Gary R.; Bol, Linda

    2017-01-01

    This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…

  16. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL)-designed spread-spectrum modem built by Allen Osborne Associates (AOA). A Motorola six-channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used, and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
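
    For orientation, the one-way Sagnac correction commonly used in such work is dt = 2*omega*A/c^2, where A is the area of the triangle formed by the geocentre and the two endpoints of the signal path, projected onto the equatorial plane. The sketch below evaluates it for a hypothetical station-to-satellite geometry; it is illustrative only and not the USNO's processing code.

      import numpy as np

      OMEGA_E = 7.2921151467e-5   # Earth rotation rate, rad/s
      C = 299_792_458.0           # speed of light, m/s

      def sagnac_correction(r_tx, r_rx):
          """One-way Sagnac correction dt = 2*omega*A/c^2 for ECEF
          positions in metres; A is the equatorial-plane-projected area."""
          area_z = 0.5 * (r_tx[0] * r_rx[1] - r_rx[0] * r_tx[1])
          return 2.0 * OMEGA_E * area_z / C**2

      # Hypothetical geometry: equatorial station, geostationary satellite
      # 30 degrees of longitude to the east.
      station = np.array([6.371e6, 0.0, 0.0])
      sat = 4.216e7 * np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])
      print(f"Sagnac correction: {sagnac_correction(station, sat) * 1e9:.1f} ns")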

  17. Robust Microbiota-Based Diagnostics for Inflammatory Bowel Disease.

    PubMed

    Eck, A; de Groot, E F J; de Meij, T G J; Welling, M; Savelkoul, P H M; Budding, A E

    2017-03-22

    Strong evidence suggests that gut microbiota is altered in inflammatory bowel disease (IBD), indicating its potential role in non-invasive diagnostics. However, no clinical applications are currently used for routine patient care. The main obstacle to implementing a gut microbiota test for IBD is the lack of standardization, which leads to high inter-laboratory variations. We studied between-hospital and between-platform batch effects and their impact on predictive accuracy for IBD. Fecal samples from 91 pediatric IBD patients and 58 healthy children were collected. IS-pro, a standardized technique designed for routine microbiota profiling in clinical settings, was used for microbiota composition characterization. Additionally, a large synthetic dataset was used to simulate various perturbations and study their effect on the accuracy of different classifiers. Perturbations were validated in two replicate datasets: one processed in another laboratory and the other using a different analysis platform. The type of perturbation determined its effect on predictive accuracy. Real-life perturbations induced by between-platform variation were significantly higher than those caused by between-laboratory variation. Random forest was found to be robust to both simulated and observed perturbations, even when these perturbations had a dramatic effect on other classifiers. It achieved high accuracy both when cross-validated within the same dataset and when using datasets analyzed in different laboratories. Robust clinical predictions based on gut microbiota can be performed even when samples are processed in different hospitals. This study contributes to the efforts taken towards the development of a universal IBD test that would enable simple diagnostics and disease activity monitoring.
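
    The experimental design (train on data from one laboratory, test on a perturbed replicate) is easy to reproduce in outline. The sketch below sets up that kind of comparison on synthetic abundance profiles with a multiplicative per-feature batch effect; it shows the shape of the experiment only, and implies nothing about which classifier wins on real data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n, p = 150, 40                               # samples x microbiota features
      X = rng.lognormal(size=(n, p))               # synthetic abundance profiles
      y = (X[:, :5].sum(axis=1) > np.median(X[:, :5].sum(axis=1))).astype(int)

      # Simulated between-platform batch effect: per-feature multiplicative bias.
      X_batch = X * rng.lognormal(sigma=0.5, size=p)

      for clf in (RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000)):
          clf.fit(X[:100], y[:100])
          print(type(clf).__name__,
                "clean:", round(clf.score(X[100:], y[100:]), 2),
                "perturbed:", round(clf.score(X_batch[100:], y[100:]), 2))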

  18. Attack robustness of cascading load model in interdependent networks

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Wu, Yuedan; Li, Yun

    2015-08-01

    Considering the weight of a node and the coupled strength of two interdependent nodes in the different networks, we propose a method to assign the initial load of a node and construct a new cascading load model in interdependent networks. Assuming that a node in one network fails if its degree is 0, if its dependent node in the other network is removed from the network, or if the load on it exceeds its capacity, we study the influence of assortative link (AL) and disassortative link (DL) patterns between two networks on the robustness of the interdependent networks against cascading failures. To better evaluate network robustness, we present a new measure, defined from the local perspective of a node, to quantify network resiliency after targeted attacks. We show that AL patterns between two networks can improve the robustness of the entire interdependent system. Moreover, we show how to efficiently allocate the initial load and select nodes to be protected so as to maximize network robustness against cascading failures. In addition, we find that some nodes with lower load are more likely to trigger cascading propagation when the distribution of the load is more even, and we give a reasonable explanation for this behavior. Our findings can help in designing robust interdependent networks and suggest how to optimize the allocation of protection resources.
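
    A stripped-down, single-network version of such a cascading-load model is sketched below: initial load L_i = k_i^theta, capacity C_i = (1 + alpha) L_i, and the load of a failed node split evenly among its surviving neighbours. The paper's model couples two networks and weights the redistribution, so the parameters and rules here are simplifications.

      import networkx as nx

      def cascade(G, attacked, alpha=0.3, theta=1.0):
          """Return the number of failed nodes after attacking one node."""
          load = {v: G.degree(v) ** theta for v in G}
          cap = {v: (1 + alpha) * load[v] for v in G}
          failed, frontier = {attacked}, [attacked]
          while frontier:
              nxt = []
              for v in frontier:
                  nbrs = [u for u in G.neighbors(v) if u not in failed]
                  for u in nbrs:
                      load[u] += load[v] / len(nbrs)   # even redistribution
                      if load[u] > cap[u]:
                          failed.add(u)
                          nxt.append(u)
              frontier = nxt
          return len(failed)

      G = nx.barabasi_albert_graph(500, 3, seed=42)
      hub = max(G, key=G.degree)
      print("failed nodes after attacking the hub:", cascade(G, hub))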

  19. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    Reliability is a measure of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (credibility/plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process. It is composed of three sensors, which each have their own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to improperly sharing the mass among elementary hypotheses. Compared to classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR, and MLRE show very good performance in all experiments (more than an 80% correct classification rate). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1) and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators show good robustness, but MLR proves to be uniformly dominant across all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
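
    The reliability weighting at the heart of MLR can be stated in a few lines: each sensor's log-likelihoods over the hypotheses are scaled by its reliability before summing, so an unreliable sensor barely influences the fused decision. The sketch below is a generic reconstruction of that idea with hypothetical numbers, not the authors' simulation.

      import numpy as np

      def mlr_fuse(likelihoods, reliabilities):
          """Maximum Likelihood weighted by Reliability: argmax over
          hypotheses of sum_s r_s * log p_s(h)."""
          logp = np.log(np.asarray(likelihoods) + 1e-12)   # sensors x hypotheses
          fused = (np.asarray(reliabilities)[:, None] * logp).sum(axis=0)
          return int(np.argmax(fused))

      # Three sensors, three hypotheses; sensor 3 is unreliable and disagrees.
      p = [[0.7, 0.2, 0.1],
           [0.6, 0.3, 0.1],
           [0.1, 0.1, 0.8]]
      print(mlr_fuse(p, reliabilities=[0.9, 0.8, 0.1]))    # selects hypothesis 0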

  20. Robust Inflation from fibrous strings

    SciTech Connect

    Burgess, C.P.; Cicoli, M.; Alwis, S. de; Quevedo, F.

    2016-05-13

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters; and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models, including the key (though often ignored) issue of modulus stabilisation, to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index n_s, of the form r ∝ (n_s − 1)^2, where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.

  1. Bayesian robust principal component analysis.

    PubMed

    Ding, Xinghao; He, Lihan; Carin, Lawrence

    2011-12-01

    A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.

  2. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q denotes the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  3. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability

  4. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After an electronic literature search we found 661 articles. After reviewing 139 of them, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary.

  5. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  6. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  7. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.

  8. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the preparation process for natural air and isotopic mixtures, for both molecular and isotopic concentrations, over a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  9. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.

  10. Adaptive robust image registration approach based on adequately sampling polar transform and weighted angular projection function

    NASA Astrophysics Data System (ADS)

    Wei, Zhao; Tao, Feng; Jun, Wang

    2013-10-01

    An efficient, robust, and accurate approach is developed for image registration, which is especially suitable for large-scale change and arbitrary rotation. It is named the adequately sampling polar transform and weighted angular projection function (ASPT-WAPF). The proposed ASPT model overcomes the oversampling problem of the conventional log-polar transform. Additionally, the WAPF, used as the feature descriptor, is robust to alterations in the fovea area of an image and reduces the computational cost of the subsequent registration process. The experimental results show two major advantages of the proposed method. First, it can register images with high accuracy even when the scale factor is up to 10 and the rotation angle is arbitrary, whereas the maximum scale factor handled by state-of-the-art algorithms is 6. Second, our algorithm is more robust to the size of the sampling region without decreasing registration accuracy.
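
    The weighted angular projection idea can be sketched compactly: resample the image on a polar grid about its centre and integrate along the radius with weights that de-emphasize the fovea, yielding a 1-D signature that a rotation merely shifts circularly. The following is an illustrative reconstruction, not the authors' exact ASPT-WAPF implementation.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def angular_projection(img, n_theta=360, n_r=100, weight_power=1.0):
          """1-D rotation signature: radially weighted sums over a polar grid."""
          cy, cx = (np.asarray(img.shape) - 1) / 2.0
          r_max = min(cy, cx)
          theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
          r = np.linspace(1, r_max, n_r)
          rr, tt = np.meshgrid(r, theta)                 # (n_theta, n_r)
          ys, xs = cy + rr * np.sin(tt), cx + rr * np.cos(tt)
          polar = map_coordinates(img, [ys.ravel(), xs.ravel()], order=1)
          w = (r / r_max) ** weight_power                # down-weight the fovea
          return (polar.reshape(n_theta, n_r) * w).sum(axis=1)

      img = np.zeros((101, 101))
      img[50, 60:90] = 1.0                               # a ray at 0 degrees
      print(int(np.argmax(angular_projection(img))))     # peak near bin 0

    A rotated copy of the image produces a circular shift of this signature, so the rotation angle can be recovered by 1-D correlation of two signatures, which is what makes this kind of descriptor attractive for arbitrary-rotation registration.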

  11. Simple and robust methods for remote sensing of canopy chlorophyll content: a comparative analysis of hyperspectral data for different types of vegetation.

    PubMed

    Inoue, Yoshio; Guérif, Martine; Baret, Frédéric; Skidmore, Andrew; Gitelson, Anatoly; Schlerf, Martin; Darvishzadeh, Roshanak; Olioso, Albert

    2016-12-01

    Canopy chlorophyll content (CCC) is an essential ecophysiological variable for photosynthetic functioning. Remote sensing of CCC is vital for a wide range of ecological and agricultural applications. The objectives of this study were to explore simple and robust algorithms for spectral assessment of CCC. Hyperspectral datasets for six vegetation types (rice, wheat, corn, soybean, sugar beet and natural grass) acquired in four locations (Japan, France, Italy and USA) were analysed. To explore the best predictive model, spectral index approaches using the entire set of wavebands and multivariable regression approaches were employed. The comprehensive analysis elucidated the accuracy, linearity, sensitivity and applicability of various spectral models. Multivariable regression models using many wavebands proved inferior in applicability to different datasets. A simple model using the ratio spectral index (RSI; R815, R704), with the reflectance at 815 and 704 nm, showed the highest accuracy and applicability. Simulation analysis using a physically based reflectance model supported the biophysical soundness of the results. The model would work as a robust algorithm for canopy chlorophyll meters and/or remote sensing of CCC at ecosystem and regional scales. The predictive-ability maps derived from hyperspectral data allow not only evaluation of the relative significance of wavebands in various sensors but also selection of optimal wavelengths and effective bandwidths.
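
    The winning index is simple enough to state directly: RSI(R815, R704) is the reflectance at 815 nm divided by the reflectance at 704 nm, with CCC then predicted from the index by a calibration fit. The helper below picks the nearest available bands; the function and variable names are illustrative, not from the paper.

      import numpy as np

      def rsi(reflectance, wavelengths, b1=815.0, b2=704.0):
          """Ratio spectral index RSI(R815, R704) = R_815 / R_704, using the
          bands nearest the requested wavelengths (nm)."""
          wl = np.asarray(wavelengths)
          i1, i2 = np.abs(wl - b1).argmin(), np.abs(wl - b2).argmin()
          return reflectance[..., i1] / reflectance[..., i2]

      # Hypothetical calibration against measured CCC values:
      # ccc_hat = a * rsi(spectra, wavelengths) + b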

  12. Comparing the accuracy of quantitative versus qualitative analyses of interim PET to prognosticate Hodgkin lymphoma: a systematic review protocol of diagnostic test accuracy

    PubMed Central

    Procházka, Vít; Klugar, Miloslav; Bachanova, Veronika; Klugarová, Jitka; Tučková, Dagmar; Papajík, Tomáš

    2016-01-01

    Introduction Hodgkin lymphoma is an effectively treated malignancy, yet 20% of patients relapse or are refractory to front-line treatments, with potentially fatal outcomes. Early detection of poor treatment responders is crucial for appropriate application of tailored treatment strategies. Tumour metabolic imaging of Hodgkin lymphoma using visual (qualitative) 18-fluorodeoxyglucose positron emission tomography (FDG-PET) is the gold standard for staging and final outcome assessment, but results gathered during the interim period are less accurate. Analysis of continuous metabolic–morphological data (quantitative FDG-PET) may enhance the robustness of interim disease monitoring and help to improve treatment decision-making processes. The objective of this review is to compare the diagnostic test accuracy of quantitative versus qualitative interim FDG-PET in the prognostication of patients with Hodgkin lymphoma. Methods The literature on this topic will be reviewed in a 3-step strategy that follows methods described by the Joanna Briggs Institute (JBI). First, MEDLINE and EMBASE databases will be searched. Second, listed databases for published literature (MEDLINE, Tripdatabase, Pedro, EMBASE, the Cochrane Central Register of Controlled Trials and WoS) and unpublished literature (Open Grey, Current Controlled Trials, MedNar, ClinicalTrials.gov, Cos Conference Papers Index and International Clinical Trials Registry Platform of the WHO) will be queried. Third, 2 independent reviewers will analyse titles, abstracts and full texts, perform a hand search of relevant studies, and then perform critical appraisal and data extraction from selected studies using the DATARI tool (JBI). If possible, a statistical meta-analysis will be performed on pooled sensitivity and specificity data gathered from the selected studies. Statistical heterogeneity will be assessed. Funnel plots, Begg's rank correlations and Egger's regression tests will be used to detect and/or correct publication bias.

  13. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, i.e., the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for designing robust observers possessing the above structural features is established and discussed.
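
    Feature (1), a closed-loop estimator driven by the observer error, is the familiar Luenberger structure. The discrete-time sketch below uses hypothetical matrices to show the error-driven update; it illustrates the structure only, not the robust design procedure of the paper.

      import numpy as np

      A = np.array([[1.0, 0.1],
                    [0.0, 1.0]])          # plant dynamics (hypothetical)
      C = np.array([[1.0, 0.0]])          # measurement matrix
      L = np.array([[0.5], [1.0]])        # observer gain, makes A - L C stable

      x = np.array([1.0, -0.5])           # true state
      x_hat = np.zeros(2)                 # initial estimate
      for _ in range(50):
          y = C @ x                                   # measurement
          x_hat = A @ x_hat + L @ (y - C @ x_hat)     # driven by observer error
          x = A @ x
      print("estimation error:", np.linalg.norm(x - x_hat))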

  14. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Several common approximation methods and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation, by reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.

  15. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  16. Advances in robust flight design

    NASA Technical Reports Server (NTRS)

    Wong, Kelvin K.; Dhand, Sanjeev K.

    1991-01-01

    Current launch vehicle trajectory design philosophies, generally based on maximizing payload capability, result in an expensive and time-consuming iteration of the trajectory design for each mission. However, for a launch system that is not performance-driven, a flight design that is robust to variations in missions and provides single-engine-out capability can be highly cost-effective. This philosophy has led to the development of two flight design concepts to reduce recurring costs: standard trajectories and command multiplier steering. Preliminary analyses of these two concepts proved their feasibility and showed encouraging results in applications to an Advanced Launch System vehicle. Recent progress has demonstrated the effective and efficient integration of the two concepts with minimal payload penalty.

  17. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the required large storage capability. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of the contents of the holographic memory; one example is the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes the results of an experimental demonstration.

  18. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  19. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  20. Robust stochastic mine production scheduling

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa

    2010-06-01

    The production scheduling of open pit mines aims to determine the extraction sequence of blocks such that the net present value (NPV) of a mining project is maximized under capacity and access constraints. This sequencing has a significant effect on the profitability of the mining venture. However, given that the values of coefficients in the optimization procedure are obtained in a medium of sparse data and unknown future events, implementations based on deterministic models may have destructive consequences for the company. In this article, a robust stochastic optimization (RSO) approach is used to deal with mine production scheduling in a manner such that the solution is insensitive to changes in input data. The approach seeks a trade-off between optimality and feasibility. The model is demonstrated on a case study. The findings show that the approach can be used efficiently in mine production scheduling problems.

  1. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. For distributed systems to be used successfully in situations in which humans cannot provide the sort of predictable real-time responsiveness of a computer, it is important that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  2. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: II. Probabilistic Guarantees on Constraint Satisfaction

    PubMed Central

    Li, Zukui; Floudas, Christodoulos A.

    2012-01-01

    Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res., 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristics (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
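
    For orientation, the standard box and ellipsoidal robust counterparts of a single uncertain linear constraint, and the classical probability bound associated with the ellipsoidal set, take the following textbook forms (generic notation, not copied from the paper):

      % Uncertain constraint: \sum_j \tilde a_j x_j \le b, with
      % \tilde a_j = \bar a_j + \hat a_j \xi_j and |\xi_j| \le 1.

      % Box uncertainty set of size \Psi:
      \sum_j \bar a_j x_j + \Psi \sum_j \hat a_j \lvert x_j \rvert \le b

      % Ellipsoidal uncertainty set of radius \Omega:
      \sum_j \bar a_j x_j + \Omega \sqrt{\sum_j \hat a_j^2 x_j^2} \le b

      % For independent, bounded, symmetric \xi_j the ellipsoidal
      % counterpart guarantees
      \Pr\{\text{violation}\} \le \exp(-\Omega^2 / 2)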

  3. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-steps actin filaments extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological images processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts
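
    Steps (i) and (ii) can be approximated with off-the-shelf building blocks, for example total-variation denoising for the cartoon/texture split and a multi-scale ridge filter for curvilinear structures. The sketch below uses scikit-image stand-ins and a generic test image; it mirrors the pipeline's shape, not the authors' exact operators.

      import numpy as np
      from skimage import data, restoration, filters

      img = data.camera() / 255.0          # stand-in for a fluorescence image

      # (i) cartoon/texture decomposition via total-variation denoising
      cartoon = restoration.denoise_tv_chambolle(img, weight=0.1)
      texture = img - cartoon              # noise and fine texture residual

      # (ii) multi-scale line/ridge enhancement on the cartoon part
      ridges = filters.frangi(cartoon, sigmas=range(1, 4))

      # (iii) merging quasi-straight segments into filaments is omitted here;
      # a simple threshold marks candidate filament pixels instead.
      mask = ridges > ridges.mean() + 2 * ridges.std()
      print("candidate filament pixels:", int(mask.sum()))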

  4. Improving hyperspectral band selection by constructing an estimated reference map

    NASA Astrophysics Data System (ADS)

    Guo, Baofeng; Damper, Robert I.; Gunn, Steve R.; Nelson, James D. B.

    2014-01-01

    We investigate band selection for hyperspectral image classification. Mutual information (MI) measures the statistical dependence between two random variables. By modeling the reference map as one of the two random variables, MI can therefore be used to select the bands that are more useful for image classification. A new method is proposed to estimate the MI using an optimally constructed reference map, reducing reliance on ground-truth information. To reduce interference from noise and clutter, the reference map is constructed by averaging a subset of spectral bands chosen for their capability to approximate the ground truth. To automatically find these bands, we develop a searching strategy consisting of a differentiable MI, a gradient ascent algorithm, and random-start optimization. Experiments on the AVIRIS 92AV3C dataset and the Pavia University scene dataset show that the proposed method outperformed the benchmark methods. In the AVIRIS 92AV3C dataset, up to 55% of bands can be removed without significant loss of classification accuracy, compared to 40% when using the reference map supplied with the dataset. Meanwhile, the method's performance is much more robust to accuracy degradation when more than 60% of bands are cut, revealing better agreement in the MI calculation. In the Pavia University scene dataset, using 45 bands achieved 86.18% classification accuracy, which is only 1.5% lower than using all 103 bands.
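
    The scoring step reduces to estimating MI between each (discretized) band and the reference map. A minimal version using scikit-learn is sketched below; the binning scheme and the helper name are illustrative assumptions, and the optimization of the reference map itself is not shown.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def band_scores(cube, reference, n_bins=32):
          """MI between each spectral band of `cube` (H x W x B) and a
          discrete reference map (H x W), after histogram binning."""
          ref = reference.ravel()
          scores = []
          for b in range(cube.shape[-1]):
              edges = np.histogram_bin_edges(cube[..., b], n_bins)[1:-1]
              band = np.digitize(cube[..., b].ravel(), edges)
              scores.append(mutual_info_score(ref, band))
          return np.array(scores)

      # Hypothetical usage: keep the 45 highest-MI bands.
      # selected = np.argsort(band_scores(cube, estimated_reference))[-45:]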

  5. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have been proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.

  6. Measuring the robustness of link prediction algorithms under noisy environment

    PubMed Central

    Zhang, Peng; Wang, Xiang; Wang, Futian; Zeng, An; Xiao, Jinghua

    2016-01-01

    Link prediction in complex networks aims to estimate the likelihood that two nodes will interact with each other in the future. As this problem has applications in a large number of real systems, many link prediction methods have been proposed. However, the validation of these methods has so far mainly been conducted on networks assumed to be noise-free. Therefore, we still lack a clear understanding of how the prediction results would be affected if the observed network data were no longer accurate. In this paper, we comprehensively study the robustness of existing link prediction algorithms on real networks in which some links are missing, fake, or swapped with other links. We find that missing links are more destructive than fake and swapped links for prediction accuracy. An index is proposed to quantify the robustness of link prediction methods. Among the twenty-two studied link prediction methods, we find that though some methods have low prediction accuracy, they tend to perform reliably in the “noisy” environment. PMID:26733156
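
    The flavour of such a robustness experiment is easy to reproduce: hide some true links as probes, degrade the remaining network, and watch how a predictor's AUC responds. The sketch below does this for the common-neighbours predictor under missing-link noise; the specific robustness index proposed in the paper is not reproduced.

      import random
      import networkx as nx

      def cn_auc(G, probes, trials=2000):
          """AUC of common neighbours: how often a hidden true link
          outscores a random non-link (ties count one half)."""
          nodes, hits = list(G), 0.0
          for _ in range(trials):
              u, v = random.choice(probes)
              a, b = random.sample(nodes, 2)
              while G.has_edge(a, b):
                  a, b = random.sample(nodes, 2)
              s_true = len(list(nx.common_neighbors(G, u, v)))
              s_rand = len(list(nx.common_neighbors(G, a, b)))
              hits += 1.0 if s_true > s_rand else 0.5 if s_true == s_rand else 0.0
          return hits / trials

      random.seed(7)
      G = nx.watts_strogatz_graph(300, 10, 0.05, seed=7)
      probes = random.sample(list(G.edges()), 100)
      G.remove_edges_from(probes)                       # hidden test links
      print("AUC, observed network:", round(cn_auc(G, probes), 3))
      G.remove_edges_from(random.sample(list(G.edges()), 200))  # missing links
      print("AUC, degraded network:", round(cn_auc(G, probes), 3))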

  7. Robust video and audio-based synchronization of multimedia files

    NASA Astrophysics Data System (ADS)

    Raichel, Benjamin A.; Bajcsy, Peter

    2010-02-01

    This paper addresses the problem of robust and automated synchronization of multiple audio and video signals. The input signals come from a set of independent multimedia recordings made by several camcorders and microphones. While the camcorders are static, the microphones are mobile as they are attached to people. The motivation for synchronizing all signals is to support studies of human interaction in a decision-support environment, which have so far been limited by the difficulty of automatically processing observations made during decision-making sessions. The datasets for this work were acquired during training exercises of response teams, rescue workers, and fire fighters at multiple locations. The developed synchronization methodology for a set of independent multimedia recordings is based on introducing aural and visual landmarks with a bell and room light switches. Our approach to synchronization is based on detecting the landmarks in the audio and video signals of each camcorder and microphone, and then fusing the results to increase the robustness and accuracy of the synchronization. We report synchronization results that demonstrate the accuracy of synchronization based on video and audio.
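
    Once a shared landmark such as the bell is present in two recordings, the relative offset can be estimated by cross-correlation. The sketch below does this on synthetic signals; it shows the principle only, while the paper's system additionally detects the landmarks and fuses audio and video cues.

      import numpy as np
      from scipy.signal import correlate

      def estimate_offset(sig_a, sig_b, fs):
          """Lag of sig_b relative to sig_a, in seconds, via cross-correlation."""
          xcorr = correlate(sig_b, sig_a, mode="full")
          lag = np.argmax(np.abs(xcorr)) - (len(sig_a) - 1)
          return lag / fs

      fs = 8000
      t = np.arange(fs) / fs
      bell = np.sin(2 * np.pi * 880 * t) * np.exp(-5 * t)   # synthetic landmark
      a = np.concatenate([np.zeros(2000), bell])
      b = np.concatenate([np.zeros(5200), bell])            # same event, later
      print(f"offset: {estimate_offset(a, b, fs):.4f} s")   # about 0.4 s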

  8. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high-accuracy measurements with available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets were acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained with the rolling shutter model, although global shutter cameras are still superior. Furthermore, we show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.

  9. Building Robust Systems with Fallible Construction (Élaboration de systèmes informatiques robustes à l'architecture faillible)

    DTIC Science & Technology

    2008-04-01

    IST-047 Building Robust Systems with Fallible Construction (Élaboration de systèmes informatiques robustes à l'architecture faillible). Final Report, RTO-TR-IST-047.

  10. TU-G-303-02: Robust Radiomics Methods for PET and CT Imaging

    SciTech Connect

    Aerts, H.

    2015-06-15

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  11. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531

  12. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  13. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  14. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy, and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, thanks to the complementary navigation information obtained from the two different kinds of sensors, the accuracy of a SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which makes its output discontinuous, and the uncertainty caused by such lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and real experiments. The simulation and experiment results show that the attitude errors can be effectively reduced by utilizing the proposed robust filter when measurements are intermittently missing. Compared with the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  15. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

    Owing to their numerous merits, such as compactness, autonomy, and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, thanks to the complementary navigation information obtained from the two different kinds of sensors, the accuracy of a SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which makes its output discontinuous, and the uncertainty caused by such lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and real experiments. The simulation and experiment results show that the attitude errors can be effectively reduced by utilizing the proposed robust filter when measurements are intermittently missing. Compared with the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  16. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    The formulation process is a complex activity that sometimes involves making decisions about parameters or variables in order to obtain the best results in a context of high variability or uncertainty. Robust optimization tools can therefore be very useful for obtaining high-quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was treated as a noise variable, allowing the application of Taguchi techniques to evaluate each response in terms of its signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. The responses evaluated were related to the drug release profile from the beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out, and the obtained results were very similar to those predicted for every profile. The results reveal that robust optimization is a very useful approach that allows greater precision and accuracy with respect to the desired value. PMID:22645438
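
    For reference, the standard Taguchi signal-to-noise ratios that such an analysis computes are sketched below. Which of the three forms applies to each response (e.g., t50%, AUC, or encapsulation efficiency) depends on whether the target is a maximum, a minimum, or a nominal value, and is not stated in this record.

        import math

        def sn_larger_is_better(ys):
            # S/N = -10 log10( (1/n) * sum(1/y_i^2) ): responses to maximize
            return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

        def sn_smaller_is_better(ys):
            # S/N = -10 log10( (1/n) * sum(y_i^2) ): responses to minimize
            return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

        def sn_nominal_is_best(ys):
            # S/N = 10 log10( mean^2 / variance ): responses with a target
            n = len(ys)
            mean = sum(ys) / n
            var = sum((y - mean) ** 2 for y in ys) / (n - 1)
            return 10 * math.log10(mean ** 2 / var)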

  17. Robust template matching using run-length encoding

    NASA Astrophysics Data System (ADS)

    Lee, Hunsue; Suh, Sungho; Cho, Hansang

    2015-09-01

    In this paper we propose a novel template matching algorithm for visual inspection of bare printed circuit boards (PCBs). In conventional template matching for PCB inspection, the matching score and its offsets are obtained by taking the maximum over the convolutions of the template image and the camera image. While this method is fast, the robustness and accuracy of matching are not guaranteed, owing to the gap between design and implementation that results from defects and process variations. To resolve this problem, we suggest a new method based on run-length encoding (RLE). For the template image to be matched, we accumulate foreground and background data and RLE data for each row and column of the template image. Using these data, we can find the x and y offsets that minimize the optimization function. The efficiency and robustness of the proposed algorithm are verified through a series of experiments. Comparing the proposed algorithm with the conventional approach shows that it is not only fast but also more robust and reliable in its matching results.
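
    As a minimal illustration of the encoding step only (the paper's RLE-based matching cost is not reproduced in this record), a binary image row can be compressed into (value, run length) pairs as follows:

        def run_length_encode(row):
            """Compress a binary row into (pixel value, run length) pairs."""
            runs = []
            for pixel in row:
                if runs and runs[-1][0] == pixel:
                    runs[-1][1] += 1      # extend the current run
                else:
                    runs.append([pixel, 1])  # start a new run
            return [tuple(r) for r in runs]

        # Example: run_length_encode([0, 0, 1, 1, 1, 0]) -> [(0, 2), (1, 3), (0, 1)]

    Matching can then compare these compact row/column run profiles instead of full pixel arrays, which is what makes an RLE-based score cheap to evaluate at many candidate offsets.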

  18. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvements in computational efficiency but also leaves unused the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy while overcoming its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, all calculated points with correlation coefficients higher than a preset threshold are taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then, for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using the existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and multithreaded computing, superfast DIC analysis can be accomplished without sacrificing robustness or accuracy. Experimental results show that the presented parallel DIC method, run on a common eight-core laptop, achieves a speedup of about 7 times.
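
    A minimal scheduling sketch of the two-section idea is given below. Here correlate(point) stands for a user-supplied DIC kernel returning a (correlation coefficient, displacement) pair; both it and the second-pass re-analysis are placeholders, since the actual subset correlation and the priority-queue ordering of RGDT are not reproduced in this record. (In pure Python a process pool or a GIL-releasing native kernel would be needed for a real speedup.)

        from concurrent.futures import ThreadPoolExecutor

        def two_section_dic(points, correlate, threshold=0.9):
            """Section 1: analyze all points independently in parallel.
            Section 2: re-analyze the low-correlation remainder sequentially."""
            with ThreadPoolExecutor() as pool:
                results = dict(zip(points, pool.map(correlate, points)))
            reliable = {p: r for p, r in results.items() if r[0] >= threshold}
            for p in points:
                if p not in reliable:
                    # Placeholder for reliability-guided re-analysis seeded by
                    # the displacements of already-reliable neighbouring points.
                    reliable[p] = correlate(p)
            return reliable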

  19. Informative Gene Selection and Direct Classification of Tumor Based on Chi-Square Test of Pairwise Gene Interactions

    PubMed Central

    Zhang, Hongyan; Li, Lanzhi; Luo, Chao; Sun, Congwei; Chen, Yuan; Dai, Zhijun; Yuan, Zheming

    2014-01-01

    In efforts to discover disease mechanisms and improve the clinical diagnosis of tumors, it is useful to mine expression profiles for informative genes with definite biological meaning and to build robust classifiers with high precision. In this study, we developed a new method for tumor gene selection, the Chi-square test-based integrated rank gene and direct classifier (χ2-IRG-DC). First, we obtained the weighted integrated rank of gene importance from chi-square tests of single genes and of pairwise gene interactions. Then, we sequentially introduced the ranked genes and removed redundant ones by leave-one-out cross-validation of the chi-square test-based direct classifier (χ2-DC) within the training set, to obtain the informative genes. Finally, we determined the accuracy on independent test data by using the genes obtained above with χ2-DC. Furthermore, we analyzed the robustness of χ2-IRG-DC by comparing the generalization performance of different models, the efficiency of different feature-selection methods, and the accuracy of different classifiers. An independent test on ten multiclass tumor gene-expression datasets showed that χ2-IRG-DC could efficiently control overfitting and had higher generalization performance. The informative genes selected by χ2-IRG-DC could dramatically improve the independent test precision of other classifiers; meanwhile, the informative genes selected by other feature-selection methods also performed well in χ2-DC. PMID:25140319
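
    As a rough sketch of the single-gene half of such a ranking (the pairwise-interaction tests and the χ2-DC classifier itself are more involved and not reproduced here), one can bin a gene's expression values and test the resulting contingency table against the class labels; function and parameter names are illustrative.

        import numpy as np
        from scipy.stats import chi2_contingency

        def chi2_gene_score(expr, labels, n_bins=3):
            """Chi-square statistic of a gene's binned expression vs. tumor class."""
            expr = np.asarray(expr, dtype=float)
            cuts = np.quantile(expr, np.linspace(0, 1, n_bins + 1)[1:-1])
            binned = np.digitize(expr, cuts)               # bin index 0..n_bins-1
            classes = {c: j for j, c in enumerate(sorted(set(labels)))}
            table = np.zeros((n_bins, len(classes)), dtype=int)
            for b, y in zip(binned, labels):
                table[b, classes[y]] += 1
            table = table[table.sum(axis=1) > 0]           # drop empty bins
            chi2, _, _, _ = chi2_contingency(table)
            return chi2

    Ranking genes by this score (and, in the paper's method, by pairwise-interaction scores as well) yields the candidate list that is then pruned by cross-validation.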

  20. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.

    2015-02-01

    This paper discusses a methodology for evaluating the precision and accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classical Gaussian statistics adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system's performance in a challenging historic urban scenario. Reference measurements are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, leading to the conclusion that the standard normal parameters are not adequate for assessing this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
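
    The record does not spell out which non-parametric estimators are used; a common robust pairing in accuracy assessment, shown here as an assumed example, replaces the mean and standard deviation with the median and the normalized median absolute deviation (NMAD):

        import numpy as np

        def robust_error_stats(errors):
            """Median and NMAD (1.4826 * MAD): robust counterparts of the mean
            and standard deviation that tolerate asymmetric gross errors."""
            errors = np.asarray(errors, dtype=float)
            median = np.median(errors)
            nmad = 1.4826 * np.median(np.abs(errors - median))
            return median, nmad

    The factor 1.4826 scales the MAD so that, for normally distributed errors, the NMAD matches the standard deviation, which makes the two summaries directly comparable.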

  1. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials over a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration, and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research toward the global optimization of Mueller matrix polarimeters. A high-performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.

  2. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand-controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of the energy savings from demand-controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for using indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance in practice is generally acceptable or problematic. This article provides a summary of the study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  3. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena open up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume covers the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can also be determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will also be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  4. [Accuracy of HDL cholesterol measurements].

    PubMed

    Niedmann, P D; Luthe, H; Wieland, H; Schaper, G; Seidel, D

    1983-02-01

    The widespread use of different methods for the determination of HDL cholesterol (in Europe: sodium phosphotungstic acid/MgCl2 precipitation combined with enzymatic procedures; in the USA: heparin/MnCl2 precipitation followed by the Liebermann-Burchard method) together with common reference values makes it necessary to evaluate not only the accuracy, specificity, and precision of the precipitation step but also those of the subsequent cholesterol determination. A high ratio of serum to concentrated precipitation reagent (10:1 V/V) leads to the formation of variable amounts of Δ-3,5-cholestadiene. This substance is not recognized by cholesterol oxidase but leads to a 1.6-fold overestimation by the Liebermann-Burchard method. Therefore, errors in HDL cholesterol determination should be considered, and differences of up to 30% may occur between HDL cholesterol values determined by the different techniques (heparin/MnCl2 - Liebermann-Burchard and NaPW/MgCl2 - CHOD-PAP).

  5. High accuracy and visibility-consistent dense multiview stereo.

    PubMed

    Vu, Hoang-Hiep; Labatut, Patrick; Pons, Jean-Philippe; Keriven, Renaud

    2012-05-01

    Since the initial comparison of Seitz et al., the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods unsuitable for outdoor scenes taken under uncontrolled imaging conditions. The present work is a complete dense multiview stereo pipeline that circumvents these limitations and is able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced in very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasi-dense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al., showing the results to compare more than favorably with the current state-of-the-art methods.

  6. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.

  7. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    PubMed

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function based on the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy consisting of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen at each time step. Hence, the probability of misjudging outliers is reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness, with good computational efficiency. Additionally, the filter can be extended to nonlinear applications using other types of sensors.
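
    The paper's equivalent weight function is not reproduced in this record; the sketch below shows the generic ingredient it builds on: a Huber-style weight computed from the Mahalanobis distance of the measurement innovation, which a robust filter can use to down-weight a suspected outlier (for example by inflating the measurement covariance).

        import numpy as np

        def innovation_weight(innovation, S, gate=3.0):
            """Weight in (0, 1] from the Mahalanobis distance d of the
            innovation e with covariance S: full weight inside the gate,
            roughly gate/d outside it."""
            e = np.asarray(innovation, dtype=float).reshape(-1, 1)
            d = float(np.sqrt(e.T @ np.linalg.solve(S, e)))
            return 1.0 if d <= gate else gate / d

        # A robust update might then use R / weight in place of R, so that
        # outlying measurements pull the state estimate less strongly.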

  8. MTC: A Fast and Robust Graph-Based Transductive Learning Method.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin

    2015-09-01

    Despite the great success of graph-based transductive learning methods, most of them have serious problems with scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have polynomial time complexity. Moreover, we show both theoretically and empirically that the performance of MTC is robust to the graph construction, overcoming another big problem of traditional graph-based methods. Extensive experiments on public data sets and applications to web-spam detection and interactive image segmentation demonstrate our method's advantages in terms of accuracy, speed, and robustness.

  9. Robust lineage reconstruction from high-dimensional single-cell data

    PubMed Central

    Giecold, Gregory; Marco, Eugenio; Garcia, Sara P.; Trippa, Lorenzo; Yuan, Guo-Cheng

    2016-01-01

    Single-cell gene expression data provide invaluable resources for systematic characterization of cellular hierarchy in multi-cellular organisms. However, cell lineage reconstruction is still often associated with significant uncertainty due to technological constraints. Such uncertainties have not been taken into account in current methods. We present ECLAIR (Ensemble Cell Lineage Analysis with Improved Robustness), a novel computational method for the statistical inference of cell lineage relationships from single-cell gene expression data. ECLAIR uses an ensemble approach to improve the robustness of lineage predictions, and provides a quantitative estimate of the uncertainty of lineage branchings. We show that the application of ECLAIR to published datasets successfully reconstructs known lineage relationships and significantly improves the robustness of predictions. ECLAIR is a powerful bioinformatics tool for single-cell data analysis. It can be used for robust lineage reconstruction with quantitative estimate of prediction accuracy. PMID:27207878

  10. A network property necessary for concentration robustness

    PubMed Central

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-01-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications. PMID:27759015

  11. Robustness, canalyzing functions and systems design.

    PubMed

    Rauh, Johannes; Ay, Nihat

    2014-06-01

    We study a notion of knockout robustness of a stochastic map (Markov kernel) that describes a system of several input random variables and one output random variable. Robustness requires that the behaviour of the system does not change if one or several of the input variables are knocked out. Gibbs potentials are used to give a mechanistic description of the behaviour of the system after knockouts. Robustness imposes structural constraints on these potentials. We show that robust systems can be described in terms of suitable interaction families of Gibbs potentials, which allows us to address the problem of systems design. Robustness is also characterized by conditional independence constraints on the joint distribution of input and output. The set of all probability distributions corresponding to robust systems can be decomposed into a finite union of components, and we find parametrizations of the components.

  12. A network property necessary for concentration robustness

    NASA Astrophysics Data System (ADS)

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-10-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications.

  13. MIDAS robust trend estimator for accurate GPS station velocities without step detection.

    PubMed

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
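
    A minimal sketch of this recipe follows (illustrative only: the published MIDAS algorithm additionally relaxes the one-year pairing for gappy series, treats each coordinate component, and derives a formal trend uncertainty). Times are in years; the outlier trim below uses a MAD-based scale, an assumption standing in for the paper's exact rule.

        import numpy as np

        def midas_like_trend(t, x, span=1.0, tol=0.05):
            """Median slope over data pairs separated by ~1 year, with one
            round of outlier removal before recomputing the median."""
            t = np.asarray(t, dtype=float)
            x = np.asarray(x, dtype=float)
            slopes = []
            for i in range(len(t)):
                j = int(np.argmin(np.abs(t - (t[i] + span))))  # partner ~1 yr later
                if j != i and abs(t[j] - t[i] - span) <= tol:
                    slopes.append((x[j] - x[i]) / (t[j] - t[i]))
            if not slopes:
                return float("nan")
            slopes = np.asarray(slopes)
            med = np.median(slopes)
            scale = 1.4826 * np.median(np.abs(slopes - med))   # robust sigma
            kept = slopes[np.abs(slopes - med) <= 2.0 * scale]
            return float(np.median(kept)) if kept.size else float(med)

    Pairing observations one year apart cancels the seasonal signal in each slope, which is why this simple variant resists annual cycles that would bias an ordinary least squares trend.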

  14. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  15. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140

  16. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy-assessment procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that procedure and the others used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.

  17. Robust satisficing and the probability of survival

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2014-01-01

    Concepts of robustness are sometimes employed when decisions under uncertainty are made without probabilistic information. We present a theorem that establishes necessary and sufficient conditions for non-probabilistic robustness to be equivalent to the probability of satisfying the specified outcome requirements. When this holds, probability is enhanced (or maximised) by enhancing (or maximising) robustness. Two further theorems establish important special cases. These theorems have implications for success or survival under uncertainty. Applications to foraging and finance are discussed.
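
    In info-gap decision theory, the non-probabilistic robustness referred to here is commonly written as follows (a standard textbook form, stated as background rather than quoted from the paper): for a decision q, a nested family of uncertainty sets U(h) that grow with the horizon h, a performance function R, and a critical requirement r_c,

        \hat{h}(q, r_c) = \max\{\, h \ge 0 \;:\; \min_{u \in \mathcal{U}(h)} R(q, u) \ge r_c \,\}

    i.e., the largest uncertainty horizon under which the outcome requirement is still guaranteed. The theorems summarized above concern when ranking decisions by this robustness also ranks them by their probability of satisfying the requirement.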

  18. Robust, Adaptive Radar Detection and Estimation

    DTIC Science & Technology

    2015-07-21

    AFRL-OSR-VA-TR-2015-0208: Robust, Adaptive Radar Detection and Estimation. Vishal Monga, The Pennsylvania State University. Final Report, 07/21/2015. Grant FA9550-12-1-0333. ... we develop robust estimators that can adapt to imperfect knowledge of physical constraints using an expected likelihood (EL) approach. We analyze

  19. Robustness enhancement of neurocontroller and state estimator

    NASA Technical Reports Server (NTRS)

    Troudet, Terry

    1993-01-01

    The feasibility of enhancing neurocontrol robustness by training the neurocontroller and state estimator in the presence of system uncertainties is investigated using the example of a multivariable aircraft control problem. The performance and robustness of the newly trained neurocontroller are compared with those of an existing neurocontrol design scheme. The newly designed dynamic neurocontroller exhibits a better trade-off between phase and gain stability margins, and it is significantly more robust to degradations of the plant dynamics.

  20. Designing for Reliability and Robustness

    NASA Technical Reports Server (NTRS)

    Svetlik, Randall G.; Moore, Cherice; Williams, Antony

    2017-01-01

    Long duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on-board the International Space Station (ISS) to minimize bone and muscle loss, combatting these effects. Given the importance of these hardware systems to the health of the crew, this equipment must continue to be readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven to be challenging throughout the time the crewmembers have been living on ISS beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best-practices have been leveraged in seeking to maximize availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of hardware systems, and therefore have contributed to keeping the crewmembers healthy upon return to Earth. This paper will review the failure history for three spaceflight exercise countermeasure systems identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) will be reviewed, analyzed, and conclusions identified so as to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path towards reduced system down-time.

  1. A Robust, Microwave Rain Gauge

    NASA Astrophysics Data System (ADS)

    Mansheim, T. J.; Niemeier, J. J.; Kruger, A.

    2008-12-01

    Researchers at The University of Iowa have developed an all-electronic rain gauge that uses microwave sensors operating at either 10 GHz or 23 GHz and measures the Doppler shift caused by falling raindrops. It is straightforward to interface these sensors with conventional data loggers or to integrate them into a wireless sensor network. A disadvantage of these microwave rain gauges is that they consume significant power when operating; however, this may be partially offset by using the data loggers' or sensor networks' sleep-wake-sleep mechanism. The advantages of the microwave rain gauges are that they can be made very robust, they cannot clog, they have no mechanical parts that wear out, and they do not have to be perfectly level. Prototype microwave rain gauges were collocated with tipping-bucket rain gauges, and data were collected for two seasons. At higher rain rates, the microwave rain gauge measurements compare well with tipping-bucket measurements. At lower rain rates, the microwave rain gauges provide more detailed information than tipping buckets, which quantize measurements, typically at 1 tip per 0.01 inch or 1 tip per mm of rainfall.
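
    For scale (an idealized vertical-incidence, continuous-wave model, not taken from the record): the Doppler shift of a drop falling at speed v toward a sensor of wavelength λ is f_d = 2v/λ, so at 10 GHz (λ = 3 cm) a 5 m/s raindrop produces a shift of roughly 333 Hz.

        def doppler_shift_hz(fall_speed_mps, carrier_hz):
            """Doppler shift f_d = 2*v/lambda for a target closing at v (m/s)."""
            wavelength_m = 3.0e8 / carrier_hz
            return 2.0 * fall_speed_mps / wavelength_m

        print(doppler_shift_hz(5.0, 10e9))   # ~333 Hz at 10 GHz
        print(doppler_shift_hz(5.0, 23e9))   # ~767 Hz at 23 GHz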

  2. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter-deposition-synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals, along with a ceramic-based energetic sol-gel coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile, commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter, and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies, these materials can now be safely manufactured at application-required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  3. Fast Robust PCA on Graphs

    NASA Astrophysics Data System (ADS)

    Shahid, Nauman; Perraudin, Nathanael; Kalofolias, Vassilis; Puy, Gilles; Vandergheynst, Pierre

    2016-06-01

    Mining useful clusters from high-dimensional data has received significant attention from the computer vision and pattern recognition communities in recent years. Linear and non-linear dimensionality reduction have played an important role in overcoming the curse of dimensionality. However, such methods often suffer from three problems: high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high-dimensional datasets. We target low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable to huge datasets, with O(n log n) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions, and background separation experiments on 3 video datasets, show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model can recover approximate low-rank representations with a bounded error for clusterable data.

  4. A Robust Deep Model for Improved Classification of AD/MCI Patients

    PubMed Central

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-01-01

    Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998

  5. A Robust and Device-Free System for the Recognition and Classification of Elderly Activities

    PubMed Central

    Li, Fangmin; Al-qaness, Mohammed Abdulaziz Aide; Zhang, Yong; Zhao, Bihai; Luan, Xidao

    2016-01-01

    Human activity recognition, tracking, and classification is an essential trend in assisted living systems that can help support elderly people with their daily activities. Traditional activity recognition approaches depend on vision-based or sensor-based techniques. Nowadays, a novel promising technique has attracted more attention, namely device-free human activity recognition, which neither requires the target to wear or carry a device nor cameras to be installed in the perceived area. The device-free technique for activity recognition uses only the signals of common wireless local area network (WLAN) devices, which are available everywhere. In this paper, we present a novel elderly activity recognition system that leverages the fluctuation of wireless signals caused by human motion. We present an efficient method to select the correct data from the Channel State Information (CSI) streams, data that were neglected in previous approaches. We apply a Principal Component Analysis method that extracts the useful information from raw CSI. Thereafter, Forest Decision (FD) is adopted to classify the proposed activities, achieving a high accuracy rate. Extensive experiments have been conducted in an indoor environment with a total of five volunteers to test the feasibility of the proposed system. The evaluation shows that the proposed system is applicable and robust to electromagnetic noise. PMID:27916948

  6. Robust Quantum-Based Interatomic Potentials for Multiscale Modeling in Transition Metals

    SciTech Connect

    Moriarty, J A; Benedict, L X; Glosli, J N; Hood, R Q; Orlikowski, D A; Patel, M V; Soderlind, P; Streitz, F H; Tang, M; Yang, L H

    2005-09-27

    First-principles generalized pseudopotential theory (GPT) provides a fundamental basis for transferable multi-ion interatomic potentials in transition metals and alloys within density-functional quantum mechanics. In the central bcc metals, where multi-ion angular forces are important to materials properties, simplified model GPT or MGPT potentials have been developed based on canonical d bands to allow analytic forms and large-scale atomistic simulations. Robust, advanced-generation MGPT potentials have now been obtained for Ta and Mo and successfully applied to a wide range of structural, thermodynamic, defect and mechanical properties at both ambient and extreme conditions. Selected applications to multiscale modeling discussed here include dislocation core structure and mobility, atomistically informed dislocation dynamics simulations of plasticity, and thermoelasticity and high-pressure strength modeling. Recent algorithm improvements have provided a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy and extension to f-electron actinide metals, an order of magnitude increase in computational speed for dynamic simulations, and the development of temperature-dependent potentials.

  7. Robust Fixed-Structure Controller Synthesis

    NASA Technical Reports Server (NTRS)

    Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)

    2000-01-01

    The ability to develop an integrated control system design methodology for robust, high-performance controllers satisfying multiple design criteria and real-world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal-complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal-complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.

  8. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    The conventional prediction of milling stability has been extensively studied under the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process at the shop floor level, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking into account the uncertain parameters of the milling system. The time finite element method, a milling stability theory, is adopted as the conventional deterministic model. The uncertain dynamics parameters are handled with a non-probabilistic model in which the uncertain parameters are assumed to be bounded, so no probability density functions are needed. By doing so, an interval stability lobe diagram is obtained instead of a deterministic one, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from variations in the modal parameters of the spindle-tool system and in the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and obtains satisfactory bounds on the lobe diagrams. The proposed method is helpful for practitioners on the shop floor when making decisions on machining parameter selection.
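
    A minimal sketch of the interval idea under stated assumptions: the deterministic model below is a hypothetical placeholder (not the paper's time finite element model), and evaluating only the corners of the parameter intervals is exact only when the model is monotone in each bounded parameter:

```python
import itertools
import numpy as np

def critical_depth(speed_rpm, k, zeta, Kt):
    """Hypothetical stand-in for a deterministic stability model: critical
    axial depth of cut at a given spindle speed, from normalized stiffness k,
    damping ratio zeta and cutting coefficient Kt."""
    return (2.0 * zeta * k / Kt) * (1.0 + 0.5 * np.sin(speed_rpm / 500.0))

speeds = np.linspace(2000.0, 12000.0, 200)           # spindle speeds [rpm]
intervals = {"k": (0.8, 1.2), "zeta": (0.02, 0.04), "Kt": (0.9, 1.1)}

corners = list(itertools.product(*intervals.values()))
depths = np.array([critical_depth(speeds, *c) for c in corners])

lower = depths.min(axis=0)   # pessimistic lobe: guaranteed stable below this
upper = depths.max(axis=0)   # optimistic lobe
```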

  9. Robust fluidic connections to freestanding microfluidic hydrogels

    PubMed Central

    Baer, Bradly B.; Larsen, Taylor S. H.

    2015-01-01

    Biomimetic scaffolds approaching physiological scale, whose size and large cellular load far exceed the limits of diffusion, require incorporation of a fluidic means to achieve adequate nutrient/metabolite exchange. This need has driven the extension of microfluidic technologies into the area of biomaterials. While construction of perfusable scaffolds is essentially a problem of microfluidic device fabrication, functional implementation of free-standing, thick-tissue constructs depends upon successful integration of external pumping mechanisms through optimized connective assemblies. However, a critical analysis to identify optimal materials/assembly components for hydrogel substrates has received little focus to date. This investigation addresses this issue directly by evaluating the efficacy of a range of adhesive and mechanical fluidic connection methods to gelatin hydrogel constructs based upon both mechanical property analysis and cell compatibility. Results identify a novel bioadhesive, comprised of two enzymatically modified gelatin compounds, for connecting tubing to hydrogel constructs that is both structurally robust and non-cytotoxic. Furthermore, outcomes from this study provide clear evidence that fluidic interconnect success varies with substrate composition (specifically hydrogel versus polydimethylsiloxane), highlighting not only the importance of selecting the appropriately tailored components for fluidic hydrogel systems but also that of encouraging ongoing, targeted exploration of this issue. The optimization of such interconnect systems will ultimately promote exciting scientific and therapeutic developments provided by microfluidic, cell-laden scaffolds. PMID:26045731

  10. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the strategic method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affect the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  11. Hierarchical feature selection for erythema severity estimation

    NASA Astrophysics Data System (ADS)

    Wang, Li; Shi, Chenbo; Shu, Chang

    2014-10-01

    At present, the PASI scoring system is used to evaluate erythema severity, which can help doctors diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so its accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On one hand, in order to extract the multi-scale redness of erythema, we design a hierarchical feature. Different from traditional methods, we not only utilize color statistical features, but also divide the detection window into small windows and extract hierarchical features from them. Further, a feature re-ranking step is introduced, which guarantees that the extracted features are mutually uncorrelated. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity, owing to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has now been deployed for erythema medical efficacy evaluation at Union Hospital, China.
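
    A sketch of boosting-as-feature-selection, using synthetic data rather than the clinical dataset:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))      # rows: image windows, cols: redness features
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)   # toy severity label

# AdaBoost's default weak learner is a depth-1 decision stump, so each
# boosting round effectively picks one feature; feature_importances_ then
# ranks features by their value to the committee.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most valuable features:", top)   # indices 3 and 17 should rank highly
```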

  12. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

    In the evolutionary process, the random transmission and mutation of genes provide biological diversity for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to maintain network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., evolvability. Hence, there should be some interplay between evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus the environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  13. Machine tool accuracy characterization workshops. Final report, May 5, 1992--November 5 1993

    SciTech Connect

    1995-01-06

    The ability to assess the accuracy of machine tools is required by both tool builders and users. Builders must have this ability in order to predict the accuracy capability of a machine tool for different part geometries, to provide verifiable accuracy information for sales purposes, and to locate error sources for maintenance, troubleshooting, and design enhancement. Users require the same ability in order to make intelligent choices in selecting or procuring machine tools, to predict component manufacturing accuracy, and to perform maintenance and troubleshooting. In both instances, the ability to fully evaluate the accuracy capabilities of a machine tool and the source of its limitations is essential for using the tool to its maximum accuracy and productivity potential. This project was designed to transfer expertise in modern machine tool accuracy testing methods from LLNL to US industry, and to educate users on the use and application of emerging standards for machine tool performance testing.

  14. Closed-Loop and Robust Control of Quantum Systems

    PubMed Central

    Wang, Lin-Cheng

    2013-01-01

    For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (namely, closed-loop learning control, feedback control, and robust control) have proved effective in solving these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from the unified viewpoint of exploring the quantum control landscape. For the feedback control approach, the paper surveys three control strategies: Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H∞ control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective on future research directions that are likely to attract more attention. PMID:23997680

  15. On the Robustness Properties of M-MRAC

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram

    2012-01-01

    The paper presents performance and robustness analysis of the modified reference model MRAC (model reference adaptive control), or M-MRAC in short, which differs from conventional MRAC systems by feeding back the tracking error to the reference model. The tracking error feedback gain, in concert with the adaptation rate, provides an additional capability to regulate not only the transient performance of the tracking error, but also the transient performance of the control signal. This differs from conventional MRAC systems, in which the adaptation rate is the only tool to regulate the transient performance of the tracking error. It is shown that the selection of the feedback gain and the adaptation rate resolves the tradeoff between robustness and performance, in the sense that increasing the feedback gain improves the behavior of the adaptive control signal and hence the system's robustness to time delays (or unmodeled dynamics), while increasing the adaptation rate improves the tracking performance, i.e., the system's robustness to parametric uncertainties and external disturbances.
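
    A toy scalar rendering of the idea (the plant, gains and adaptive laws below are a standard textbook construction for illustration, not the paper's design; the distinguishing M-MRAC ingredient is the l*e term feeding the tracking error into the reference model):

```python
import numpy as np

a, b = 1.0, 1.0            # unknown unstable plant: dx/dt = a*x + b*u
am, bm = 2.0, 2.0          # reference model: dxm/dt = -am*xm + bm*r + l*e
l, gamma = 10.0, 50.0      # error feedback gain and adaptation rate
dt, T = 1e-3, 5.0

x = xm = 0.0
kx = kr = 0.0              # adaptive feedback/feedforward gains
for _ in range(int(T / dt)):
    r = 1.0                                 # step reference
    e = x - xm                              # tracking error
    u = kx * x + kr * r                     # adaptive control law
    x += dt * (a * x + b * u)               # plant (Euler step)
    xm += dt * (-am * xm + bm * r + l * e)  # modified reference model
    kx += dt * (-gamma * e * x)             # gradient adaptive laws (b > 0)
    kr += dt * (-gamma * e * r)
print(f"final tracking error: {x - xm:.4f}")
```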

  16. Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.

    PubMed

    Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun

    2016-07-11

    A novel method based on the Pulse Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust computation of compass information from polarized skylight imaging; it shows good accuracy and reliability, especially under cloudy weather, surrounding shielding and moonlight. The degree of polarization (DOP), combined with the angle of polarization (AOP), both calculated from the full-sky polarization image, is used for the compass computation. Because the DOP is highly sensitive to the environment, it is used to detect the destruction of the polarization information via the PCNN algorithm. Only areas with accurate AOP are kept after the DOP PCNN filtering, thereby greatly increasing the compass accuracy and robustness. Experimental results showed a compass accuracy of 0.1805° under clear weather. The method also proved applicable under shielding by clouds, trees and buildings, with a compass accuracy better than 1°. With weak polarization sources, such as moonlight, the method was shown experimentally to have an accuracy of 0.878°.
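
    A simplified sketch of the mask-then-estimate idea, where a plain DOP threshold stands in for the paper's PCNN filter and the synthetic Stokes images and heading recovery are illustrative:

```python
import numpy as np

def stokes_to_dop_aop(I, Q, U):
    """Per-pixel degree and angle of polarization from Stokes images."""
    dop = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-9)
    aop = 0.5 * np.arctan2(U, Q)
    return dop, aop

def heading_from_aop(aop, dop, dop_min=0.1):
    """Keep only pixels whose polarization survived (high DOP) and take the
    circular mean of the remaining AOP values (period pi, hence factor 2)."""
    good = aop[dop > dop_min]
    return 0.5 * np.arctan2(np.sin(2 * good).mean(), np.cos(2 * good).mean())

rng = np.random.default_rng(1)
true = 0.3                                   # reference direction [rad]
I = np.ones((64, 64))                        # synthetic full-sky Stokes images
Q = 0.4 * np.cos(2 * true) + 0.01 * rng.normal(size=I.shape)
U = 0.4 * np.sin(2 * true) + 0.01 * rng.normal(size=I.shape)
dop, aop = stokes_to_dop_aop(I, Q, U)
print(heading_from_aop(aop, dop))            # ~0.3
```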

  17. A Study of Confidence and Accuracy Using the Rasch Modeling Procedures. Research Report. ETS RR-08-42

    ERIC Educational Resources Information Center

    Paek, Insu; Lee, Jihyun; Stankov, Lazar; Wilson, Mark

    2008-01-01

    This study investigated the relationship between students' actual performance (accuracy) and their subjective judgments of accuracy (confidence) on selected English language proficiency tests. The unidimensional and multidimensional IRT Rasch approaches were used to model the discrepancy between confidence and accuracy at the item and test level…

  18. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker ultimately determines the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise motions of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  19. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

    Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the ℓ² stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by the choice of a better stabilizing controller. The second problem is the lower-order stabilization problem; its motivation is as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller upon completion of a design that includes, for example, dynamic response requirements. It is therefore reasonable to obtain a stabilizing controller of the lowest possible order first and then adjust the controller to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically logical, and simulation results show that it works in general.

  1. Noise and Robustness in Phyllotaxis

    PubMed Central

    Mirabet, Vincent; Besnard, Fabrice; Vernoux, Teva; Boudaoud, Arezki

    2012-01-01

    A striking feature of vascular plants is the regular arrangement of lateral organs on the stem, known as phyllotaxis. The most common phyllotactic patterns can be described using spirals, numbers from the Fibonacci sequence and the golden angle. This rich mathematical structure, along with the experimental reproduction of phyllotactic spirals in physical systems, has led to a view of phyllotaxis focusing on regularity. However, all organisms are affected by natural stochastic variability, raising questions about the effect of this variability on phyllotaxis and the achievement of such regular patterns. Here we address these questions theoretically using a dynamical system of interacting sources of inhibitory field. Previous work has shown that phyllotaxis can emerge deterministically from the self-organization of such sources and that inhibition is primarily mediated by the depletion of the plant hormone auxin through polarized transport. We incorporated stochasticity in the model and found three main classes of defects in spiral phyllotaxis – the reversal of the handedness of spirals, the concomitant initiation of organs and the occurrence of distichous angles – and we investigated whether a secondary inhibitory field filters out defects. Our results are consistent with available experimental data and yield a prediction of the main source of stochasticity during organogenesis. Our model can be related to cellular parameters and thus provides a framework for the analysis of phyllotactic mutants at both cellular and tissular levels. We propose that secondary fields associated with organogenesis, such as other biochemical signals or mechanical forces, are important for the robustness of phyllotaxis. More generally, our work sheds light on how a target pattern can be achieved within a noisy background. PMID:22359496

  2. Firing temperature accuracy of four dental furnaces.

    PubMed

    Haag, Per; Ciber, Edina; Dérand, Tore

    2011-01-01

    In spite of using the recommended firing and displayed temperatures, low-fired dental porcelain more often demonstrates unsatisfactory results after firing than porcelain fired at higher temperatures. It can therefore be suspected that the temperatures shown on the display are incorrect, implying that the furnace does not execute correct firing programs for low-fired porcelain. The purpose of this study is to investigate deviations from the real temperature during the firing process and also to illustrate the service and maintenance discipline of furnaces at dental laboratories. A total of 20 units of four different types of dental furnaces were selected for testing of temperature accuracy using a digital temperature measurement apparatus, Therma 1. In addition, the staff at 68 dental laboratories in Sweden were contacted for a telephone interview on furnace brand and on the service and maintenance programs performed at their laboratories. None of the 20 dental furnaces in the study could generate the firing temperatures shown on the display, indicating that the hypothesis was correct. The Multimat MCII had the smallest deviation between actual temperature and display figures. 62 of the 68 invited dental laboratories chose to participate in the interviews, and the result was that very few laboratories had a service and maintenance program living up to quality standards. There is room for improving the precision of dental porcelain furnaces, as there are deviations between displayed and measured temperatures during the different steps of the firing process.

  3. Classification accuracy of actuarial risk assessment instruments.

    PubMed

    Neller, Daniel J; Frederick, Richard I

    2013-01-01

    Users of commonly employed actuarial risk assessment instruments (ARAIs) hope to generate numerical probability statements about risk; however, ARAI manuals often do not explicitly report data that are essential for understanding the classification accuracy of the instruments. In addition, ARAI manuals often contain data that have the potential for misinterpretation. The authors of the present article address the accurate generation of probability statements. First, they illustrate how the reporting of numerical probability statements based on proportions rather than predictive values can mislead users of ARAIs. Next, they report essential test characteristics that, to date, have gone largely unreported in ARAI manuals. Then they discuss a graphing method that can enhance the practice of clinicians who communicate risk via numerical probability statements. After the authors review several strategies for selecting optimal cut-off scores, they show how the graphing method can be used to estimate positive predictive values for each cut-off score of commonly used ARAIs, across all possible base rates. They also show how the graphing method can be used to estimate base rates of violent recidivism in local samples.
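
    For instance, the positive predictive value implied by a cut-off score follows from Bayes' rule given the score's sensitivity and specificity, and can be traced across all possible base rates; the sensitivity/specificity pairs below are illustrative, not figures from any ARAI manual:

```python
import numpy as np

def ppv(sensitivity, specificity, base_rate):
    """P(recidivism | score above cut-off) via Bayes' rule."""
    tp = sensitivity * base_rate
    fp = (1.0 - specificity) * (1.0 - base_rate)
    return tp / (tp + fp)

base_rates = np.linspace(0.01, 0.99, 99)
# One curve per cut-off score; raising the cut-off trades sensitivity for
# specificity, and the PPV of every cut-off rises with the base rate.
for sens, spec in [(0.90, 0.40), (0.70, 0.70), (0.40, 0.90)]:
    curve = ppv(sens, spec, base_rates)            # the values one would graph
    print(f"sens={sens}, spec={spec}: PPV at 10% base rate = "
          f"{ppv(sens, spec, 0.10):.2f}")
```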

  4. Accuracy potentials for large space antenna reflectors with passive structure

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1982-01-01

    Analytical results indicate that a careful selection of materials and truss design, combined with accurate manufacturing techniques, can result in very accurate surfaces for large space antennas. The purpose of this paper is to examine these relationships for various types of structural configurations. Comparisons are made of the accuracy achievable by truss- and dome-type structures for a wide range of diameter and focal length of the antenna and wavelength of the radiated signal.

  5. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  6. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment, and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by a between-subjects factorial design involving accuracy motivation (incentive or none) and peer performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of the findings are discussed.

  7. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    PubMed Central

    Huang, Zhiqi

    2017-01-01

    Facial feature point detection has received great research attention in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still quite a challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalizable, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods. PMID:28316615

  8. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
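
    A brute-force baseline helps fix ideas: grow the horizon of uncertainty, sample model perturbations within it, and record the largest horizon at which the worst sampled prediction still meets the accuracy requirement. This sampling sketch is the costly approach the report's adjoint method is designed to avoid; it only approximates the true worst case, and the model and tolerance below are illustrative:

```python
import numpy as np

def robustness_by_sampling(A, b, x_nom, tol, alphas, n_samples=2000, seed=0):
    """Largest uncertainty horizon alpha such that every sampled model
    A + dA (with |dA_ij| <= alpha) keeps the solution within tol of nominal."""
    rng = np.random.default_rng(seed)
    robust_alpha = 0.0
    for alpha in alphas:
        worst = 0.0
        for _ in range(n_samples):
            dA = alpha * rng.uniform(-1.0, 1.0, size=A.shape)
            x = np.linalg.solve(A + dA, b)
            worst = max(worst, np.linalg.norm(x - x_nom))
        if worst > tol:
            break
        robust_alpha = alpha
    return robust_alpha

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_nom = np.linalg.solve(A, b)
print(robustness_by_sampling(A, b, x_nom, tol=0.1,
                             alphas=np.linspace(0.01, 1.0, 20)))
```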

  9. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, and they fail when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high breakdown point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm's performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
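
    The two-step shape of such estimators can be sketched on a toy 2D range-positioning problem. This is not the authors' MM estimator: a least-median-of-squares-style search over minimal beacon subsets stands in for the high-breakdown step, followed by least-squares refinement on the inliers, with synthetic ranges:

```python
import itertools
import numpy as np

def trilaterate(beacons, ranges):
    """Linearized least-squares position fix from >= 3 beacon ranges (2D)."""
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (r0**2 - ranges[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

def robust_fix(beacons, ranges, inlier_tol=0.05):
    # Step 1: high-breakdown search -- keep the minimal-subset fix whose
    # median absolute range residual is smallest (LMedS-style).
    best, best_med = None, np.inf
    for idx in itertools.combinations(range(len(beacons)), 3):
        idx = list(idx)
        p = trilaterate(beacons[idx], ranges[idx])
        res = np.abs(np.linalg.norm(beacons - p, axis=1) - ranges)
        med = np.median(res)
        if med < best_med:
            best, best_med = p, med
    # Step 2: refine on inliers only, to recover accuracy.
    res = np.abs(np.linalg.norm(beacons - best, axis=1) - ranges)
    return trilaterate(beacons[res < inlier_tol], ranges[res < inlier_tol])

rng = np.random.default_rng(2)
beacons = rng.uniform(0.0, 4.0, size=(6, 2))
truth = np.array([1.5, 2.0])
ranges = np.linalg.norm(beacons - truth, axis=1) + 0.005 * rng.normal(size=6)
ranges[0] += 1.0                      # one gross outlier (blocked line of sight)
print(robust_fix(beacons, ranges))    # close to (1.5, 2.0)
```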

  10. The Utility of Robust Means in Statistics

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…

  11. Robust Controller Design for Hemispherical Resonator Gyroscope

    DTIC Science & Technology

    2011-11-01

    [Abstract unavailable; the record retains only extraction fragments from the report: a figure caption, "Figure 1. Operating principle of HRG," and a reference to Chul Hyun, "Design of Robust Digital Controller for Hemispherical Resonator Gyroscopes," Ph.D. dissertation, Seoul, 2011.]

  12. Spacecraft attitude determination accuracy from mission experience

    NASA Astrophysics Data System (ADS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-10-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  14. Robust whole-brain segmentation: application to traumatic brain injury.

    PubMed

    Ledig, Christian; Heckemann, Rolf A; Hammers, Alexander; Lopez, Juan Carlos; Newcombe, Virginia F J; Makropoulos, Antonios; Lötjönen, Jyrki; Menon, David K; Rueckert, Daniel

    2015-04-01

    We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called "Multi-Atlas Label Propagation with Expectation-Maximisation based refinement" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to

  15. Environmental change makes robust ecological networks fragile

    PubMed Central

    Strona, Giovanni; Lafferty, Kevin D.

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers' tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host–parasite networks would respond to historical, random and expected extinction sequences. In both the cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems. PMID:27511722

  17. Evaluating efficiency and robustness in cilia design.

    PubMed

    Guo, Hanliang; Kanso, Eva

    2016-03-01

    Motile cilia are used by many eukaryotic cells to transport flow. Cilia-driven flows are important to many physiological functions, yet a deep understanding of the interplay between the mechanical structure of cilia and their physiological functions in healthy and diseased conditions remains elusive. To develop such an understanding, one needs a quantitative framework to assess cilia performance and robustness when subject to perturbations in the cilia apparatus. Here we link cilia design (beating patterns) to function (flow transport) in the context of experimentally and theoretically derived cilia models. We particularly examine the optimality and robustness of cilia design. Optimality refers to efficiency of flow transport, while robustness is defined as low sensitivity to variations in the design parameters. We find that suboptimal designs can be more robust than optimal ones. That is, designing for the most efficient cilium does not guarantee robustness. These findings have significant implications on the understanding of cilia design in artificial and biological systems.

  18. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. The search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.

  19. Robust GPS attitude determination for spacecraft

    NASA Astrophysics Data System (ADS)

    Adams, John Carl

    2000-11-01

    The space environment presents challenging operating conditions for a GPS attitude receiver. While spacecraft attitude dynamics may be at a lower rate than those of terrestrial vehicles or aircraft, spacecraft may undergo much more general attitude motion. For example, for inertially fixed or spinning spacecraft there is no preferred orientation of an array of antennas with aligned boresights that will give good GPS constellation visibility and ensure continuous attitude solution availability over an entire orbit. For these cases of general vehicle attitude motion, continuous attitude solution availability can be achieved using an array of antennas pointing in different directions. However, such a non-aligned antenna array requires modifications to the receiver signal processing algorithms to make attitude determination possible and to achieve accuracies comparable to an aligned array. Non-aligned antenna arrays introduce LOS-dependent phase errors into the differential carrier phase measurements, the most significant of which are the right-hand circular polarization (RHCP) contribution to the carrier phase, antenna phase center variations, and multipath error. In this dissertation, the results of two experimental laboratory demonstrations of attitude determination using measurements from a non-aligned antenna array are presented, along with a simulation study for the on-orbit case. In the first laboratory demonstration, attitude estimation was done (in post-processing) using a passive, spinning platform with only single-degree-of-freedom rotation, in a laboratory environment that had very significant reflected multipath signals and nearby transmitters. The experiment used a 2-D air cushion vehicle on a granite table top with a single antenna baseline measurement. The results of this experiment showed that robust attitude solutions could be attained from differential carrier phase measurements taken from an array of non-aligned antennas, but also highlighted

  20. Aircraft ride quality controller design using new robust root clustering theory for linear uncertain systems

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    Controller design for improving aircraft ride quality, in terms of damping ratio and natural frequency specifications on the short-period dynamics, is addressed. The controller is designed to be robust with respect to uncertainties in the real parameters of the control design model, such as uncertainties in the dimensional stability derivatives, imperfections in actuator/sensor locations, and possibly variations in flight conditions. The design is based on a new robust root clustering theory developed by the author by extending the nominal root clustering theory of Gutman and Jury to perturbed matrices. The proposed methodology yields an explicit relationship between the parameters of the root clustering region and the uncertainty radius of the parameter space. The current literature on robust stability becomes a special case of this unified theory. The bounds derived on the parameter perturbation for robust root clustering are then used in selecting the robust controller.

  1. Micro-vision-based displacement measurement with high accuracy

    NASA Astrophysics Data System (ADS)

    Lu, Qinghua; Zhang, Xianmin; Fan, Yanbin

    2011-12-01

    Micro-motion stages are widely used in micro/nano manufacturing technology. In this paper, an integrated approach for measuring the micro-displacement of a micro-motion stage is proposed, which incorporates a motion estimation algorithm into computer microvision. First, the basic principle of computer microvision measurement is analyzed. Then, a robust multiscale motion estimation algorithm for micro-motion measurement is proposed. Finally, the micro-displacement of a micro-motion stage based on piezoelectric ceramic actuators and compliant mechanisms is measured using the integrated approach. The maximal bias of the proposed approach was 13 nm. Experimental results show that the new integrated method can measure micro-displacement with nanometer accuracy.

  2. Towards Robust Discontinuous Galerkin Methods for General Relativistic Neutrino Radiation Transport

    NASA Astrophysics Data System (ADS)

    Endeve, E.; Hauck, C. D.; Xing, Y.; Mezzacappa, A.

    2015-10-01

    With an eye towards simulating neutrino transport in core-collapse supernovae, we have developed a conservative, robust, and high-order numerical method for solving the general relativistic phase space advection problem in stationary spacetimes. The method achieves high-order accuracy using Discontinuous Galerkin discretization and Runge-Kutta time integration. For robustness, care is taken to ensure that the physical bounds on the phase space distribution function are preserved; i.e., f ∈ [0,1]. We briefly describe the bound-preserving scheme, and present results from numerical experiments in spherical symmetry adopting the Schwarzschild metric, which demonstrate that the method preserves the bounds on the distribution function.

  3. A novel ensemble machine learning for robust microarray data classification.

    PubMed

    Peng, Yonghong

    2006-06-01

    Microarray data analysis and classification has convincingly demonstrated an effective methodology for the diagnosis of diseases and cancers. Although much research has been performed on applying machine learning techniques to microarray data classification over the past years, it has been shown that conventional machine learning techniques have intrinsic drawbacks in achieving accurate and robust classifications. This paper presents a novel ensemble machine learning approach for the development of robust microarray data classification. Different from conventional ensemble learning techniques, the presented approach begins by generating a pool of candidate base classifiers based on gene sub-sampling, and then selects a subset of appropriate base classifiers, based on classifier clustering, to construct the classification committee. Experimental results demonstrate that the classifiers constructed by the proposed method outperform not only the classifiers generated by conventional machine learning but also those generated by two widely used ensemble learning methods (bagging and boosting).
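
    A rough sketch of the two-stage idea under stated assumptions: the data is synthetic, k-means over the base classifiers' prediction vectors stands in for the paper's unspecified clustering, and one representative per cluster forms the committee:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))                  # samples x genes (synthetic)
y = (X[:, :10].sum(axis=1) > 0).astype(int)      # toy labels
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

# Stage 1: pool of base classifiers, each trained on a random gene subset.
pool, preds = [], []
for _ in range(30):
    genes = rng.choice(X.shape[1], size=50, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, genes], y_tr)
    pool.append((clf, genes))
    preds.append(clf.predict(X_val[:, genes]))

# Stage 2: cluster classifiers by their prediction vectors; keep one
# representative per cluster so the committee stays diverse.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(np.array(preds))
committee = [pool[np.flatnonzero(labels == c)[0]] for c in range(5)]

votes = np.mean([clf.predict(X_val[:, g]) for clf, g in committee], axis=0)
print("committee accuracy:", np.mean((votes > 0.5) == y_val))
```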

  4. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms alone and employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.

  5. Stochastic Stability and Performance Robustness of Linear Multivariable Systems

    NASA Technical Reports Server (NTRS)

    Ryan, Laurie E.; Stengel, Robert F.

    1990-01-01

    Stochastic robustness, a simple technique used to estimate the robustness of linear, time-invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters is discussed. The application of stochastic performance robustness, and the relationship between performance objectives and design parameters, are demonstrated by means of an example. The results show stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
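
    The core computation is simple to sketch: sample the uncertain parameters and estimate the probability that any closed-loop eigenvalue crosses into the right half-plane. The state matrix and uncertainty ranges below are illustrative, not from the paper:

```python
import numpy as np

def probability_of_instability(n_samples=10000, seed=0):
    """Monte Carlo estimate of P(closed-loop system unstable) under
    uniform parameter uncertainty in stiffness k and damping c."""
    rng = np.random.default_rng(seed)
    unstable = 0
    for _ in range(n_samples):
        k = rng.uniform(0.5, 1.5)      # uncertain stiffness
        c = rng.uniform(-0.05, 0.4)    # uncertain damping (may go negative)
        A = np.array([[0.0, 1.0],
                      [-k, -c]])       # closed-loop state matrix
        if np.max(np.linalg.eigvals(A).real) > 0.0:
            unstable += 1
    return unstable / n_samples

print(probability_of_instability())    # fraction of unstable samples (~0.11)
```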

  6. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    PubMed Central

    Liu, Xing-Cai; He, Shi-wei; Song, Rui; Sun, Yang; Li, Hao-dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding design and the procedure of the algorithm are given. Results of an example demonstrate that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable. PMID:25435867
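
    A toy rendering of such a robust objective: expected cost plus a weighted penalty on scenarios that do worse than the mean, so a plan that is cheap on average but fragile loses to a stable one. The scenario costs, probabilities and weight are invented for illustration, not taken from the paper:

```python
import numpy as np

def robust_objective(scenario_costs, probs, lam=1.0):
    """Expected cost plus lambda-weighted downside deviation, penalizing
    plans that perform badly in disadvantageous scenarios."""
    expected = np.dot(probs, scenario_costs)
    downside = np.maximum(scenario_costs - expected, 0.0)
    return expected + lam * np.dot(probs, downside)

# Two candidate center locations, costs under four demand scenarios.
probs = np.array([0.4, 0.3, 0.2, 0.1])
plan_a = np.array([100.0, 110.0, 180.0, 250.0])   # cheap on average, risky
plan_b = np.array([120.0, 125.0, 135.0, 150.0])   # dearer on average, stable
for name, plan in [("A", plan_a), ("B", plan_b)]:
    print(name, robust_objective(plan, probs, lam=2.0))  # B wins robustly
```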

  8. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols aiding plan assessment. Additionally, an example of how to use the robustness database clinically is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners with robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualized treatment; a new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and the error-bar volume histogram proved effective in analysing plan robustness. This process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. Such protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.

  9. Robust stabilization of rotor-active magnetic bearing systems

    NASA Astrophysics Data System (ADS)

    Li, Guoxin

    Active magnetic bearings (AMBs) are emerging as a beneficial technology for high-speed and high-performance suspension in rotating machinery applications. A fundamental feedback control problem is robust stabilization in the presence of uncertain destabilizing mechanisms in the aeroelastic and hydroelastic dynamics and in the AMB feedback. As rotating machines evolve toward high speed, high energy density, and high performance, the rotor and the support structure become increasingly flexible and highly coupled, which makes the rotor-AMB system more challenging to stabilize. The primary objective of this research is to develop a systematic control synthesis procedure for achieving highly robust stabilization of rotor-AMB systems. Of special interest is the stabilization of multivariable systems such as AMB-supported flexible rotors and gyroscopic rotors, where classical control design may encounter difficulties. To this end, we first developed a systematic modeling procedure. This modeling procedure exploited the best advantages of the technology developed in rotordynamics and the unique system identification tool provided by the AMBs. A systematic uncertainty model for rotor-AMB systems was developed, eliminating the iterative process of selecting uncertainty structures. The consequences of overestimating or underestimating uncertainties were made transparent to control engineers. To achieve high robustness, we explored the fundamental performance/robustness limitations due to rotor-AMB system unstable poles. We examined the mixed sensitivity performance that is closely related to the unstructured uncertainty. To enhance the transparency of the synthesis, we analyzed multivariable controllers from classical control perspectives. Based on these results, a systematic robust control synthesis procedure was established. For a strongly gyroscopic rotor over a wide speed range, we applied advanced gain-scheduled synthesis, and compared two synthesis frameworks in

  10. Practical control of timestep selection in thermal simulation

    SciTech Connect

    Sammon, P.H.; Rubin, B.

    1983-11-01

    This paper describes an error-driven timestep selection scheme that utilizes a new technique for estimating time truncation error. The calculation of this estimate uses only current reservoir information; information from past timesteps, such as that required by the differencing techniques of Mehra et al. or Jensen, is not needed. This consideration is particularly important when past reservoir states do not correspond well to the current one, as might occur when relatively large timesteps are being taken or when a model formulation change occurs. Once the time truncation error estimate is obtained, techniques from Lindberg are used to calculate an appropriate timestep. The calculation introduces little additional simulator overhead and can be easily integrated into any fully implicit reservoir simulator. An implementation of the above timestep selection technique is described for a thermal reservoir simulator. A provision is made for limiting the timestep according to user-specified desired changes, and a method for estimating the results of certain Newton iterations using timestep information is described. The timestep selection scheme can also calculate a timestep at well changes, relieving the user of the need to estimate these parameters. The timestep selection algorithm is intended to afford greater simulator accuracy and reliability, and its robustness affords greater efficiency compared to damped schemes such as that presented in Grabowski et al. The parameters controlling the error-driven timestep calculation are easily selected by the user. It is typically found that the new method can quickly be made to perform as well as or better than a simulation run using desired changes with carefully selected parameters, and the accuracy of the results obtained with the new method is superior.
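    The paper's truncation-error estimator is simulator-specific, but the error-driven update it feeds is the standard one: scale the step by the tolerance-to-error ratio raised to 1/(order + 1). A hedged sketch under that assumption, with safety and clamping factors chosen here only for illustration:

    ```python
    def next_timestep(dt, err_estimate, tol, order=1,
                      safety=0.8, grow_max=2.0, shrink_min=0.25):
        """Error-driven timestep selection.

        dt           : current timestep
        err_estimate : estimated time-truncation error over the last step
        tol          : user-specified error tolerance
        order        : order of the time discretization (1 for backward Euler)
        """
        if err_estimate <= 0.0:
            return dt * grow_max                   # no measurable error: grow
        ratio = (tol / err_estimate) ** (1.0 / (order + 1))
        factor = min(grow_max, max(shrink_min, safety * ratio))
        return dt * factor
    ```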

  11. Robust GPS carrier tracking under ionospheric scintillation

    NASA Astrophysics Data System (ADS)

    Susi, M.; Andreotti, M.; Aquino, M. H.; Dodson, A.

    2013-12-01

    Small-scale irregularities present in the ionosphere can induce fast and unpredictable fluctuations of Radio Frequency (RF) signal phase and amplitude. This phenomenon, known as scintillation, can degrade the performance of a GPS receiver, leading to cycle slips, increased tracking error, and even complete loss of lock. In the most severe scenarios, if the tracking of multiple satellite links is prevented, outages in the GPS service can occur. In order to make a GPS receiver more robust under scintillation, particular attention should be dedicated to the design of the carrier tracking stage, the part of the receiver most sensitive to these phenomena. This paper exploits the reconfigurability and flexibility of a GPS software receiver to develop a tracking algorithm that is more robust under ionospheric scintillation. For this purpose, the scintillation level is first monitored in real time: the carrier phase and the post-correlation terms produced by the PLL (Phase Locked Loop) are used to estimate phi60 and S4 [1], the scintillation indices traditionally used to quantify the level of phase and amplitude scintillation, as well as p and T, the spectral parameters of the fluctuations' PSD. The effectiveness of the scintillation parameter computation is confirmed by comparing the values obtained by the software receiver with those provided by a commercial scintillation monitoring receiver, i.e. the Septentrio PolarxS [2]. The above scintillation parameters and the signal carrier-to-noise density are then exploited to tune the carrier tracking algorithm. In the case of very weak signals, the FLL (Frequency Locked Loop) scheme is selected in order to maintain signal lock; otherwise an adaptive-bandwidth Phase Locked Loop (PLL) scheme is adopted. The optimum bandwidth for the specific scintillation scenario is evaluated in real time by exploiting the Conker formula [1] for the tracking jitter estimation. The performance
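    The exact bandwidth optimization via the Conker formula is not reproduced in the abstract; the sketch below substitutes the standard thermal-noise PLL jitter approximation, with all thresholds hypothetical, just to show the shape of the mode/bandwidth selection logic:

    ```python
    import numpy as np

    def pll_thermal_jitter_deg(bn_hz, cn0_dbhz, t_coh=0.02):
        """Standard thermal-noise PLL jitter approximation (1-sigma, degrees)."""
        cn0 = 10.0 ** (cn0_dbhz / 10.0)            # dB-Hz -> linear Hz
        var = (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * t_coh * cn0))
        return np.degrees(np.sqrt(var))

    def select_tracking_mode(cn0_dbhz, s4, phi60_rad,
                             cn0_floor=28.0, jitter_limit_deg=15.0):
        """Hypothetical selection rule: FLL fallback on very weak signals,
        otherwise a PLL whose bandwidth is picked from a feasible set."""
        if cn0_dbhz < cn0_floor:
            return "FLL", None
        candidates = np.arange(5.0, 30.0, 1.0)     # loop bandwidths, Hz
        feasible = [b for b in candidates
                    if pll_thermal_jitter_deg(b, cn0_dbhz) < jitter_limit_deg]
        if not feasible:
            return "FLL", None
        # Strong scintillation favors a wider bandwidth (to follow fast phase
        # dynamics); quiet conditions favor a narrow one (less thermal noise).
        bw = max(feasible) if (s4 > 0.5 or phi60_rad > 0.3) else min(feasible)
        return "PLL", bw
    ```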

  12. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

    This paper addresses the robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in the determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. Under appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution algorithm (NSJaDE) is presented, combining the nondominated sorting mechanism with adaptive differential evolution (JaDE). The simulation results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): the nondominated sorting genetic algorithm II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more critical role than control gains in robust controllability. The developed NSJaDE and the obtained results shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grids, and biological networks.
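    The nondominated sorting mechanism that NSJaDE borrows from NSGA-II is the most code-friendly part of the method. A minimal sketch for a minimization problem with any number of objectives (e.g., a controllability metric and a robustness metric):

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly
        better in at least one (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated_sort(objectives):
        """Partition candidate solutions into Pareto fronts (NSGA-II style).

        objectives : list of objective vectors, one per candidate
        Returns a list of fronts, each a list of candidate indices.
        """
        n = len(objectives)
        dominated_by = [[] for _ in range(n)]  # indices each solution dominates
        dom_count = [0] * n                    # how many solutions dominate i
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if dominates(objectives[i], objectives[j]):
                    dominated_by[i].append(j)
                elif dominates(objectives[j], objectives[i]):
                    dom_count[i] += 1
        fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for i in current:
                for j in dominated_by[i]:
                    dom_count[j] -= 1
                    if dom_count[j] == 0:
                        nxt.append(j)
            current = nxt
        return fronts
    ```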

  13. Robustness to Faults Promotes Evolvability: Insights from Evolving Digital Circuits.

    PubMed

    Milano, Nicola; Nolfi, Stefano

    2016-01-01

    We demonstrate how the need to cope with operational faults enables evolving circuits to find fitter solutions. The analysis of the results obtained in different experimental conditions indicates that, in the absence of faults, evolution tends to select circuits that are small and have low phenotypic variability and evolvability. The need to cope with operational faults, by contrast, drives evolution toward larger circuits that are genuinely robust to genetic variation and that have a greater level of phenotypic variability and evolvability. Overall, our results indicate that the need to cope with operational faults leads to the selection of circuits that have a greater probability of generating better circuits through genetic variation, relative to a control condition in which circuits are not subjected to faults.
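    The core experimental mechanism, evaluating each circuit's fitness under randomly injected faults, can be sketched as follows. The circuit interface and stuck-at fault model here are hypothetical stand-ins, not the paper's actual setup:

    ```python
    import random

    def evaluate_with_faults(circuit, test_cases, n_trials=20, fault_rate=0.05):
        """Fitness of a circuit averaged over random gate-fault injections.

        circuit    : assumed to expose .gates (a list) and .run(inputs, stuck={})
        test_cases : list of (inputs, expected_output) pairs
        Each trial pins a random subset of gate outputs to 0 or 1 (stuck-at faults).
        """
        total = 0.0
        for _ in range(n_trials):
            stuck = {i: random.choice((0, 1))
                     for i in range(len(circuit.gates))
                     if random.random() < fault_rate}
            correct = sum(circuit.run(inputs, stuck=stuck) == expected
                          for inputs, expected in test_cases)
            total += correct / len(test_cases)
        return total / n_trials   # robust fitness: mean accuracy across faults
    ```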

  14. Towards an efficient and robust foot classification from pedobarographic images.

    PubMed

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2012-01-01

    This paper presents a new computational framework for automatic foot classification from digital plantar pressure images. It classifies the foot as left or right and simultaneously calculates two well-known footprint indices: Cavanagh's arch index (AI) and the modified AI. The accuracy of the framework was evaluated using a set of plantar pressure images from two common pedobarographic devices. The results were outstanding, as all feet under analysis were correctly classified as left or right, and no significant differences were observed between the footprint indices calculated using the computational solution and the traditional manual method. The robustness of the proposed framework to arbitrary foot orientations and to the acquisition device was also tested and confirmed.
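    Cavanagh's arch index is defined as the ratio of the midfoot contact area to the total footprint area (toes excluded), with the footprint divided into equal thirds along its long axis. A sketch under those definitions, assuming a binary footprint mask already rotated so the foot axis is vertical and the toes removed:

    ```python
    import numpy as np

    def arch_index(footprint):
        """Cavanagh's arch index from a binary footprint mask (toes excluded).

        footprint : 2-D boolean array, foot axis vertical.
        AI = midfoot contact area / total contact area, where the midfoot is
        the middle third of the footprint length; AI > ~0.26 is commonly read
        as a flat (low-arched) foot.
        """
        rows = np.where(footprint.any(axis=1))[0]
        top, length = rows.min(), rows.max() - rows.min() + 1
        third = length // 3
        midfoot = footprint[top + third : top + 2 * third]
        return midfoot.sum() / footprint.sum()
    ```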

  15. Arduino-based noise robust online heart-rate detection.

    PubMed

    Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda

    2017-04-01

    This paper introduces a noise-robust, real-time heart-rate detection system based on electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak-localisation technique, and a low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart-rate detection technique, and its accuracy is validated on simulated noisy ECG data with various levels of signal-to-noise ratio (SNR). The mean percentage error of the detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
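    The window-based autocorrelation idea is easy to sketch: the autocorrelation of a quasi-periodic ECG window peaks at the beat-to-beat interval, so the strongest peak within physiological lag bounds gives the heart rate. A minimal numpy version (the 40-200 bpm bounds are a common choice, not necessarily the paper's):

    ```python
    import numpy as np

    def heart_rate_bpm(ecg_window, fs, min_bpm=40, max_bpm=200):
        """Estimate heart rate from one ECG window via autocorrelation.

        ecg_window : 1-D array of ECG samples
        fs         : sampling frequency in Hz
        """
        x = ecg_window - ecg_window.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
        lo = int(fs * 60.0 / max_bpm)      # shortest plausible beat interval
        hi = int(fs * 60.0 / min_bpm)      # longest plausible beat interval
        lag = lo + int(np.argmax(ac[lo:hi]))
        return 60.0 * fs / lag
    ```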

  16. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts, and existing iris recognition systems do not perform well on such images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
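    Robust PCA here means the low-rank plus sparse decomposition M ≈ L + S (principal component pursuit): L captures the shared iris structure, S absorbs occlusions and reflections. A compact sketch of the standard inexact-ALM iteration with singular-value thresholding, not necessarily the authors' exact solver:

    ```python
    import numpy as np

    def shrink(x, tau):
        """Soft thresholding: the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def robust_pca(M, n_iter=200, tol=1e-7):
        """Split M into low-rank L and sparse S:
        min ||L||_* + lam*||S||_1  subject to  M = L + S."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))              # standard PCP weight
        mu = 0.25 * m * n / np.abs(M).sum()         # common step-size heuristic
        Y = np.zeros_like(M)                        # Lagrange multiplier
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            # Singular-value thresholding updates the low-rank part
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            # Elementwise soft thresholding updates the sparse error part
            S = shrink(M - L + Y / mu, lam / mu)
            Y += mu * (M - L - S)                   # dual ascent on M = L + S
            if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                break
        return L, S
    ```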

  17. Robust Extreme Learning Machine With its Application to Indoor Positioning.

    PubMed

    Lu, Xiaoxuan; Zou, Han; Zhou, Hongming; Xie, Lihua; Huang, Guang-Bin

    2016-01-01

    The increasing demand for location-based services has spurred the rapid development of indoor positioning systems (IPSs), also referred to as indoor localization systems. However, the performance of IPSs suffers from noisy measurements. In this paper, two kinds of robust extreme learning machines (RELMs), corresponding to the close-to-mean constraint and the small-residual constraint, are proposed to address the issue of noisy measurements in IPSs. Based on whether the feature mapping in the extreme learning machine is explicit, we provide random-hidden-node and kernelized formulations of RELMs, respectively, via second-order cone programming. Furthermore, the computation of the covariance in feature space is discussed. Extensive simulations and real-world indoor localization experiments demonstrate that the proposed algorithms not only improve accuracy and repeatability, but also reduce the deviation and worst-case error of IPSs compared with other baseline algorithms.
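    The extreme learning machine underlying the robust variants has a simple closed form: hidden-layer weights are drawn at random, and only the output weights are trained, by a regularized least-squares solve. A minimal sketch of that baseline; the cone-programming robust constraints of the paper are omitted:

    ```python
    import numpy as np

    class ELM:
        """Basic extreme learning machine: random hidden layer + ridge solve."""

        def __init__(self, n_hidden=100, reg=1e-3, seed=0):
            self.n_hidden, self.reg = n_hidden, reg
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)     # random feature mapping

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = self._hidden(X)
            # Regularized least squares is the only training step
            self.beta = np.linalg.solve(
                H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ y)
            return self

        def predict(self, X):
            return self._hidden(X) @ self.beta

    # For indoor positioning, X would hold e.g. WiFi RSSI fingerprints and
    # y the corresponding 2-D coordinates.
    ```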

  18. Shadow Areas Robust Matching Among Image Sequence in Planetary Landing

    NASA Astrophysics Data System (ADS)

    Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin

    2016-12-01

    In this paper, an approach for robustly matching shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted as Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; it can extract more features from an area. Fourthly, to eliminate the influence of outliers, an improved RANSAC based on the Skinner Operation Condition is proposed to extract inliers. Finally, a series of experiments is conducted to test the performance of the proposed approach. The results show that it can maintain matching accuracy at a high level even when the differences among the images are substantial and no attitude measurements are supplied.
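    MA-SIFT and the Skinner-condition RANSAC are specific to this paper; the skeleton below shows the analogous standard pipeline with OpenCV stand-ins (MSER regions, plain SIFT descriptors, homography RANSAC) for orientation only:

    ```python
    import cv2
    import numpy as np

    def match_shadow_areas(img1, img2):
        """Match shadow regions between two grayscale images.

        MSER extracts the shadow-like stable regions; SIFT stands in for
        MA-SIFT, and OpenCV's homography RANSAC stands in for the paper's
        Skinner-condition RANSAC inlier extraction.
        """
        sift = cv2.SIFT_create()
        mser = cv2.MSER_create()

        def region_features(img):
            regions, _ = mser.detectRegions(img)            # candidate areas
            kps = [cv2.KeyPoint(float(x), float(y), 8.0)
                   for r in regions for x, y in r[::25]]    # subsample pixels
            return sift.compute(img, kps)

        kp1, des1 = region_features(img1)
        kp2, des2 = region_features(img2)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, [m for m, ok in zip(matches, mask.ravel()) if ok]
    ```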

  20. Simple robust autotuning rules for 2-DoF PI controllers.

    PubMed

    Vilanova, R; Alfaro, V M; Arrieta, O

    2012-01-01

    This paper addresses the problem of providing simple tuning rules for a Two-Degree-of-Freedom (2-DoF) PI controller (PI(2)) with robustness considerations. The introduction of robustness as a matter of primary concern is by now well established in the control community. Among the different ways of introducing a robustness constraint into the design stage, this paper uses the maximum sensitivity value as the design parameter. In order to deal with the well-known performance/robustness tradeoff, an analysis is first conducted to determine the lowest closed-loop time constant that guarantees a desired robustness. From that point, an analytical design assigns the load-disturbance dynamics, followed by tuning of the set-point weight factor so that the set-point-to-output dynamics match, as closely as possible, a first-order-plus-dead-time response. Simple tuning rules are generated by considering specific values of the maximum sensitivity. These tuning rules provide all the controller parameters, parameterized in terms of the open-loop normalized dead time, allowing the user to select a high-, medium-, or low-robustness closed-loop control system. The proposed autotuning expressions are then compared with other well-known tuning rules conceived using the same robustness measure, showing that the proposed approach guarantees the same robustness level while improving time-domain performance.
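    The 2-DoF structure itself is compact: the set-point weight β scales only the proportional action on the reference, so the servo response can be shaped without disturbing the regulatory (load-disturbance) tuning. A minimal discrete-time sketch of the control law u = Kp(β·r − y) + Ki∫(r − y)dt, with a simple clamping anti-windup added for practicality:

    ```python
    class TwoDofPI:
        """Discrete-time 2-DoF PI: u = Kp*(beta*r - y) + Ki * integral(r - y)."""

        def __init__(self, kp, ki, beta, dt, u_min=-1.0, u_max=1.0):
            self.kp, self.ki, self.beta, self.dt = kp, ki, beta, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0

        def update(self, r, y):
            self.integral += self.ki * (r - y) * self.dt
            # Clamp the integral state: rudimentary anti-windup
            self.integral = min(max(self.integral, self.u_min), self.u_max)
            u = self.kp * (self.beta * r - y) + self.integral
            return min(max(u, self.u_min), self.u_max)
    ```

    With β = 1 this reduces to a conventional PI controller; lowering β tempers set-point overshoot while leaving disturbance rejection unchanged.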