Science.gov

Sample records for accuracy selectivity robustness

  1. Geometrical constraints for robust tractography selection.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2013-11-01

    Tract-based analysis from DTI has become a widely employed procedure to study the white matter of the brain and its alterations in neurological and neurosurgical pathologies. Automatic tractography selection methods, in which a subset of detected tracts corresponding to a specific white matter structure is selected, are a key component of the DTI processing pipeline. Using automatic tractography selection, repeatable results free of intra- and inter-expert variability can be obtained rapidly, without the need for cumbersome manual segmentation. Many current approaches to automatic tractography selection rely on a prior registration procedure using an atlas; hence, these methods are likely very sensitive to the accuracy of that registration. In this paper we show that the performance of the registration step is critical to the overall result. This effect can in turn affect the calculation of scalar parameters derived subsequently from the selected tracts and often used in clinical practice; we show that such errors may be comparable in magnitude to the subtle differences found in clinical studies that differentiate between healthy and pathological subjects. As an alternative, we propose a tractography selection method based on geometrical constraints specific to each fiber bundle. Our experimental results show that the proposed approach performs with increased robustness and accuracy with respect to other approaches in the literature, particularly in the presence of imperfect registration. PMID:23707405

  2. Selection for Robustness in Mutagenized RNA Viruses

    PubMed Central

    Furió, Victoria; Holmes, Edward C; Moya, Andrés

    2007-01-01

    Mutational robustness is defined as the constancy of a phenotype in the face of deleterious mutations. Whether robustness can be directly favored by natural selection remains controversial. Theory and in silico experiments predict that, at high mutation rates, slow-replicating genotypes can potentially outcompete faster counterparts if they benefit from a higher robustness. Here, we experimentally validate this hypothesis, dubbed the “survival of the flattest,” using two populations of the vesicular stomatitis RNA virus. Characterization of fitness distributions and genetic variability indicated that one population showed a higher replication rate, whereas the other was more robust to mutation. The faster replicator outgrew its robust counterpart in standard competition assays, but the outcome was reversed in the presence of chemical mutagens. These results show that selection can directly favor mutational robustness and reveal a novel viral resistance mechanism against treatment by lethal mutagenesis. PMID:17571922
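
    To make the "survival of the flattest" effect concrete, here is a minimal two-genotype sketch; all rates and robustness values are invented for illustration, not taken from the VSV experiments. A fast but brittle replicator outgrows a slower, flatter one at low mutation rates, and the ranking reverses as the mutation rate rises.

        import numpy as np

        def competition(mu, generations=200):
            """Final frequency of genotype A after competition at genomic mutation rate mu."""
            r = np.array([2.0, 1.5])          # replication rates: A fast, B slow (assumed)
            nu = np.array([0.2, 0.6])         # fraction of mutations that are neutral (robustness)
            w = r * np.exp(-mu * (1.0 - nu))  # growth discounted by deleterious mutation load
            x = np.array([0.5, 0.5])
            for _ in range(generations):
                x = w * x
                x /= x.sum()                  # keep relative frequencies
            return x[0]

        for mu in (0.1, 1.0, 2.0, 3.0):
            print(f"mu={mu:.1f}  freq(A)={competition(mu):.3f}")
        # Low mu: the fast replicator A wins; high mu: the flatter genotype B takes over.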

  3. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: which of several numerical models best approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  4. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection focuses on the prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for so-called test individuals, breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared on simulated data, and the results point to the validity of our approach. The calculations are also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
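
    A hedged sketch of the cross-validation loop that underlies such accuracy studies, with ridge regression marker effects and accuracy taken as the correlation between predicted and true breeding values. The population sizes, QTL count, heritability and the shrinkage heuristic below are illustrative assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_test, n_markers, n_qtl, h2 = 300, 100, 1000, 50, 0.5

        X = rng.binomial(2, 0.3, size=(n_train + n_test, n_markers)).astype(float)
        beta = np.zeros(n_markers)
        beta[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(size=n_qtl)
        g = X @ beta                                        # true breeding values
        e = rng.normal(scale=np.sqrt(np.var(g) * (1 - h2) / h2), size=g.size)
        y = g + e                                           # phenotypes

        Xtr, Xte = X[:n_train], X[n_train:]
        lam = n_markers * (1 - h2) / h2                     # common ridge shrinkage heuristic
        bhat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ y[:n_train])
        gebv = Xte @ bhat                                   # predicted breeding values
        print("accuracy = cor(GEBV, true BV) =", round(np.corrcoef(gebv, g[n_train:])[0, 1], 3))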

  5. Selection for mutational robustness in finite populations.

    PubMed

    Forster, Robert; Adami, Christoph; Wilke, Claus O

    2006-11-21

    We investigate the evolutionary dynamics of a finite population of RNA sequences replicating on a neutral network. Despite the lack of differential fitness between viable sequences, we observe typical properties of adaptive evolution, such as increase of mean fitness over time and punctuated-equilibrium transitions, after initial mutation-selection balance has been reached. We find that a product of population size and mutation rate of approximately 30 or larger is sufficient to generate selection pressure for mutational robustness, even if the population size is orders of magnitude smaller than the neutral network on which the population resides. Our results show that quasispecies effects and neutral drift can occur concurrently, and that the relative importance of each is determined by the product of population size and mutation rate. PMID:16901510

  6. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks

    PubMed Central

    Ben-Tabou de-Leon, Smadar

    2016-01-01

    Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt-βcatenin signaling pathway patterns the primary embryonic axis, while the BMP signaling pathway patterns the secondary embryonic axis, in the sea urchin embryo and across Bilateria. Interestingly, in the sea urchin, in both cases the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop that locks down the pigment specification state. I propose that the use of compound positive feedback circuitry provides the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change. PMID:26913048

  7. Robustness versus accuracy in shock-wave computations

    NASA Astrophysics Data System (ADS)

    Gressier, Jérémie; Moschetta, Jean-Marc

    2000-06-01

    Despite constant progress in the development of upwind schemes, some failings still remain. Quirk recently reported (Quirk JJ. A contribution to the great Riemann solver debate. International Journal for Numerical Methods in Fluids 1994; 18: 555-574) that approximate Riemann solvers, which share the exact capture of contact discontinuities, generally suffer from such failings. One of these is the odd-even decoupling that occurs along planar shocks aligned with the mesh. First, a few results on some failings are given, namely the carbuncle phenomenon and the kinked Mach stem. Then, following Quirk's analysis of Roe's scheme, general criteria are derived to predict the odd-even decoupling. This analysis is applied to Roe's scheme (Roe PL, Approximate Riemann solvers, parameter vectors, and difference schemes, Journal of Computational Physics 1981; 43: 357-372), the Equilibrium Flux Method (Pullin DI, Direct simulation methods for compressible inviscid ideal gas flow, Journal of Computational Physics 1980; 34: 231-244), the Equilibrium Interface Method (Macrossan MN, Oliver RI, A kinetic theory solution method for the Navier-Stokes equations, International Journal for Numerical Methods in Fluids 1993; 17: 177-193) and the AUSM scheme (Liou MS, Steffen CJ, A new flux splitting scheme, Journal of Computational Physics 1993; 107: 23-39). Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.

  8. Robust feature selection for microarray data based on multicriterion fusion.

    PubMed

    Yang, Feng; Mao, K Z

    2011-01-01

    Feature selection often aims to select a compact feature subset to build a pattern classifier with reduced complexity, so as to achieve improved classification performance. From the perspective of pattern analysis, producing a stable or robust solution is also a desired property of a feature selection algorithm. However, the issue of robustness is often overlooked in feature selection. In this study, we analyze the robustness issue existing in feature selection for high-dimensional and small-sized gene-expression data, and propose to improve the robustness of a feature selection algorithm by using multiple feature selection evaluation criteria. Based on this idea, a multicriterion fusion-based recursive feature elimination (MCF-RFE) algorithm is developed with the goal of improving both the classification performance and the stability of feature selection results. Experimental studies on five gene-expression data sets show that the MCF-RFE algorithm outperforms the commonly used benchmark feature selection algorithm SVM-RFE. PMID:21566255
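
    A simplified sketch of the multicriterion-fusion idea; the two criteria and the rank-fusion rule below are stand-ins for those in the paper. Features are scored under several criteria, the ranks are fused, and the worst-ranked features are eliminated recursively.

        import numpy as np

        def t_stat(X, y):
            a, b = X[y == 0], X[y == 1]
            s = np.sqrt(a.var(0) / len(a) + b.var(0) / len(b)) + 1e-12
            return np.abs(a.mean(0) - b.mean(0)) / s

        def corr_crit(X, y):
            yc, Xc = y - y.mean(), X - X.mean(0)
            return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

        def mcf_rfe(X, y, n_keep=20, drop=0.2):
            idx = np.arange(X.shape[1])
            while len(idx) > n_keep:
                # fuse rankings from both criteria (rank 0 = best feature)
                ranks = sum(np.argsort(np.argsort(-c(X[:, idx], y))) for c in (t_stat, corr_crit))
                keep = np.argsort(ranks)[: max(n_keep, int(len(idx) * (1 - drop)))]
                idx = idx[np.sort(keep)]
            return idx

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 500))
        y = rng.integers(0, 2, 60)
        X[:, :5] += y[:, None] * 1.5            # plant five informative "genes"
        print("selected features:", mcf_rfe(X, y))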

  9. Selected issues on robust testing for normality

    NASA Astrophysics Data System (ADS)

    Moder, Karl; Střelec, Luboš; Stehlík, Milan

    2013-10-01

    The normal distribution is the most widely used distribution in statistics, dating back to Carl Friedrich Gauss. It is used in many branches of statistics; however, testing for normality is not well understood: which deviations from theoretical normality are still acceptable for a given statistical procedure? This contribution aims at a better understanding of such problems. In particular, we study how much the violation of ANOVA prerequisites affects the underlying inference. Clearly, one should establish, for a given setup, the degree of robustness under which the statistical analysis remains reliable. We also study the influence of outliers in a dataset, with particular focus on the trade-off between power and robustness.
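
    A small Monte Carlo sketch of the kind of robustness question raised above: the rejection rate of the Shapiro-Wilk test on clean normal samples versus samples contaminated by a few inflated-variance outliers. The contamination scheme and sample sizes are assumptions for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n, reps, eps = 50, 2000, 0.04          # sample size, replicates, 4% contamination

        def reject_rate(contaminated):
            hits = 0
            for _ in range(reps):
                x = rng.normal(size=n)
                if contaminated:
                    k = rng.binomial(n, eps)
                    x[:k] = rng.normal(0, 5, size=k)    # inflated-variance outliers
                hits += stats.shapiro(x).pvalue < 0.05
            return hits / reps

        print("rejection rate under clean normality:", reject_rate(False))   # near 0.05
        print("rejection rate with outliers        :", reject_rate(True))    # well above 0.05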

  10. Robust quantification of orientation selectivity and direction selectivity

    PubMed Central

    Mazurek, Mark; Kager, Marisa; Van Hooser, Stephen D.

    2014-01-01

    Neurons in the visual cortex of all examined mammals exhibit orientation or direction tuning. New imaging techniques are allowing the circuit mechanisms underlying orientation and direction selectivity to be studied with clarity that was not possible a decade ago. However, these new techniques bring new challenges: robust quantitative measurements are needed to evaluate the findings from these studies, which can involve thousands of cells of varying response strength. Here we show that traditional measures of selectivity such as the orientation index (OI) and direction index (DI) are poorly suited for quantitative evaluation of orientation and direction tuning. We explore several alternative methods for quantifying tuning and for addressing a variety of questions that arise in studies on orientation- and direction-tuned cells and cell populations. We provide recommendations for which methods are best suited to which applications and we offer tips for avoiding potential pitfalls in applying these methods. Our goal is to supply a solid quantitative foundation for studies involving orientation and direction tuning. PMID:25147504
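
    As one concrete example of the vector-averaging measures such studies favor over the raw OI/DI, the following sketch computes circular variance in orientation and direction space for an invented tuning curve; the exact estimators recommended in the paper may differ.

        import numpy as np

        def circular_variance(theta_deg, resp):
            """1 - |sum R(t) e^{2it}| / sum R(t): 0 = sharply tuned, 1 = untuned."""
            th = np.deg2rad(theta_deg)
            return 1 - np.abs(np.sum(resp * np.exp(2j * th))) / np.sum(resp)

        def dir_circular_variance(theta_deg, resp):
            """Direction-space analogue, using e^{it} instead of e^{2it}."""
            th = np.deg2rad(theta_deg)
            return 1 - np.abs(np.sum(resp * np.exp(1j * th))) / np.sum(resp)

        angles = np.arange(0, 360, 22.5)                                  # stimulus directions
        resp = 1 + 9 * np.exp(3 * (np.cos(np.deg2rad(angles - 90)) - 1))  # toy curve, peak at 90 deg
        print("orientation circular variance:", round(circular_variance(angles, resp), 3))
        print("direction circular variance  :", round(dir_circular_variance(angles, resp), 3))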

  11. Turning science on robust cattle into improved genetic selection decisions.

    PubMed

    Amer, P R

    2012-04-01

    More robust cattle have the potential to increase farm profitability, improve animal welfare, reduce the contribution of ruminant livestock to greenhouse gas emissions and decrease the risk of food shortages in the face of increased variability in the farm environment. Breeding is a powerful tool for changing the robustness of cattle; however, insufficient recording of breeding goal traits and the selection of animals at younger ages tend to favour genetic change in productivity traits relative to robustness traits. This paper extends a previously proposed theory of artificial evolution to demonstrate, using deterministic simulation, how the choice of breeding scheme design can be used as a tool to manipulate the direction of genetic progress while the breeding goal remains focussed on the factors motivating individual farm decision makers. Particular focus was placed on the transition from progeny testing or mass selection to genomic selection breeding strategies. Transition to genomic selection from a breeding strategy in which candidates are selected before progeny records become available was shown to be highly likely to favour genetic progress in robustness traits relative to productivity traits. This was shown even with modest numbers of animals available for training and when heritability for robustness traits was only slightly lower than that for productivity traits. When transitioning from progeny testing to a genomic selection strategy without progeny testing, it was shown that there is a significant risk that robustness traits could become less influential in selection relative to productivity traits. Augmentation of training populations using genotyped cows and support for industry-wide improvements in phenotypic recording of robustness traits were put forward as investment opportunities for stakeholders wishing to facilitate the application of science on robust cattle into improved genetic selection schemes. PMID:22436269

  12. Genomic selection in forage breeding: accuracy and methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The main benefits expected from genomic selection in forage grasses and legumes are to increase selection accuracy, reduce evaluation costs per genotype, and reduce cycle time. Aiming at designing a training population and first generations of selection, deterministic equations were used to compare ...

  13. Model selection based on robustness criterion with measurement application

    NASA Astrophysics Data System (ADS)

    Brahim-Belhouari, Sofiane; Fleury, Gilles; Davoust, Marie-Eve

    1999-06-01

    Huber's approach to robust estimation is highly fruitful for solving estimation problems with contaminated data or under incomplete information about the error structure. A simple selection procedure based on robustness to deviations of the error distribution from the assumed one is proposed. A minimax M-estimator is used to estimate the parameters and the measurement quantity efficiently. A performance deviation criterion is computed by means of the Monte Carlo method, improved by Latin Hypercube Sampling. The selection procedure is applied to a real measurement problem: groove dimensioning using Remote Field Eddy Current inspection.

  14. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  15. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
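
    The "well studied absolute orientation problem" mentioned above has a closed-form solution; the sketch below uses the SVD-based (Kabsch/Umeyama) formulation for the rotation and translation between matched 3-D point sets. It is a generic stand-in, not the paper's mutual-information pole alignment, and the points are synthetic.

        import numpy as np

        def absolute_orientation(A, B):
            """Rigid R, t such that B ~= R @ A + t, for 3xN matched point sets."""
            ca, cb = A.mean(1, keepdims=True), B.mean(1, keepdims=True)
            U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
            R = U @ S @ Vt
            return R, (cb - R @ ca).ravel()

        rng = np.random.default_rng(11)
        A = rng.normal(size=(3, 10))                       # synthetic control points
        ang = 0.3
        Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0, 0.0, 1.0]])
        B = Rz @ A + np.array([[1.0], [2.0], [0.5]])       # rotated and translated copies
        R, t = absolute_orientation(A, B)
        print("rotation recovered:", np.allclose(R, Rz), " translation:", t.round(3))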

  16. An efficient camera calibration technique offering robustness and accuracy over a wide range of lens distortion.

    PubMed

    Rahman, Taufiqur; Krouglicof, Nicholas

    2012-02-01

    In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or an a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher order radial and tangential distortion terms improves calibration accuracy. PMID:21843988

  17. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

    In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of this well-established research software, the first among its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling that came with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that the elimination of outliers was cumbersome with the traditional method, since repeated trials were needed and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability; at the same time, it might be considered the most efficient one, with the highest breakdown point. In our analysis, we used a slightly different version of the median method, as introduced in Tut et al. 2013. Hence, we were able to remove suspected outliers in one run; with traditional methods, these were more problematic to remove from the solutions produced using the latest version of the software. References: Hayal AG, Sanli DU, Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42, 2013. Tut İ, Sanli DU, Erdogan B, Hekimoglu S, Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331.
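
    A minimal sketch of a median-based outlier screen of the kind described (the exact variant of Tut et al. 2013 differs in detail): the median absolute deviation gives a robust scale estimate, and everything beyond a few robust sigmas of the median is flagged in a single pass. The simulated session heights are stand-in data.

        import numpy as np

        def mad_outliers(x, k=3.0):
            """Flag points farther than k robust sigmas from the median, in one pass."""
            med = np.median(x)
            sigma = 1.4826 * np.median(np.abs(x - med))   # MAD -> Gaussian-consistent sigma
            return np.abs(x - med) > k * sigma

        rng = np.random.default_rng(3)
        heights = rng.normal(0.0, 5.0, 200)               # mm, simulated session-height errors
        heights[:4] += np.array([60.0, -55.0, 80.0, -70.0])   # inject gross errors
        clean = heights[~mad_outliers(heights)]
        print(f"removed {heights.size - clean.size} suspected outliers in a single run")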

  18. Accuracy of genomic selection for BCWD resistance in rainbow trout

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonids. In this study, we aimed to (1) predict genomic breeding values (GEBV) by genotyping training (n=583) and validation samples (n=53) with a SNP50K chip; and (2) assess the accuracy of genomic selection (GS) for BCWD r...

  19. Selective Gammatone Envelope Feature for Robust Sound Event Recognition

    NASA Astrophysics Data System (ADS)

    Leng, Yi Ren; Tran, Huy Dat; Kitaoka, Norihide; Li, Haizhou

    Conventional features for Automatic Speech Recognition and Sound Event Recognition, such as Mel-Frequency Cepstral Coefficients (MFCCs), have been shown to perform poorly in noisy conditions. We introduce an auditory feature based on the gammatone filterbank, the Selective Gammatone Envelope Feature (SGEF), for robust sound event recognition, in which channel selection and the filterbank envelope are used to reduce the effect of noise for specific noise environments. In experiments with Hidden Markov Model (HMM) recognizers, we show that our feature outperforms MFCCs significantly in four different noisy environments at various signal-to-noise ratios.
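
    A rough sketch of the gammatone front end (channel selection and the exact SGEF construction are omitted): generate a 4th-order gammatone impulse response per channel, filter by convolution, and take a crude magnitude envelope. The sampling rate, channel set and test tone are assumptions.

        import numpy as np

        fs = 16000  # assumed sampling rate, Hz

        def gammatone_ir(fc, dur=0.025):
            """4th-order gammatone impulse response: t^3 e^{-2 pi b t} cos(2 pi fc t)."""
            t = np.arange(int(dur * fs)) / fs
            erb = 24.7 * (4.37 * fc / 1000 + 1)      # equivalent rectangular bandwidth
            g = t ** 3 * np.exp(-2 * np.pi * 1.019 * erb * t) * np.cos(2 * np.pi * fc * t)
            return g / np.abs(g).sum()

        sig = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 s test tone at 1 kHz
        for fc in (500, 1000, 2000):
            env = np.abs(np.convolve(sig, gammatone_ir(fc), mode="same"))
            print(f"{fc:5d} Hz channel, mean envelope: {env.mean():.4f}")
        # The 1 kHz channel responds most strongly to the 1 kHz tone, as expected.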

  20. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.

  1. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, a limited number of published echoPIV studies are clinical, demonstrating that the method is not broadly accepted within the medical community. This is due to the fact that the use of contrast agents is typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, an evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.
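
    A sketch of the core MWE idea under simplifying assumptions (single interrogation window, synthetic speckle, circular shift): correlation planes from a few consecutive frame pairs are ensemble-averaged before peak detection, so noise that is uncorrelated across pairs cancels.

        import numpy as np

        def cross_corr(a, b):
            """FFT-based cross-correlation plane, zero lag at the center."""
            A = np.fft.rfft2(a - a.mean())
            B = np.fft.rfft2(b - b.mean())
            return np.fft.fftshift(np.fft.irfft2(np.conj(A) * B, s=a.shape))

        def mwe_displacement(frames, window=5):
            planes = [cross_corr(frames[i], frames[i + 1]) for i in range(window)]
            c = np.mean(planes, axis=0)                    # ensemble-average the planes
            peak = np.unravel_index(np.argmax(c), c.shape)
            return np.array(peak) - np.array(c.shape) // 2

        rng = np.random.default_rng(4)
        base = rng.normal(size=(64, 64))                   # synthetic speckle pattern
        frames = [np.roll(base, (0, 3 * i), axis=(0, 1)) + 0.8 * rng.normal(size=(64, 64))
                  for i in range(6)]                       # 3 px/frame motion plus noise
        print("estimated displacement (dy, dx):", mwe_displacement(frames))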

  2. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

    In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems, such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling, and finally decision making. We also study the impact of geometrical information on BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: we use capital letters X to denote scalar random variables and boldface X to denote vector random variables, with corresponding small letters x and x denoting their realisations. We write X ∼ pX(x), or simply X ∼ p(x), to indicate that a random variable X is distributed according to pX(x). N(μ, σ²X) stands for the Gaussian distribution with mean μ and variance σ²X; B(L, Pb) denotes the binomial distribution with sequence length L and probability of success Pb. ‖·‖ denotes the Euclidean vector norm, Q(·) stands for the Q-function, D(·‖·) denotes the divergence, and E{·} denotes the expectation.

  3. The Signatures of Selection for Translational Accuracy in Plant Genes

    PubMed Central

    Porceddu, Andrea; Zenoni, Sara; Camiolo, Salvatore

    2013-01-01

    Little is known about the natural selection of synonymous codons within the coding sequences of plant genes. We analyzed the distribution of synonymous codons within plant coding sequences and found that preferred codons tend to encode the more conserved and functionally important residues of plant proteins. This was consistent among several synonymous codon families and applied to genes with different expression profiles and functions. Most of the randomly chosen alternative sets of codons scored weaker associations than the actual sets of preferred codons, suggesting that codon position within plant genes and codon usage bias have coevolved to maximize translational accuracy. All these findings are consistent with the mistranslation-induced protein misfolding theory, which predicts the natural selection of highly preferred codons more frequently at sites where translation errors could compromise protein folding or functionality. Our results will provide an important insight in future studies of protein folding, molecular evolution, and transgene design for optimal expression. PMID:23695187

  4. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite-memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, more than half of which are realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite-order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed-order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where one sample is formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
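
    A toy version of the selection step, with fixed-order (order 1) Markov chains standing in for the paper's variable length Markov chains, and a crude symmetrised relative entropy and threshold as assumptions: samples whose estimated laws are mutually close form the majority group that is kept.

        import numpy as np

        rng = np.random.default_rng(10)

        def sample_chain(P, n):
            x = [0]
            for _ in range(n - 1):
                x.append(rng.choice(len(P), p=P[x[-1]]))
            return np.array(x)

        def trans_hat(x, k=2):
            C = np.ones((k, k))                      # transition counts, Laplace-smoothed
            np.add.at(C, (x[:-1], x[1:]), 1)
            return C / C.sum(1, keepdims=True)

        def sym_kl(P, Q):
            """Crude symmetrised relative entropy between transition matrices."""
            return np.sum(P * np.log(P / Q) + Q * np.log(Q / P))

        Q0 = np.array([[0.9, 0.1], [0.2, 0.8]])     # law of the clean samples
        Qc = np.array([[0.5, 0.5], [0.5, 0.5]])     # contaminating law
        chains = [sample_chain(Q0, 2000) for _ in range(7)] + \
                 [sample_chain(Qc, 2000) for _ in range(3)]
        Ps = [trans_hat(x) for x in chains]
        D = np.array([[sym_kl(P, Q) for Q in Ps] for P in Ps])
        close = (D < 0.05).sum(1)                   # how many samples each one matches
        print("selected samples:", np.where(close > len(Ps) / 2)[0])   # the 7 clean ones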

  5. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space systems designs as well as many new commercial products. The image that the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded, together with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as with the requirement for robustness.

  6. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories, subjective and analytical, depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
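
    The three performance measures are straightforward to compute once a baseline record exists; the daily series below are made-up numbers for illustration.

        import numpy as np

        baseline = np.array([10.0, 9.5, 9.0, 8.8, 9.2, 10.5, 11.0])  # baseline discharge, m^3/s
        estimate = np.array([9.6, 9.4, 9.3, 8.5, 9.9, 10.2, 11.4])   # estimated record

        errors = estimate - baseline
        print("average discharge (baseline, estimate):", baseline.mean(), estimate.mean())
        print("mean daily error              :", round(errors.mean(), 3))
        print("std deviation of daily errors :", round(errors.std(ddof=1), 3))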

  7. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding

    PubMed Central

    Grenier, Cécile; Cao, Tuong-Vi; Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte; Ahmadi, Nourollah

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first degree interaction was observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446

  8. Analysis of the Accuracy and Robustness of the Leap Motion Controller

    PubMed Central

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-01-01

    The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of the Leap Motion Controller. The main focus of attention is on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between the desired 3D position and the average measured position below 0.2 mm was obtained for static setups, and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction. PMID:23673678

  9. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g., the parameters) have on the model output (e.g., simulated streamflow). It is a valuable diagnostic tool for model understanding and improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be chosen carefully. Yet there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN) and a variance-based method (Sobol). The convergence and variability of the sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of the sensitivity indices, even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
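
    A sketch of the bootstrap assessment of sensitivity-index robustness: a cheap regression-based index on an invented 3-parameter model stands in for the RSA/Morris/PAWN/Sobol estimators actually compared in the study.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 500
        X = rng.uniform(size=(n, 3))                       # sampled parameters
        y = 4 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, n)

        def src_indices(X, y):
            """Standardised regression coefficients as a cheap sensitivity index."""
            b, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
            return b[1:] * X.std(0) / y.std()

        est = src_indices(X, y)
        boot = []
        for _ in range(1000):
            i = rng.integers(0, n, n)                      # resample model runs with replacement
            boot.append(src_indices(X[i], y[i]))
        lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
        for j in range(3):
            print(f"S{j + 1} = {est[j]:.2f}, 95% CI [{lo[j]:.2f}, {hi[j]:.2f}]")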

  10. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism

    PubMed Central

    Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng

    2016-01-01

    To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clean, whereas real-world data contain significant noise and irrelevant patterns; consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific objects being tracked. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object rather than not, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. PMID:27575684

  11. Accuracy and robustness of a simple algorithm to measure vessel diameter from B-mode ultrasound images.

    PubMed

    Hunt, Brian E; Flavin, Daniel C; Bauschatz, Emily; Whitney, Heather M

    2016-06-01

    Measurement of changes in arterial vessel diameter can be used to assess the state of cardiovascular health, but the use of such measurements as biomarkers is contingent upon the accuracy and robustness of the measurement. This work presents a simple algorithm for measuring diameter from B-mode images derived from vascular ultrasound. The algorithm is based upon Gaussian curve fitting and a Viterbi search process. We assessed the accuracy of the algorithm by measuring the diameter of a digital reference object (DRO) and of ultrasound-derived images of a carotid artery. We also assessed the robustness of the algorithm by manipulating the quality of the image. Across a broad range of signal-to-noise ratios and with varying image edge error, the algorithm measured vessel diameter within 0.7% of the creation dimensions of the DRO. A similar level of difference (0.8%) was observed when an ultrasound image was used. When the SNR dropped to 18 dB, measurement error increased to 1.3%. When edge position was varied by as much as 10%, measurement error remained between 0.68 and 0.75%. All of these errors fall well within the margin of error established by the medical physics community for quantitative ultrasound measurements. We conclude that this simple algorithm provides consistent and accurate measurement of lumen diameter from B-mode images across a broad range of image quality. PMID:27055985
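
    A sketch of the Gaussian-fitting half of such an algorithm (the Viterbi search that enforces consistency across scan lines is omitted): each wall is located with subpixel precision by fitting a Gaussian to its intensity peak in a local window. The synthetic profile and window sizes are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(y, a, mu, sig, c):
            return a * np.exp(-0.5 * ((y - mu) / sig) ** 2) + c

        y = np.arange(100, dtype=float)                    # pixel positions along a scan line
        true_walls = (30.2, 70.8)                          # subpixel wall locations
        rng = np.random.default_rng(6)
        profile = (gauss(y, 1.0, true_walls[0], 2.0, 0.1)
                   + gauss(y, 1.0, true_walls[1], 2.0, 0.0)
                   + rng.normal(0, 0.03, y.size))          # two wall echoes plus noise

        walls = []
        for guess in (25.0, 75.0):                         # coarse wall guesses
            win = np.abs(y - guess) < 12                   # fit only near each wall
            p, _ = curve_fit(gauss, y[win], profile[win], p0=(1.0, guess, 2.0, 0.0))
            walls.append(p[1])
        print("diameter = %.2f px (true %.2f px)"
              % (abs(walls[1] - walls[0]), true_walls[1] - true_walls[0]))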

  12. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy.

    PubMed

    Du, Weiliang; Yang, James

    2009-02-01

    Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation. PMID:19124954
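
    A minimal Hough-style center finder for a circular field of approximately known radius, with subpixel refinement by an intensity-weighted centroid of the accumulator peak; the published algorithm differs in detail, and the edge points here are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)
        cx, cy, r = 50.37, 40.81, 20.0                     # ground-truth center and radius
        t = rng.uniform(0, 2 * np.pi, 400)
        ex = cx + r * np.cos(t) + rng.normal(0, 0.3, t.size)   # noisy edge points
        ey = cy + r * np.sin(t) + rng.normal(0, 0.3, t.size)

        acc = np.zeros((100, 100))                         # accumulator over center candidates
        phi = np.linspace(0, 2 * np.pi, 180, endpoint=False)
        for x, y in zip(ex, ey):                           # each edge point votes on a circle
            ix = np.round(x - r * np.cos(phi)).astype(int)
            iy = np.round(y - r * np.sin(phi)).astype(int)
            ok = (ix >= 0) & (ix < 100) & (iy >= 0) & (iy < 100)
            np.add.at(acc, (iy[ok], ix[ok]), 1)

        py, px = np.unravel_index(np.argmax(acc), acc.shape)
        win = acc[py - 2:py + 3, px - 2:px + 3]            # 5x5 window around the peak
        gy, gx = np.mgrid[py - 2:py + 3, px - 2:px + 3]
        sx, sy = (win * gx).sum() / win.sum(), (win * gy).sum() / win.sum()
        print(f"estimated center ({sx:.2f}, {sy:.2f}) vs true ({cx}, {cy})")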

  13. Balancing accuracy and efficiency in selecting vibrational configuration interaction basis states using vibrational perturbation theory

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-08-01

    This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(Nmode^6) to O(Nmode^9) in run time and O(Nmode^6) to O(Nmode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.

  14. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., segmentation of a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both the accuracy and the robustness of serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399

  15. Robust Texture Image Representation by Scale Selective Local Binary Patterns.

    PubMed

    Guo, Zhenhua; Wang, Xingzheng; Zhou, Jie; You, Jane

    2016-02-01

    Local binary pattern (LBP) has been used successfully in computer vision and pattern recognition applications such as texture recognition. It effectively addresses grayscale and rotation variation; however, it fails to achieve desirable performance for texture classification under scale transformation. In this paper, a new method based on dominant LBPs in scale space is proposed to address scale variation in texture classification. First, a scale space of a texture image is derived by a Gaussian filter. Then, a histogram of pre-learned dominant LBPs is built for each image in the scale space. Finally, for each pattern, the maximal frequency among the different scales is taken as the scale-invariant feature. Extensive experiments on five public texture databases (University of Illinois at Urbana-Champaign, Columbia Utrecht Database, Kungliga Tekniska Högskolan-Textures under varying Illumination, Pose and Scale, University of Maryland, and Amsterdam Library of Textures) validate the efficiency of the proposed feature extraction scheme. Coupled with the nearest subspace classifier, the proposed method yields competitive results: 99.36%, 99.51%, 99.39%, 99.46%, and 99.71% for UIUC, CUReT, KTH-TIPS, UMD, and ALOT, respectively. Meanwhile, the proposed method inherits the simple and efficient merits of LBP; for example, it can extract a scale-robust feature for a 200×200 image within 0.24 s, which is applicable to many real-time applications. PMID:26685235
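
    The scale-selection rule itself is compact; the sketch below (assuming scikit-image is available for the LBP operator, and using uniform LBP bins as a stand-in for the pre-learned dominant set) builds histograms over a Gaussian scale space and keeps the per-pattern maximum across scales.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.feature import local_binary_pattern

        def scale_selective_lbp(img, P=8, R=1, sigmas=(0, 1, 2, 4)):
            hists = []
            for s in sigmas:                          # Gaussian scale space
                im = gaussian_filter(img, s) if s > 0 else img
                lbp = local_binary_pattern(im, P, R, method="uniform")
                h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
                hists.append(h)
            return np.max(hists, axis=0)              # per-pattern max over scales

        rng = np.random.default_rng(8)
        img = (rng.uniform(size=(200, 200)) * 255).astype(np.uint8)   # stand-in texture
        print(scale_selective_lbp(img).round(3))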

  16. Integration of flow studies for robust selection of mechanoresponsive genes.

    PubMed

    Maimari, Nataly; Pedrigi, Ryan M; Russo, Alessandra; Broda, Krysia; Krams, Rob

    2016-03-01

    Blood flow is an essential contributor to plaque growth, composition and initiation. It is sensed by endothelial cells, which react to blood flow by changing the expression of > 1000 genes. The sheer number of genes implies that genomic techniques are needed to unravel their response in disease. Individual genomic studies have been performed but lack sufficient power to identify subtle changes in gene expression. In this study, we investigated whether a systematic meta-analysis of available microarray studies can improve their consistency. We identified 17 studies using microarrays, of which six were performed in vivo and 11 in vitro. The in vivo studies were disregarded due to the lack of a defined shear profile. Of the in vitro studies, a cross-platform integration of human studies (HUVECs in flow cells) showed high concordance (> 90%). The human data set identified > 1600 genes as shear responsive, more than any other study, and this gene set contained all known mechanosensitive genes and pathways. A detailed network analysis indicated a power-law degree distribution (i.e., the presence of hubs), without a hierarchical organisation. The average cluster coefficient was high, and further analysis indicated an aggregation of 3- and 4-element motifs, indicating a high prevalence of feedback and feed-forward loops, similar to prokaryotic cells. In conclusion, this initial study presents a novel method for integrating human-based mechanosensitive studies to increase their power. The resulting robust network was large, contained all known mechanosensitive pathways, and its structure revealed hubs and a large aggregate of feedback and feed-forward loops. PMID:26842798

  19. Multiple trait genomic selection methods increase genetic value prediction accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection predicts genetic values with genome-wide markers. It is rapidly emerging in plant breeding and is widely implemented in animal breeding. Genetic correlations between quantitative traits are pervasive in many breeding programs. These correlations indicate that measurements of one tr...

  20. Selecting Reliable and Robust Freshwater Macroalgae for Biomass Applications

    PubMed Central

    Lawton, Rebecca J.; de Nys, Rocky; Paul, Nicholas A.

    2013-01-01

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m−2 day−1), lowest ash content (3–8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO2 across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential

  1. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  2. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    PORTER, REID B.; LOVELAND, ROHAN; ROSTEN, ED

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables related to the object's appearance (e.g., orientation and pose) vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance of the tracking system and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  3. Selection Strategies for Univariate Loglinear Smoothing Models and Their Effect on Equating Function Accuracy

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul W.

    2009-01-01

    In this study, we compared 12 statistical strategies proposed for selecting loglinear models for smoothing univariate test score distributions and for enhancing the stability of equipercentile equating functions. The major focus was on evaluating the effects of the selection strategies on equating function accuracy. Selection strategies' influence…

  4. Enhancement of the accuracy of the (P-ω) method through the implementation of a nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Kfoury, G. A.; Chalhoub, N. G.; Henein, N. A.; Bryzik, W.

    2006-04-01

    The (P-ω) method is a model-based approach developed for determining the instantaneous friction torque in internal combustion engines. This scheme requires measurements of the cylinder gas pressure, the engine load torque, the crankshaft angular displacement and its time derivatives. The effects of the higher order dynamics of the crank-slider mechanism on the measured angular motion of the crankshaft have caused the (P-ω) method to yield erroneous results, especially at high engine speeds. To alleviate this problem, a nonlinear sliding mode observer has been developed herein to accurately estimate the rigid and flexible motions of the piston-assembly/connecting-rod/crankshaft mechanism of a single cylinder engine. The observer has been designed to yield a robust performance in the presence of disturbances and modeling imprecision. The digital simulation results, generated under transient conditions representing a decrease in the engine speed, have illustrated the rapid convergence of the estimated state variables to the actual ones in the presence of both structured and unstructured uncertainties. Moreover, this study has proven that the use of the estimated rather than the measured angular displacement of the crankshaft and its time derivatives can significantly improve the accuracy of the (P-ω) method in determining the instantaneous engine friction torque.

  5. Some scale-free networks could be robust under selective node attacks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Huang, Dan; Li, Deyi; Chen, Guisheng; Lan, Wenfei

    2011-04-01

    It is a mainstream idea that scale-free networks would be fragile under selective attacks. The Internet is a typical scale-free network in the real world, yet it never collapses under the selective attacks of computer viruses and hackers. This phenomenon differs from the deduction of the idea above, because that idea assumes the same cost to delete an arbitrary node. Hence this paper discusses the behavior of scale-free networks under selective node attacks with different costs. Through experiments on five complex networks, we show that scale-free networks can be robust under selective node attacks; furthermore, the more compact the network and the larger the average degree, the more robust the network is, and with the same average degree, the more compact the network, the more robust it is. This result would enrich the theory of network invulnerability, can be used to build robust social, technological and biological networks, and also has the potential to help find drug targets.

  6. Robust cyclohexanone selective chemiresistors based on single-walled carbon nanotubes.

    PubMed

    Frazier, Kelvin M; Swager, Timothy M

    2013-08-01

    Functionalized single-walled carbon nanotube (SWCNT)-based chemiresistors are reported as highly robust and sensitive gas sensors for the selective detection of cyclohexanone, a target analyte for explosive detection. The trifunctional selector has three important properties: it noncovalently functionalizes SWCNTs through cofacial π-π interactions, it binds to cyclohexanone via hydrogen bonding (mechanistic studies were performed), and it improves the overall robustness of SWCNT-based chemiresistors (e.g., to humidity and heat). Our sensors produced reversible and reproducible responses in less than 30 s to 10 ppm of cyclohexanone and displayed an average theoretical limit of detection (LOD) of 5 ppm. PMID:23886453

  7. Evidence of a Direct Evolutionary Selection for Strong Folding and Mutational Robustness Within HIV Coding Regions.

    PubMed

    Goz, Eli; Tuller, Tamir

    2016-08-01

    A large number of studies have demonstrated the importance of different HIV RNA structural elements at all stages of the viral life cycle. Nevertheless, the significance of many of these structures is unknown, and plausibly new regions containing RNA structure-mediated regulatory signals remain to be identified. An important characteristic of genomic regions carrying functionally significant secondary structures is their mutational robustness, that is, the extent to which the underlying secondary structure of a sequence remains constant in spite of mutations. Structural robustness to mutations is expected to be important in the case of functional RNA structures in viruses with high mutation rates; it may prevent fitness loss due to disruption of possibly functional conformations, pointing to the specific significance of the corresponding genomic region. In the current work, we perform a genome-wide computational analysis to detect signals of a direct evolutionary selection for strong folding and RNA structure-based mutational robustness within HIV coding sequences. We provide evidence that specific regions of HIV structural genes undergo an evolutionary selection for strong folding; in addition, we demonstrate that the HIV Rev responsive element seems to undergo a direct evolutionary selection for increased secondary structure robustness to point mutations. We believe that our analysis may enable a better understanding of viral evolutionary dynamics at the RNA structural level and may benefit practical efforts to engineer antiviral vaccines and novel therapeutic approaches. PMID:27347769

  8. Simulation-based planning for peacekeeping operations: selection of robust plans

    NASA Astrophysics Data System (ADS)

    Cekova, Cvetelina; Chandrasekaran, B.; Josephson, John; Pantaleev, Aleksandar

    2006-05-01

    This research is part of a proposed shift in emphasis in decision support from optimality to robustness. Computer simulation is emerging as a useful tool in planning courses of action (COAs). Simulations require domain models, but there is an inevitable gap between models and reality - some aspects of reality are not represented at all, and what is represented may contain errors. As models are aggregated from multiple sources, the decision maker is further insulated from even an awareness of model weaknesses. To realize the full power of computer simulations to support decision making, decision support systems should support the planner in exploring the robustness of COAs in the face of potential weaknesses in simulation models. This paper demonstrates a method of exploring the robustness of a COA with respect to specific model assumptions about whose accuracy the decision maker might have concerns. The domain is that of peacekeeping in a country where three different demographic groups co-exist in tension. An external peacekeeping force strives to achieve stability, an improved economy, and a higher degree of democracy in the country. A proposed COA for such a force is simulated multiple times while varying the assumptions. A visual data analysis tool is used to explore COA robustness. The aim is to help the decision maker choose a COA that is likely to be successful even in the face of potential errors in the assumptions in the models.

  9. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    PubMed

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' in the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBVs and estimated breeding values (EBVs) from different countries) were combined into an overall index while altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs. PMID:23031511

  10. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.

  11. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit of the model set's performance that is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
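
    A minimal sketch of the perturbation idea under simplifying assumptions: posterior model weights are recomputed from a Gaussian likelihood (with equal prior model probabilities) for many noisy realizations of the data, and their variance is reported as the "weighting variance". The model callables, noise level `sigma` and data arrays are hypothetical stand-ins.

      import numpy as np

      def weighting_variance(models, x, y_obs, sigma, n_mc=1000, seed=0):
          """models: list of callables m(x) -> predictions; returns (mean, var) of weights."""
          rng = np.random.default_rng(seed)
          weights = np.empty((n_mc, len(models)))
          for k in range(n_mc):
              y_pert = y_obs + rng.normal(0.0, sigma, size=y_obs.shape)
              # Gaussian log-likelihood of each model given the perturbed data
              ll = np.array([-0.5 * np.sum((y_pert - m(x)) ** 2) / sigma ** 2
                             for m in models])
              w = np.exp(ll - ll.max())        # assumes equal prior model probabilities
              weights[k] = w / w.sum()
          return weights.mean(axis=0), weights.var(axis=0)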

  12. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    PubMed Central

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    The Minimum Redundancy Maximum Relevance (MRMR) approach to supervised variable selection represents a successful methodology for dimensionality reduction, which is suitable for high-dimensional data observed in two or more different groups. Various available versions of the MRMR approach have been designed to search for variables with the largest relevance for a classification task while controlling for redundancy of the selected set of variables. However, usual relevance and redundancy criteria have the disadvantages of being too sensitive to the presence of outlying measurements and/or being inefficient. We propose a novel approach called Minimum Regularized Redundancy Maximum Robust Relevance (MRRMRR), suitable for noisy high-dimensional data observed in two groups. It combines principles of regularization and robust statistics. Particularly, redundancy is measured by a new regularized version of the coefficient of multiple correlation and relevance is measured by a highly robust correlation coefficient based on the least weighted squares regression with data-adaptive weights. We compare various dimensionality reduction methods on three real data sets. To investigate the influence of noise or outliers on the data, we perform the computations also for data artificially contaminated by severe noise of various forms. The experimental results confirm the robustness of the method with respect to outliers. PMID:26137474

  13. Accuracy and training population design for genomic selection in elite north american oats

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) is a method to estimate the breeding values of individuals by using markers throughout the genome. We evaluated the accuracies of GS using data from five traits on 446 oat lines genotyped with 1005 Diversity Array Technology (DArT) markers and two GS methods (RR-BLUP and Bayes...

  14. Accuracy and responses of genomic selection on key traits in apple breeding

    PubMed Central

    Muranty, Hélène; Troggio, Michela; Sadok, Inès Ben; Rifaï, Mehdi Al; Auwerkerken, Annemarie; Banchi, Elisa; Velasco, Riccardo; Stevanato, Piergiorgio; van de Weg, W Eric; Di Guardo, Mario; Kumar, Satish; Laurens, François; Bink, Marco C A M

    2015-01-01

    The application of genomic selection in fruit tree crops is expected to enhance breeding efficiency by increasing prediction accuracy, increasing selection intensity and decreasing generation interval. The objectives of this study were to assess the accuracy of prediction and selection response in commercial apple breeding programmes for key traits. The training population comprised 977 individuals derived from 20 pedigreed full-sib families. Historic phenotypic data were available on 10 traits related to productivity and fruit external appearance and genotypic data for 7829 SNPs obtained with an Illumina 20K SNP array. From these data, a genome-wide prediction model was built and subsequently used to calculate genomic breeding values of five application full-sib families. The application families had genotypes at 364 SNPs from a dedicated 512 SNP array, and these genotypic data were extended to the high-density level by imputation. These five families were phenotyped for 1 year and their phenotypes were compared to the predicted breeding values. Accuracy of genomic prediction across the 10 traits reached a maximum value of 0.5 and had a median value of 0.19. The accuracies were strongly affected by the phenotypic distribution and heritability of traits. In the largest family, significant selection response was observed for traits with high heritability and symmetric phenotypic distribution. Traits that showed non-significant response often had reduced and skewed phenotypic variation or low heritability. Among the five application families the accuracies were uncorrelated to the degree of relatedness to the training population. The results underline the potential of genomic prediction to accelerate breeding progress in outbred fruit tree crops that still need to overcome long generation intervals and extensive phenotyping costs. PMID:26744627

  15. Decoding of attentional selection in a cocktail party environment from single-trial EEG is robust to task.

    PubMed

    Lauteslager, Timo; O'Sullivan, James A; Reilly, Richard B; Lalor, Edmund C

    2014-01-01

    Recently it has been shown to be possible to ascertain the target of a subject's attention in a cocktail party environment from single-trial (~60 s) electroencephalography (EEG) data. Specifically, this was shown in the context of a dichotic listening paradigm where subjects were cued to attend to a story in one ear while ignoring a different story in the other and were required to answer questions on both stories. This paradigm resulted in a high decoding accuracy that correlated with task performance across subjects. Here, we extend this finding by showing that the ability to accurately decode attentional selection in a dichotic speech paradigm is robust to the particular attention task at hand. Subjects attended to one of two dichotically presented stories under four task conditions. These conditions required subjects to 1) answer questions on the content of both stories, 2) detect irregular frequency fluctuations in the voice of the attended speaker, 3) answer questions on both stories and detect frequency fluctuations in the attended story, and 4) detect target words in the attended story. All four tasks led to high decoding accuracy (~89%). These results offer new possibilities for creating user-friendly brain-computer interfaces (BCIs). PMID:25570209

  16. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    PubMed Central

    Mortazavi, Atiyeh; Moattar, Mohammad Hossein

    2016-01-01

    High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches. PMID:27127506

  17. Multi-generational imputation of single nucleotide polymorphism marker genotypes and accuracy of genomic selection.

    PubMed

    Toghiani, S; Aggrey, S E; Rekaya, R

    2016-07-01

    Availability of high-density single nucleotide polymorphism (SNP) genotyping platforms provided unprecedented opportunities to enhance breeding programmes in livestock, poultry and plant species, and to better understand the genetic basis of complex traits. Using this genomic information, genomic breeding values (GEBVs) can be estimated that are more accurate than conventional breeding values. The superiority of genomic selection is possible only when high-density SNP panels are used to track genes and QTLs affecting the trait. Unfortunately, even with the continuous decrease in genotyping costs, only a small fraction of the population has been genotyped with these high-density panels. It is often the case that a larger portion of the population is genotyped with low-density, low-cost SNP panels and then imputed to a higher density. Accuracy of SNP genotype imputation tends to be high when minimum requirements are met. Nevertheless, a certain rate of genotype imputation errors is unavoidable. Thus, it is reasonable to assume that the accuracy of GEBVs will be affected by imputation errors, especially by their cumulative effects over time. To evaluate the impact of multi-generational selection on the accuracy of SNP genotype imputation and the reliability of the resulting GEBVs, a simulation was carried out under varying updating of the reference population, distance between the reference and testing sets, and the approach used for the estimation of GEBVs. Using fixed reference populations, imputation accuracy decayed by about 0.5% per generation; after 25 generations, the accuracy was only 7% lower than in the first generation. When the reference population was updated by either 1% or 5% of the top animals in the previous generations, the decay of imputation accuracy was substantially reduced. These results indicate that low-density panels are useful, especially when the generational interval between reference and testing population is small. As the generational interval

  18. Traditional and robust vector selection methods for use with similarity based models

    SciTech Connect

    Hines, J. W.; Garvey, D. R.

    2006-07-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
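
    For concreteness, a minimal sketch of the simplest of the listed methods, min-max vector selection: every training vector that holds the minimum or maximum of at least one variable is kept as a prototype, so the selected set bounds the observed operating space. The matrix layout is an assumption.

      import numpy as np

      def min_max_selection(X):
          """X: (n_samples, n_vars) training matrix; returns sorted prototype row indices."""
          idx = set()
          for j in range(X.shape[1]):
              idx.add(int(np.argmin(X[:, j])))   # vector carrying the variable's minimum
              idx.add(int(np.argmax(X[:, j])))   # vector carrying the variable's maximum
          return sorted(idx)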

  19. A robust sensor-selection method for P300 brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Cecotti, H.; Rivet, B.; Congedo, M.; Jutten, C.; Bertrand, O.; Maby, E.; Mattout, J.

    2011-02-01

    A brain-computer interface (BCI) is a specific type of human-computer interface that enables direct communication between human and computer through decoding of brain activity. As such, event-related potentials like the P300 can be obtained with an oddball paradigm whose targets are selected by the user. This paper deals with methods to reduce the needed set of EEG sensors in the P300 speller application. A reduced number of sensors yields more comfort for the user, decreases installation time, may substantially reduce the financial cost of the BCI setup and may reduce the power consumption of wireless EEG caps. Our new approach to selecting relevant sensors is based on backward elimination using a cost function based on the signal to signal-plus-noise ratio, after some spatial filtering. We show that this cost function selects sensor subsets that provide better speller recognition rates during the test sessions than subsets selected on the basis of classification accuracy. We validate our selection strategy on data from 20 healthy subjects.
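
    A minimal sketch of the backward-elimination loop described above, with the signal to signal-plus-noise criterion abstracted into a hypothetical callable `score`; any spatial filtering is assumed to happen inside it.

      def backward_elimination(sensors, score, n_keep):
          """sensors: list of channel ids; score: callable rating a sensor subset."""
          selected = list(sensors)
          while len(selected) > n_keep:
              # drop the sensor whose removal leaves the best-scoring subset
              candidates = [(score([s for s in selected if s != cand]), cand)
                            for cand in selected]
              _, worst = max(candidates)
              selected.remove(worst)
          return selected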

  20. Robust hyperpolarized (13)C metabolic imaging with selective non-excitation of pyruvate (SNEP).

    PubMed

    Chen, Way Cherng; Teo, Xing Qi; Lee, Man Ying; Radda, George K; Lee, Philip

    2015-08-01

    In vivo metabolic imaging using hyperpolarized [1-(13)C]pyruvate provides localized biochemical information and is particularly useful in detecting early disease changes, as well as monitoring disease progression and treatment response. However, a major limitation of hyperpolarized magnetization is its unrecoverable decay, due not only to T1 relaxation but also to radio-frequency (RF) excitation. RF excitation schemes used in metabolic imaging must therefore be able to utilize available hyperpolarized magnetization efficiently and robustly for the optimal detection of substrate and metabolite activities. In this work, a novel RF excitation scheme called selective non-excitation of pyruvate (SNEP) is presented. This excitation scheme involves the use of a spectral selective RF pulse to specifically exclude the excitation of [1-(13)C]pyruvate, while uniformly exciting the key metabolites of interest (namely [1-(13)C]lactate and [1-(13)C]alanine) and [1-(13)C]pyruvate-hydrate. By eliminating the loss of hyperpolarized [1-(13)C]pyruvate magnetization due to RF excitation, the signal from downstream metabolite pools is increased together with enhanced dynamic range. Simulation results, together with phantom measurements and in vivo experiments, demonstrated the improvement in signal-to-noise ratio (SNR) and the extension of the lifetime of the [1-(13)C]lactate and [1-(13)C]alanine pools when compared with conventional non-spectral selective (NS) excitation. SNEP has also been shown to perform comparably well with multi-band (MB) excitation, yet SNEP possesses distinct advantages, including ease of implementation, less stringent demands on gradient performance, increased robustness to frequency drifts and B0 inhomogeneity as well as easier quantification involving the use of [1-(13)C]pyruvate-hydrate as a proxy for the actual [1-(13)C] pyruvate signal. SNEP is therefore a promising alternative for robust hyperpolarized [1-(13)C]pyruvate metabolic imaging with high

  1. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models of population structure and propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  2. Video Stabilization Based on Feature Trajectory Augmentation and Selection and Robust Mesh Grid Warping.

    PubMed

    Koh, Yeong Jun; Lee, Chulwoo; Kim, Chang-Su

    2015-12-01

    We propose a video stabilization algorithm, which extracts a guaranteed number of reliable feature trajectories for robust mesh grid warping. We first estimate feature trajectories through a video sequence and transform the feature positions into rolling-free smoothed positions. When the number of the estimated trajectories is insufficient, we generate virtual trajectories by augmenting incomplete trajectories using a low-rank matrix completion scheme. Next, we detect feature points on a large moving object and exclude them so as to stabilize camera movements, rather than object movements. With the selected feature points, we set a mesh grid on each frame and warp each grid cell by moving the original feature positions to the smoothed ones. For robust warping, we formulate a cost function based on the reliability weights of each feature point and each grid cell. The cost function consists of a data term, a structure-preserving term, and a regularization term. By minimizing the cost function, we determine the robust mesh grid warping and achieve the stabilization. Experimental results demonstrate that the proposed algorithm reconstructs videos more stably than the conventional algorithms. PMID:26394425

  3. Striatal indirect pathway contributes to selection accuracy of learned motor actions.

    PubMed

    Nishizawa, Kayo; Fukabori, Ryoji; Okada, Kana; Kai, Nobuyuki; Uchigashima, Motokazu; Watanabe, Masahiko; Shiota, Akira; Ueda, Masatsugu; Tsutsui, Yuji; Kobayashi, Kazuto

    2012-09-26

    The dorsal striatum, which contains the dorsolateral striatum (DLS) and dorsomedial striatum (DMS), integrates the acquisition and implementation of instrumental learning in cooperation with the nucleus accumbens (NAc). The dorsal striatum regulates the basal ganglia circuitry through direct and indirect pathways. The mechanism by which these pathways mediate the learning processes of instrumental actions remains unclear. We investigated how the striatal indirect (striatopallidal) pathway arising from the DLS contributes to the performance of conditional discrimination. Immunotoxin targeting of the striatal neuronal type containing dopamine D(2) receptor in the DLS of transgenic rats resulted in selective, efficient elimination of the striatopallidal pathway. This elimination impaired the accuracy of response selection in a two-choice reaction time task dependent on different auditory stimuli. The impaired response selection was elicited early in the test sessions and was gradually restored as the sessions continued. The restoration from the deficits in auditory discrimination was prevented by excitotoxic lesion of the NAc but not by that of the DMS. In addition, lesion of the DLS mimicked the behavioral consequence of the striatopallidal removal at the early stage of test sessions of discriminative performance. Our results demonstrate that the DLS-derived striatopallidal pathway plays an essential role in the execution of conditional discrimination, showing its contribution to the control of selection accuracy of learned motor responses. The results also suggest the presence of a mechanism that compensates for the learning deficits during the repetitive sessions, at least partly, demanding accumbal function. PMID:23015433

  4. Robust and Ultrasensitive Polymer Membrane-Based Carbonate-Selective Electrodes.

    PubMed

    Mendecki, Lukasz; Fayose, Tolulope; Stockmal, Kelli A; Wei, Jia; Granados-Focil, Sergio; McGraw, Christina M; Radu, Aleksandar

    2015-08-01

    Quantitative analysis of the carbonate species within clinical and environmental samples is highly critical to the advancement of accurate environmental monitoring, disease screening, and personalized medicine. Herein we report the first example of carbonate detection using ultrasensitive ion selective electrodes (ISEs). The low detection limit (LDL) of these electrodes was at least 4 orders of magnitude lower than the best currently existing carbonate sensors. This was achieved by a simple alteration of the sensor's conditioning protocol. This resulted in the reduction of ion fluxes across the membrane interface consequently lowering the LDL to picomolar levels. The proposed ISEs exhibited near-Nernstian potentiometric responses to carbonate ions with a detection limit of 80 pmol L(-1) (5 ppt) and was utilized for direct determination of carbonate in seawater. Moreover, the new methodology has produced electrodes with excellent reproducibility, robustness, and durability. It is anticipated that this approach may form the basis for the development of highly sensitive and robust ion selective electrodes capable of in situ measurements. PMID:26148196

  5. Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions

    PubMed Central

    Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander

    2014-01-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945

  6. Effects Of Touch Key Size And Separation On Menu-Selection Accuracy

    NASA Astrophysics Data System (ADS)

    Beaton, Robert J.; Welman, Novia

    1985-05-01

    Two experiments were performed to assess the effects of touch key design parameters on menu-selection error rates. The first experiment determined that the optimal design consisted of touch keys 10.16 mm high, either 10.16 or 20.32 mm wide, and separated vertically by less than 10.16 mm. The second experiment extended the investigation by including the effects of viewing angle. These latter results replicated the first experiment, but also favored the 20.32-mm-wide key for off-axis viewing conditions. In both experiments, the horizontal separation between touch keys did not affect menu-selection accuracy; however, subjective preferences favored a 20.32-mm horizontal separation.

  7. Accuracy of genomic selection in barley breeding programs: a simulation study based on the real SNP data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this study was to compare the accuracy of genomic selection (i.e., selection based on genome-wide markers) to phenotypic selection through simulations based on real barley SNPs data (1325 SNPs x 863 breeding lines). We simulated 100 QTL at randomly selected SNPs, which were dropped from t...

  8. Relatedness severely impacts accuracy of marker-assisted selection for disease resistance in hybrid wheat

    PubMed Central

    Gowda, M; Zhao, Y; Würschum, T; Longin, C FH; Miedaner, T; Ebmeyer, E; Schachschneider, R; Kazman, E; Schacht, J; Martinant, J-P; Mette, M F; Reif, J C

    2014-01-01

    The accuracy of genomic selection depends on the relatedness between the members of the set in which marker effects are estimated based on evaluation data and the types for which performance is predicted. Here, we investigate the impact of relatedness on the performance of marker-assisted selection for fungal disease resistance in hybrid wheat. A large and diverse mapping population of 1739 elite European winter wheat inbred lines and hybrids was evaluated for powdery mildew, leaf rust and stripe rust resistance in multi-location field trials and fingerprinted with 9k and 90k SNP arrays. Comparison of the accuracies of prediction achieved with data sets from the two marker arrays revealed a crucial role for a sufficiently high marker density in genome-wide association mapping. Cross-validation studies using test sets with varying degrees of relationship to the corresponding estimation sets revealed that close relatedness leads to a substantial increase in the proportion of total genotypic variance explained by the identified QTL and consequently to an overoptimistic judgment of the precision of marker-assisted selection. PMID:24346498

  9. Comparative accuracy of the Albedo, transmission and absorption for selected radiative transfer approximations

    NASA Technical Reports Server (NTRS)

    King, M. D.; HARSHVARDHAN

    1986-01-01

    Illustrations of both the relative and absolute accuracy of eight different radiative transfer approximations as a function of optical thickness, solar zenith angle and single scattering albedo are given. Computational results for the plane albedo, total transmission and fractional absorption were obtained for plane-parallel atmospheres composed of cloud particles. These computations, which were obtained using the doubling method, are compared with corresponding results obtained using selected radiative transfer approximations. Comparisons were made between asymptotic theory for thick layers and the following widely used two-stream approximations: Coakley-Chylek's models 1 and 2, Meador-Weaver, Eddington, delta-Eddington, PIFM and delta-discrete ordinates.

  10. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applied to natural postures, this algorithm scored a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of purposes such as searching for criminal suspects by use of street and airport video cameras, registration of babies at hospitals or handling of an immeasurable number of images in a database.

  11. Detecting recent positive selection with high accuracy and reliability by conditional coalescent tree.

    PubMed

    Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Jin, Li; He, Yungang

    2014-11-01

    Studies of natural selection, followed by functional validation, are shedding light on the genetic mechanisms underlying human evolution and adaptation. Classic methods for detecting selection, such as the integrated haplotype score (iHS) and Fay and Wu's H statistic, are useful for searching candidate genes under positive selection. These methods, however, have limited capability to localize causal variants in selection target regions. In this study, we developed a novel method based on the conditional coalescent tree to detect recent positive selection by counting unbalanced mutations on coalescent gene genealogies. Extensive simulation studies revealed that our method is more robust than many other approaches against biases due to various demographic effects, including population bottleneck, expansion, or stratification, while not sacrificing its power. Furthermore, our method demonstrated its superiority in localizing causal variants from massive linked genetic variants. The rate of successful localization was about 20-40% higher than that of other state-of-the-art methods on simulated data sets. On empirical data, validated functional causal variants of four well-known positively selected genes, ADH1B, MCM6, APOL1, and HBB, were all successfully localized by our method. Finally, the computational efficiency of this new method was much higher than that of iHS implementations, that is, 24-66 times faster than the REHH package, and more than 10,000 times faster than the original iHS implementation. These magnitudes make our method suitable for application to large sequencing data sets. Software can be downloaded from https://github.com/wavefancy/scct. PMID:25135945

  12. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of successive dynasties and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, which is the same as the traditional one. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks. PMID:26964185

  13. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Bell, James F., III; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-04-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~ 3 wt.%. The statistical significance of these improvements was ~ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically fabricated
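
    A minimal sketch of method (2) above using scikit-learn: cluster all spectra with k-means, fit one PLS model per cluster on the training spectra it contains, and predict each test spectrum with its cluster's model. It assumes every cluster with test spectra also receives enough training spectra to fit the model; array names are hypothetical.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression

      def clustered_pls(train_X, train_Y, test_X, k=5, n_components=10):
          km = KMeans(n_clusters=k, n_init=10).fit(np.vstack([train_X, test_X]))
          train_lab = km.labels_[:len(train_X)]
          test_lab = km.labels_[len(train_X):]
          preds = np.zeros((len(test_X), train_Y.shape[1]))
          for c in range(k):
              if not np.any(test_lab == c):
                  continue                      # no test spectra fell in this cluster
              pls = PLSRegression(n_components=n_components)
              pls.fit(train_X[train_lab == c], train_Y[train_lab == c])
              preds[test_lab == c] = pls.predict(test_X[test_lab == c])
          return preds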

  14. A robust rerank approach for feature selection and its application to pooling-based GWA studies.

    PubMed

    Liu, Jia-Rou; Kuo, Po-Hsiu; Hung, Hung

    2013-01-01

    Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect the difference between two groups, conventional methods would fail to apply due to the instability in estimating variances in the t-test and a high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also not be satisfactory, since its performance is sensitive to the tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of "rank-over-variable." Techniques of "random subset" and "rerank" are then iteratively applied to rank features, and the leading features are selected for further studies. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, which is an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method. PMID:23653667
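
    A minimal sketch of the random-subset/rerank loop under stated assumptions: features are scored on each random subset with a rank-based two-group statistic (here a Mann-Whitney U, as a stand-in for the paper's rank-over-variable statistic, and assuming both groups appear in every subset), their ranks are accumulated, and the features with the best average rank are selected.

      import numpy as np
      from scipy.stats import mannwhitneyu, rankdata

      def rerank_select(X, y, n_iter=100, subset_frac=0.8, n_select=50, seed=0):
          """X: (n, p) data; y: binary labels; returns indices of selected features."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          rank_sum = np.zeros(p)
          for _ in range(n_iter):
              idx = rng.choice(n, size=int(subset_frac * n), replace=False)
              Xs, ys = X[idx], y[idx]
              u = np.array([mannwhitneyu(Xs[ys == 0, j], Xs[ys == 1, j]).statistic
                            for j in range(p)])
              null_mid = (ys == 0).sum() * (ys == 1).sum() / 2.0
              rank_sum += rankdata(-np.abs(u - null_mid))  # rank 1 = most extreme feature
          return np.argsort(rank_sum)[:n_select]           # best average rank first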

  15. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection.

    PubMed

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-06-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; and 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with an F1 score of 0.96. PMID:26812706

  16. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.).

    PubMed

    Resende, M F R; Muñoz, P; Resende, M D V; Garrick, D J; Fernando, R L; Davis, J M; Jokela, E J; Martin, T A; Peter, G F; Kirst, M

    2012-04-01

    Genomic selection can increase genetic gain per generation through early selection. Genomic selection is expected to be particularly valuable for traits that are costly to phenotype and expressed late in the life cycle of long-lived species. Alternative approaches to genomic selection prediction models may perform differently for traits with distinct genetic properties. Here we present the performance of four original genomic selection methods that differ with respect to assumptions regarding the distribution of marker effects: (i) ridge regression-best linear unbiased prediction (RR-BLUP), (ii) Bayes A, (iii) Bayes Cπ, and (iv) Bayesian LASSO. In addition, a modified RR-BLUP (RR-BLUP B) that utilizes a selected subset of markers was evaluated. The accuracy of these methods was compared across 17 traits with distinct heritabilities and genetic architectures, including growth, development, and disease-resistance properties, measured in a Pinus taeda (loblolly pine) training population of 951 individuals genotyped with 4853 SNPs. The predictive ability of the methods was evaluated using a 10-fold cross-validation approach, and differed only marginally for most method/trait combinations. Interestingly, for fusiform rust disease-resistance traits, Bayes Cπ, Bayes A, and RR-BLUP B had higher predictive ability than RR-BLUP and Bayesian LASSO. Fusiform rust is controlled by few genes of large effect. A limitation of RR-BLUP is the assumption of equal contribution of all markers to the observed variation. However, RR-BLUP B performed equally well as the Bayesian approaches. The genotypic and phenotypic data used in this study are publicly available for comparative analysis of genomic selection prediction models. PMID:22271763
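
    A minimal sketch of the RR-BLUP idea: all markers receive equal shrinkage in a ridge regression of phenotypes on centered genotype codes. The shrinkage parameter `lam` would normally come from variance-component estimates; here it is a hypothetical constant.

      import numpy as np

      def rr_blup(Z_train, y_train, Z_new, lam=1.0):
          """Z_*: (n, m) centered marker matrices; returns GEBVs for the new individuals."""
          n, m = Z_train.shape
          mu = y_train.mean()
          # solve (Z'Z + lam * I) u = Z'(y - mu) for the marker effects u
          u = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(m),
                              Z_train.T @ (y_train - mu))
          return mu + Z_new @ u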

  17. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition; second, it nominates the most useful blocks for embedding based on their entropy and average intensity; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests were performed to select a proper block size, which depends on the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
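
    An illustrative sketch (not the authors' exact algorithm) of the block-selection idea: score candidate blocks by entropy, nominate the highest-scoring ones, and embed one secret bit per block in a higher bit-plane (here bit 3 rather than the LSB, bit 0). The paper's full procedure also weighs block averages and lighting conditions.

      # Entropy-based block nomination and higher-bit-plane embedding (sketch)
      import numpy as np

      def block_entropy(block):
          hist, _ = np.histogram(block, bins=256, range=(0, 256))
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(2)
      cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      bits = [1, 0, 1, 1]                                # secret message bits
      plane = 3                                          # embedding bit-plane (not the LSB)

      blocks = [(r, c) for r in range(0, 64, 8) for c in range(0, 64, 8)]
      scored = sorted(blocks, key=lambda rc: -block_entropy(cover[rc[0]:rc[0]+8, rc[1]:rc[1]+8]))
      mask = np.uint8(0xFF ^ (1 << plane))
      stego = cover.copy()
      for bit, (r, c) in zip(bits, scored):              # one bit per nominated block
          stego[r, c] = (stego[r, c] & mask) | np.uint8(bit << plane)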

  18. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
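
    A small sketch of the core residual analysis under the stated assumptions: fit z = ax + by + c to noisy planar points by least squares, then test the residuals for normality with a Kolmogorov-Smirnov test. All values are simulated stand-ins.

      # Least-squares plane fit and KS normality test of the residuals
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      x, y = rng.uniform(0, 100, 1000), rng.uniform(0, 100, 1000)
      z = 0.5 * x - 0.2 * y + 3.0 + rng.normal(0, 0.1, 1000)   # noisy planar points

      A = np.column_stack([x, y, np.ones_like(x)])
      coeff, *_ = np.linalg.lstsq(A, z, rcond=None)             # plane parameters a, b, c
      residuals = z - A @ coeff

      # KS test against a normal with the residuals' own mean and spread
      ks = stats.kstest(residuals, 'norm', args=(residuals.mean(), residuals.std()))
      print("plane:", coeff, "KS p-value:", ks.pvalue)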

  19. Effects of machining accuracy on frequency response properties of thick-screen frequency selective surface

    NASA Astrophysics Data System (ADS)

    Fang, Chunyi; Gao, Jinsong; Xin, Chen

    2012-10-01

    Electromagnetic theory shows that a thick-screen frequency selective surface (FSS) has many advantages in its frequency response characteristics. In addition, it can be used to make a stealth radome. Therefore, we investigate in detail how machining accuracy affects the frequency response properties of the FSS in the gigahertz range. Specifically, by applying the least squares method to machining data, the effects of different machining precisions in the samples can be calculated, thus obtaining frequency response curves, which were verified by near-field testing in a microwave anechoic chamber. The results show that decreasing roughness and flatness variation leads to an increase in bandwidth, and that an increase in spacing error leads to the center frequency drifting lower. Finally, an increase in aperture error leads to an increase in bandwidth. Therefore, the conclusion is that machining accuracy should be controlled and that a spacing error of less than 0.05 mm is required in order to avoid unwanted center frequency drift and a transmittance decrease.

  20. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
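
    The check loss is the tilted absolute loss of quantile regression. A hedged sketch of check-loss fitting with an L1 shrinkage penalty, standing in for the paper's penalized procedure (assumes scikit-learn >= 1.0 for QuantileRegressor):

      # Median (check-loss) regression with L1 shrinkage for variable selection
      import numpy as np
      from sklearn.linear_model import QuantileRegressor

      rng = np.random.default_rng(4)
      n, p = 200, 50
      X = rng.normal(size=(n, p))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_t(df=3, size=n)  # heavy-tailed noise

      # quantile=0.5 gives median regression, robust to outliers; alpha is the L1 weight
      model = QuantileRegressor(quantile=0.5, alpha=0.1, solver="highs").fit(X, y)
      selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
      print("selected variables:", selected)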

  1. A Robust Spectrum Sensing Method Based on Maximum Cyclic Autocorrelation Selection for Dynamic Spectrum Access

    NASA Astrophysics Data System (ADS)

    Muraoka, Kazushi; Ariyoshi, Masayuki; Fujii, Takeo

    Spectrum sensing is an important function for dynamic spectrum access (DSA) type cognitive radio systems to detect opportunities for sharing the spectrum with a primary system. The key requirements for spectrum sensing are stability in controlling the probability of false alarm as well as detection performance for the primary signals. However, false alarms can be triggered by noise uncertainty at the secondary devices or unknown interference signals from other secondary systems in realistic radio environments. This paper proposes a spectrum sensing method that is robust against such uncertainties; it is a kind of cyclostationary feature detection (CFD) approach. Our proposed method, referred to as maximum cyclic autocorrelation selection (MCAS), compares the peak and non-peak values of the cyclic autocorrelation function (CAF) to detect primary signals, where the non-peak value is the CAF value calculated at cyclic frequencies between the peaks. In MCAS, the desired probability of false alarm can be obtained by setting the number of non-peak values. In addition, multiple peak values are combined in MCAS to obtain a noise reduction effect and coherent combining gain. Through computer simulations, we show that MCAS can control the probability of false alarm under conditions of noise uncertainty and interference. Furthermore, our method achieves better performance with much less computational complexity in comparison to conventional CFD methods.
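
    A hedged sketch of the CAF comparison at the heart of MCAS: estimate the cyclic autocorrelation at the symbol-rate cyclic frequency (a peak) and at a cyclic frequency between peaks, then compare. The BPSK-like waveform and all parameters are stand-ins, not the paper's simulation setup.

      # Cyclic autocorrelation function (CAF) estimate: peak vs. non-peak comparison
      import numpy as np

      def caf(x, alpha, tau, fs):
          t = np.arange(len(x) - tau) / fs
          return np.mean(x[:-tau] * np.conj(x[tau:]) * np.exp(-2j * np.pi * alpha * t))

      rng = np.random.default_rng(5)
      fs, sps = 1e4, 16                                  # sample rate, samples per symbol
      symbols = np.repeat(np.sign(rng.normal(size=256)), sps)   # BPSK-like baseband signal
      x = symbols + 0.5 * rng.normal(size=symbols.size)         # received signal with noise

      baud = fs / sps                                    # cyclic frequency of the symbol rate
      peak = abs(caf(x, baud, 1, fs))
      non_peak = abs(caf(x, 1.37 * baud, 1, fs))         # between-peak cyclic frequency
      print("peak / non-peak CAF ratio:", peak / non_peak)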

  2. A robust binary supramolecular organic framework (SOF) with high CO2 adsorption and selectivity.

    PubMed

    Lü, Jian; Perez-Krap, Cristina; Suyetin, Mikhail; Alsmail, Nada H; Yan, Yong; Yang, Sihai; Lewis, William; Bichoutskaia, Elena; Tang, Chiu C; Blake, Alexander J; Cao, Rong; Schröder, Martin

    2014-09-17

    A robust binary hydrogen-bonded supramolecular organic framework (SOF-7) has been synthesized by solvothermal reaction of 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)dihydropyridyl)benzene (1) and 5,5'-bis-(azanediyl)-oxalyl-diisophthalic acid (2). Single crystal X-ray diffraction analysis shows that SOF-7 comprises 2 and 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)pyridyl)benzene (3); the latter formed in situ from the oxidative dehydrogenation of 1. SOF-7 shows a three-dimensional four-fold interpenetrated structure with complementary O-H···N hydrogen bonds to form channels that are decorated with cyano and amide groups. SOF-7 exhibits excellent thermal stability and solvent and moisture durability as well as permanent porosity. The activated desolvated material SOF-7a shows high CO2 adsorption capacity and selectivity compared with other porous organic materials assembled solely through hydrogen bonding. PMID:25184689

  3. A Robust Binary Supramolecular Organic Framework (SOF) with High CO2 Adsorption and Selectivity

    PubMed Central

    2014-01-01

    A robust binary hydrogen-bonded supramolecular organic framework (SOF-7) has been synthesized by solvothermal reaction of 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)dihydropyridyl)benzene (1) and 5,5′-bis-(azanediyl)-oxalyl-diisophthalic acid (2). Single crystal X-ray diffraction analysis shows that SOF-7 comprises 2 and 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)pyridyl)benzene (3); the latter formed in situ from the oxidative dehydrogenation of 1. SOF-7 shows a three-dimensional four-fold interpenetrated structure with complementary O–H···N hydrogen bonds to form channels that are decorated with cyano and amide groups. SOF-7 exhibits excellent thermal stability and solvent and moisture durability as well as permanent porosity. The activated desolvated material SOF-7a shows high CO2 adsorption capacity and selectivity compared with other porous organic materials assembled solely through hydrogen bonding. PMID:25184689

  4. The Effects of Various Item Selection Methods on the Classification Accuracy and Classification Consistency of Criterion-Referenced Instruments.

    ERIC Educational Resources Information Center

    Smith, Douglas U.

    This study examined the effects of certain item selection methods on the classification accuracy and classification consistency of criterion-referenced instruments. Three item response data sets, representing varying situations of instructional effectiveness, were simulated. Five methods of item selection were then applied to each data set for the…

  5. Robust selection of cancer survival signatures from high-throughput genomic data using two-fold subsampling.

    PubMed

    Lee, Sangkyun; Rahnenführer, Jörg; Lang, Michel; De Preter, Katleen; Mestdagh, Pieter; Koster, Jan; Versteeg, Rogier; Stallings, Raymond L; Varesio, Luigi; Asgharzadeh, Shahab; Schulte, Johannes H; Fielitz, Kathrin; Schwermer, Melanie; Morik, Katharina; Schramm, Alexander

    2014-01-01

    Identifying relevant signatures for clinical patient outcome is a fundamental task in high-throughput studies. Signatures, composed of features such as mRNAs, miRNAs, SNPs or other molecular variables, are often non-overlapping, even though they have been identified from similar experiments considering samples with the same type of disease. The lack of a consensus is mostly due to the fact that sample sizes are far smaller than the numbers of candidate features to be considered, and therefore signature selection suffers from large variation. We propose a robust signature selection method that enhances the selection stability of penalized regression algorithms for predicting survival risk. Our method is based on an aggregation of multiple, possibly unstable, signatures obtained with the preconditioned lasso algorithm applied to random (internal) subsamples of a given cohort, where the aggregated signature is shrunken by a simple thresholding strategy. The resulting method, RS-PL, is conceptually simple and easy to apply, relying on parameters automatically tuned by cross-validation. Robust signature selection using RS-PL operates within an (external) subsampling framework to estimate the selection probabilities of features in multiple trials of RS-PL. These probabilities are used for identifying reliable features to be included in a signature. Our method was evaluated on microarray data sets from neuroblastoma, lung adenocarcinoma, and breast cancer patients, extracting robust and relevant signatures for predicting survival risk. Signatures obtained by our method achieved high prediction performance and robustness, consistently over the three data sets. Genes with high selection probability in our robust signatures have been reported as cancer-relevant. The ordering of predictor coefficients associated with signatures was well-preserved across multiple trials of RS-PL, demonstrating the capability of our method for identifying a transferable consensus signature.
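
    A condensed sketch of the subsample-and-aggregate idea (using plain lasso rather than RS-PL's preconditioned lasso): count how often each feature is selected over random subsamples and keep the features whose selection probability clears a threshold. Data, penalty and threshold are illustrative.

      # Stability-style signature selection by subsampling a lasso
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(7)
      n, p = 120, 1000
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ np.ones(5) + rng.normal(size=n)     # 5 truly relevant features

      counts, trials = np.zeros(p), 100
      for _ in range(trials):
          idx = rng.choice(n, size=n // 2, replace=False)   # random internal subsample
          counts += Lasso(alpha=0.15).fit(X[idx], y[idx]).coef_ != 0

      selection_prob = counts / trials
      signature = np.flatnonzero(selection_prob > 0.8)   # thresholded, stable features
      print("robust signature:", signature)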

  6. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    PubMed

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-04-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had levels of prediction accuracy that were either similar to or greater than those of the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are
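
    A minimal sketch of the explicit-regression formulation of M×E: the design matrix concatenates marker main effects with one environment-specific block per environment, so a single ridge fit yields both stable and environment-specific marker effects. Sizes and phenotypes are placeholders, not the CIMMYT data.

      # Marker main effects plus environment-specific marker effects via ridge
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(8)
      n_env, n_lines, p = 3, 100, 300
      X = rng.integers(0, 3, size=(n_lines, p)).astype(float)   # marker matrix

      X_main = np.tile(X, (n_env, 1))                    # one copy of the lines per environment
      X_int = np.zeros((n_env * n_lines, n_env * p))
      for e in range(n_env):
          X_int[e*n_lines:(e+1)*n_lines, e*p:(e+1)*p] = X       # environment-specific block
      Z = np.hstack([X_main, X_int])                     # [main | interaction] design

      y = rng.normal(size=n_env * n_lines)               # placeholder phenotypes
      fit = Ridge(alpha=10.0).fit(Z, y)
      main = fit.coef_[:p]                               # effects stable across environments
      interactions = fit.coef_[p:].reshape(n_env, p)     # environment-specific deviations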

  7. Increased Prediction Accuracy in Wheat Breeding Trials Using a Marker × Environment Interaction Genomic Selection Model

    PubMed Central

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P.; Autrique, Enrique; de los Campos, Gustavo

    2015-01-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT’s research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had levels of prediction accuracy that were either similar to or greater than those of the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis

  8. Resource Allocation for Maximizing Prediction Accuracy and Genetic Gain of Genomic Selection in Plant Breeding: A Simulation Experiment

    PubMed Central

    Lorenz, Aaron J.

    2013-01-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation decisions.
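
    For orientation, one widely used closed-form expression for expected genomic prediction accuracy as a function of heritability and training size is the Daetwyler-type formula below; it is assumed here to be representative of the "previously derived formulas" referenced above, not necessarily the exact ones evaluated in the paper:

      r = \sqrt{\frac{N h^2}{N h^2 + M_e}}

    where N is the training population size, h^2 the trait heritability, and M_e the effective number of independent chromosome segments.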

  9. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    PubMed

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation decisions.

  10. Accuracy of genomic selection for age at puberty in a multi-breed population of tropically adapted beef cattle.

    PubMed

    Farah, M M; Swan, A A; Fortes, M R S; Fonseca, R; Moore, S S; Kelly, M J

    2016-02-01

    Genomic selection is becoming a standard tool in livestock breeding programs, particularly for traits that are hard to measure. Accuracy of genomic selection can be improved by increasing the quantity and quality of data and potentially by improving analytical methods. Adding genotypes and phenotypes from additional breeds or crosses often improves the accuracy of genomic predictions but requires specific methodology. A model was developed to incorporate breed composition estimated from genotypes into genomic selection models. This method was applied to age at puberty data in female beef cattle (as estimated from age at first observation of a corpus luteum) from a mix of Brahman and Tropical Composite beef cattle. In this dataset, the new model incorporating breed composition did not increase the accuracy of genomic selection. However, the breeding values exhibited slightly less bias (as assessed by deviation of regression of phenotype on genomic breeding values from the expected value of 1). Adding additional Brahman animals to the Tropical Composite analysis increased the accuracy of genomic predictions and did not affect the accuracy of the Brahman predictions. PMID:26490440

  11. Robust fetal QRS detection from noninvasive abdominal electrocardiogram based on channel selection and simultaneous multichannel processing.

    PubMed

    Ghaffari, Ali; Mollakazemi, Mohammad Javad; Atyabi, Seyyed Abbas; Niknazar, Mohammad

    2015-12-01

    The purpose of this study is to provide a new method for detecting fetal QRS complexes from the non-invasive fetal electrocardiogram (fECG) signal. Unlike most current fECG processing methods, which are based on separation of the fECG from the maternal ECG (mECG), in this study fetal heart rate (FHR) can be extracted with high accuracy without separating the fECG from the mECG. Furthermore, in this new approach thoracic channels are not necessary. These two aspects reduce the required computational operations. Consequently, the proposed approach can be efficiently applied to different real-time healthcare and medical devices. In this work, a new method is presented for selecting the best channel, i.e. the one that carries the strongest fECG. Each channel is scored based on two criteria: noise distribution and good fetal heartbeat visibility. Another important aspect of this study is the simultaneous and combinatorial use of the available fECG channels via the priority given by their scores. A combination of geometric features and wavelet-based techniques was adopted to extract FHR. Based on fetal geometric features, fECG signals were divided into three categories, and different strategies were employed to analyze each category. The method was validated using three datasets: the Noninvasive fetal ECG database, DaISy and the PhysioNet/Computing in Cardiology Challenge 2013. Finally, the obtained results were compared with other studies. The adopted strategies, such as multi-resolution analysis, not separating the fECG and mECG, intelligent channel scoring and the simultaneous use of channels, are the factors behind the method's promising performance. PMID:26462679
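
    A hypothetical channel-scoring sketch: rank abdominal channels by a simple peakedness measure (kurtosis), a rough proxy for visible QRS complexes. The paper's actual criteria (noise distribution and fetal heartbeat visibility) are richer than this single statistic.

      # Score candidate fECG channels and order them by priority (illustrative)
      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(9)
      channels = rng.normal(size=(4, 5000))              # 4 abdominal leads (stand-ins)
      channels[2, ::500] += 8.0                          # channel 2 carries sharp beats

      scores = [kurtosis(ch) for ch in channels]
      priority = np.argsort(scores)[::-1]                # best-scoring channel first
      print("channel priority:", priority)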

  12. The Effects of Demography and Long-Term Selection on the Accuracy of Genomic Prediction with Sequence Data

    PubMed Central

    MacLeod, Iona M.; Hayes, Ben J.; Goddard, Michael E.

    2014-01-01

    The use of dense SNPs to predict the genetic value of an individual for a complex trait is often referred to as “genomic selection” in livestock and crops, but is also relevant to human genetics to predict, for example, complex genetic disease risk. The accuracy of prediction depends on the strength of linkage disequilibrium (LD) between SNPs and causal mutations. If sequence data were used instead of dense SNPs, accuracy should increase because causal mutations are present, but demographic history and long-term negative selection also influence accuracy. We therefore evaluated genomic prediction, using simulated sequence in two contrasting populations: one reducing from an ancestrally large effective population size (Ne) to a small one, with high LD common in domestic livestock, while the second had a large constant-sized Ne with low LD similar to that in some human or outbred plant populations. There were two scenarios in each population; causal variants were either neutral or under long-term negative selection. For large Ne, sequence data led to a 22% increase in accuracy relative to ∼600K SNP chip data with a Bayesian analysis and a more modest advantage with a BLUP analysis. This advantage increased when causal variants were influenced by negative selection, and accuracy persisted when 10 generations separated reference and validation populations. However, in the reducing Ne population, there was little advantage for sequence even with negative selection. This study demonstrates the joint influence of demography and selection on accuracy of prediction and improves our understanding of how best to exploit sequence for genomic prediction. PMID:25233989

  13. Selecting robust solutions from a trade-off surface through the evaluation of the distribution of parameter sets in objective space and parameter space

    NASA Astrophysics Data System (ADS)

    Dumedah, G.; Berg, A. A.; Wineberg, M.

    2009-12-01

    Hydrological models are increasingly being calibrated using multi-objective genetic algorithms (GAs). Multi-objective GAs facilitate the evaluation of several model evaluation objectives and the examination of massive combinations of parameter sets. Usually, the outcome is a set of several equally accurate parameter sets which make up a trade-off surface between the objective functions, often referred to as the Pareto set. The Pareto set describes a decision front in the sense that each solution has unique values in parameter space with competing accuracy in objective space. An automated framework for choosing a single solution from such a trade-off surface has not been thoroughly investigated in the model calibration literature. As a result, this presentation will demonstrate an automated selection of robust solutions from a trade-off surface using the distribution of solutions in both objective space and parameter space. The trade-off surface was generated using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to calibrate the Soil and Water Assessment Tool (SWAT) for streamflow simulation based on model bias and root mean square error. Our selection method generates solutions with unique properties, including a representative pathway in parameter space, a basin of attraction or center of mass in objective space, and proximity to the origin in objective space. Additionally, our framework determines a robust solution as a balanced compromise for the distribution of solutions in objective space and parameter space. That is, the robust solution emphasizes stability in model parameter values and in objective function values in a way that similarity in parameter space implies similarity in objective space.
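
    One of the selection rules mentioned above, proximity to the origin in objective space, reduces to a few lines: normalize each objective over the Pareto set and pick the member with the smallest norm. The objective values below are illustrative.

      # Compromise solution: nearest-to-origin member of a normalized Pareto set
      import numpy as np

      pareto = np.array([[0.10, 0.90],                   # e.g. [|bias|, RMSE] per solution
                         [0.35, 0.40],
                         [0.80, 0.15]])
      norm = (pareto - pareto.min(axis=0)) / np.ptp(pareto, axis=0)
      robust = pareto[np.argmin(np.linalg.norm(norm, axis=1))]
      print("compromise solution:", robust)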

  14. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has been widely performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, using CZT-based small-animal SPECT for quantitative accuracy. We experimentally investigated quantitative measurements with respect to primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) across various energy window settings using Triumph X-SPECT. Two ways of setting the energy window were considered: an on-peak window and an off-peak window. In the on-peak window setting, energy centers were set on the photopeaks. In the off-peak window setting, the ratio of the energy differences between the photopeak and the lower and higher thresholds varied from 4:6 to 3:7. In addition, the energy-window width for 99mTc varied from 5% to 20%, and that for 201Tl varied from 10% to 30%. The results of this study enabled us to determine the optimal energy windows for each isotope in terms of primary count rate, CNR, and SF. We selected the optimal energy window as the one that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be optimal. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as optimal. Our results demonstrated that the energy window should be chosen carefully, based on quantitative measurements, in order to take advantage of the desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
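
    The figures of merit named above can be computed directly from region-of-interest statistics; a sketch with hypothetical phantom numbers (not measurements from the study):

      # Contrast-to-noise ratio and scatter fraction from ROI statistics
      import numpy as np

      roi_signal = np.array([220.0, 231.0, 225.0])       # counts in the hot-region ROI
      roi_background = np.array([100.0, 96.0, 104.0])    # counts in the background ROI

      cnr = (roi_signal.mean() - roi_background.mean()) / roi_background.std()
      scatter, total = 1.8e5, 9.5e5                      # scatter vs. total counts in window
      sf = scatter / total                               # scatter fraction
      print(f"CNR = {cnr:.2f}, SF = {sf:.2%}")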

  15. The importance of identity-by-state information for the accuracy of genomic selection

    PubMed Central

    2012-01-01

    Background: It is commonly assumed that prediction of genome-wide breeding values in genomic selection is achieved by capitalizing on linkage disequilibrium between markers and QTL but also on genetic relationships. Here, we investigated the reliability of predicting genome-wide breeding values based on population-wide linkage disequilibrium information, based on identity-by-descent relationships within the known pedigree, and to what extent linkage disequilibrium information improves predictions based on identity-by-descent genomic relationship information. Methods: The study was performed on milk, fat, and protein yield, using genotype data on 35 706 SNP and deregressed proofs of 1086 Italian Brown Swiss bulls. Genome-wide breeding values were predicted using a genomic identity-by-state relationship matrix and a genomic identity-by-descent relationship matrix (averaged over all marker loci). The identity-by-descent matrix was calculated by linkage analysis using one to five generations of pedigree data. Results: We showed that genome-wide breeding value prediction based only on identity-by-descent genomic relationships within the known pedigree was as or more reliable than that based on identity-by-state, which implicitly also accounts for genomic relationships that occurred before the known pedigree. Furthermore, combining the two matrices did not improve the prediction compared to using identity-by-descent alone. Including different numbers of generations in the pedigree showed that most of the information in genome-wide breeding value prediction comes from animals with known common ancestors less than four generations back in the pedigree. Conclusions: Our results show that, in pedigreed breeding populations, the accuracy of genome-wide breeding values obtained by identity-by-descent relationships was not improved by identity-by-state information. Although, in principle, genomic selection based on identity-by-state does not require pedigree data, it does use the
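
    For reference, a standard identity-by-state genomic relationship matrix of the kind compared above (VanRaden's first construction; a common convention, not necessarily the paper's exact implementation):

      # Identity-by-state genomic relationship matrix G = ZZ' / (2 * sum(p(1-p)))
      import numpy as np

      rng = np.random.default_rng(13)
      M = rng.integers(0, 3, size=(50, 500)).astype(float)   # 50 bulls x 500 SNPs (stand-ins)
      p = M.mean(axis=0) / 2                                 # allele frequencies
      Z = M - 2 * p                                          # genotypes centered by 2p
      G = Z @ Z.T / (2 * np.sum(p * (1 - p)))                # IBS relationship matrix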

  16. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity.

    PubMed

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-01-01

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode. PMID:26153773

  17. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity

    PubMed Central

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-01-01

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode. PMID:26153773

  18. Expertise Effects in Face-Selective Areas are Robust to Clutter and Diverted Attention, but not to Competition.

    PubMed

    McGugin, Rankin Williams; Van Gulick, Ana E; Tamber-Rosenau, Benjamin J; Ross, David A; Gauthier, Isabel

    2015-09-01

    Expertise effects for nonface objects in face-selective brain areas may reflect stable aspects of neuronal selectivity that determine how observers perceive objects. However, bottom-up (e.g., clutter from irrelevant objects) and top-down manipulations (e.g., attentional selection) can influence activity, affecting the link between category selectivity and individual performance. We test the prediction that individual differences expressed as neural expertise effects for cars in face-selective areas are sufficiently stable to survive clutter and manipulations of attention. Additionally, behavioral work and work using event-related potentials suggest that expertise effects may not survive competition; we investigate this using functional magnetic resonance imaging. Subjects varying in expertise with cars made 1-back decisions about cars, faces, and objects in displays containing one or two objects, with only one category attended. Univariate analyses suggest car expertise effects are robust to clutter, dampened by reducing attention to cars, but nonetheless more robust to manipulations of attention than to competition. While univariate expertise effects are effectively abolished by competition between cars and faces, multivariate analyses reveal new information related to car expertise. These results demonstrate that signals in face-selective areas predict expertise effects for nonface objects in a variety of conditions, although individual differences may be expressed in different dependent measures depending on task and instructions. PMID:24682187

  19. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  20. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    NASA Astrophysics Data System (ADS)

    van de Schoot, A. J. A. J.; Visser, J.; van Kesteren, Z.; Janssen, T. M.; Rasch, C. R. N.; Bel, A.

    2016-02-01

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal in terms of

  1. Intercalation Compounds as Inner Reference Electrodes for Reproducible and Robust Solid-Contact Ion-Selective Electrodes.

    PubMed

    Ishige, Yu; Klink, Stefan; Schuhmann, Wolfgang

    2016-04-01

    With billions of assays performed every year, ion-selective electrodes (ISEs) provide a simple and fast technique for clinical analysis of blood electrolytes. The development of cheap, miniaturized solid-contact (SC-)ISEs for integrated systems, however, remains a difficult balancing act between size, robustness, and reproducibility, because the defined interface potentials between the ion-selective membrane and the inner reference electrode (iRE) are often compromised. We demonstrate that target cation-sensitive intercalation compounds, such as partially charged lithium iron phosphate (LFP), can be applied as iREs of the quasi-first kind for ISEs. The symmetrical response of the interface potentials towards target cations ultimately results in ISEs with high robustness towards the inner filling (ca. 5 mV per decade of concentration) as well as robust and miniaturized SC-ISEs. They have a predictable and stable potential derived from the LiFePO4/FePO4 redox couple (97.0±1.5 mV after 42 days). PMID:26971569

  2. Genomic selection accuracy for grain quality traits in biparental wheat populations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) is a promising tool for plant and animal breeding that uses genome wide molecular marker data to capture small and large effect quantitative trait loci and predict the genetic value of selection candidates. Genomic selection has been shown previously to have higher prediction ...

  3. Screening Accuracy of Level 2 Autism Spectrum Disorder Rating Scales: A Review of Selected Instruments

    ERIC Educational Resources Information Center

    Norris, Megan; Lecavalier, Luc

    2010-01-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be…

  4. Tailored selection of study individuals to be sequenced in order to improve the accuracy of genotype imputation.

    PubMed

    Peil, Barbara; Kabisch, Maria; Fischer, Christine; Hamann, Ute; Bermejo, Justo Lorenzo

    2015-02-01

    The addition of sequence data from own-study individuals to genotypes from external data repositories, for example, the HapMap, has been shown to improve the accuracy of imputed genotypes. Early approaches for reference panel selection favored individuals who best reflect recombination patterns in the study population. By contrast, a maximization of genetic diversity in the reference panel has been recently proposed. We investigate here a novel strategy to select individuals for sequencing that relies on the characterization of the ancestral kernel of the study population. The simulated study scenarios consisted of several combinations of subpopulations from HapMap. HapMap individuals who did not belong to the study population constituted an external reference panel which was complemented with the sequences of study individuals selected according to different strategies. In addition to a random choice, individuals with the largest statistical depth according to the first genetic principal components were selected. In all simulated scenarios the integration of sequences from own-study individuals increased imputation accuracy. The selection of individuals based on the statistical depth resulted in the highest imputation accuracy for European and Asian study scenarios, whereas random selection performed best for an African-study scenario. Present findings indicate that there is no universal 'best strategy' to select individuals for sequencing. We propose to use the methodology described in the manuscript to assess the advantage of focusing on the ancestral kernel under own study characteristics (study size, genetic diversity, availability and properties of external reference panels, frequency of imputed variants…). PMID:25537753
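
    A sketch of the depth-based choice described above: project genotypes onto the leading principal components and select the individuals with the largest statistical depth, approximated here by the smallest Mahalanobis distance from the center. Data and the number of sequenced individuals are placeholders.

      # Depth-based selection of individuals to sequence (Mahalanobis depth on PCs)
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(14)
      G = rng.integers(0, 3, size=(300, 2000)).astype(float)  # study genotypes (stand-ins)
      pcs = PCA(n_components=2).fit_transform(G)

      centered = pcs - pcs.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(pcs, rowvar=False))
      dist = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
      to_sequence = np.argsort(dist)[:20]                     # 20 "deepest" individuals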

  5. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    PubMed Central

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data is hard to come by. Therefore, it seems preferable to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection. PMID:27190509
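
    A hedged sketch of pairing feature subsets with one-class classification: train a one-class SVM on positive examples only, then compare candidate feature subsets on held-out positives and decoys. The data are synthetic and the subsets arbitrary; the paper's pre-miRNA features and selection procedures differ.

      # One-class SVM accuracy as a function of the feature subset (illustrative)
      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(15)
      pos = rng.normal(0, 1, size=(300, 50))             # positive examples, 50 features
      neg = rng.normal(2, 1, size=(300, 50))             # decoys, used only for evaluation

      subsets = [list(range(10)), list(range(10, 30)), list(range(50))]
      for cols in subsets:
          clf = OneClassSVM(kernel='rbf', nu=0.1).fit(pos[:200][:, cols])
          acc = (np.mean(clf.predict(pos[200:][:, cols]) == 1)
                 + np.mean(clf.predict(neg[:, cols]) == -1)) / 2
          print(len(cols), "features -> balanced accuracy", round(acc, 3))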

  6. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants.

    PubMed

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data is hard to come by. Therefore, it seems preferable to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection. PMID:27190509

  7. Accuracy of initial codon selection by aminoacyl-tRNAs on the mRNA-programmed bacterial ribosome

    PubMed Central

    Zhang, Jingji; Ieong, Ka-Weng; Johansson, Magnus; Ehrenberg, Måns

    2015-01-01

    We used a cell-free system with pure Escherichia coli components to study initial codon selection of aminoacyl-tRNAs in ternary complex with elongation factor Tu and GTP on messenger RNA-programmed ribosomes. We took advantage of the universal rate-accuracy trade-off for all enzymatic selections to determine how the efficiency of initial codon readings decreased linearly toward zero as the accuracy of discrimination against near-cognate and wobble codon readings increased toward the maximal asymptote, the d value. We report data on the rate-accuracy variation for 7 cognate, 7 wobble, and 56 near-cognate codon readings comprising about 15% of the genetic code. Their d values varied about 400-fold in the 200–80,000 range depending on type of mismatch, mismatch position in the codon, and tRNA isoacceptor type. We identified error hot spots (d = 200) for U:G misreading in second and U:U or G:A misreading in third codon position by His-tRNAHis and, as also seen in vivo, Glu-tRNAGlu. We suggest that the proofreading mechanism has evolved to attenuate error hot spots in initial selection such as those found here. PMID:26195797
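
    The linear rate-accuracy trade-off described above can be written (our notation, a paraphrase rather than the paper's exact expression) as

      \left(\frac{k_{cat}}{K_M}\right)_{A} = \left(\frac{k_{cat}}{K_M}\right)_{\max}\left(1 - \frac{A}{d}\right),

    so the efficiency of a codon reading falls linearly from its maximum toward zero as the current accuracy A approaches the maximal asymptote d.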

  8. Ligand similarity guided receptor selection enhances docking accuracy and recall for non-nucleoside HIV reverse transcriptase inhibitors.

    PubMed

    Stanton, Richard A; Nettles, James H; Schinazi, Raymond F

    2015-11-01

    Non-nucleoside reverse transcriptase inhibitors (NNRTI) are allosteric inhibitors of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase (RT), a viral polymerase essential to infection. Despite the availability of >150 NNRTI-bound RT crystal structures, rational design of new NNRTI remains challenging because of the variability of their induced-fit, hydrophobic binding patterns. Docking NNRTI yields inconsistent results that vary markedly depending on the receptor structure used, as only 27% of the >20k cross-docking calculations we performed using known NNRTI were accurate. In order to determine if a hospitable receptor for docking could be selected a priori, we evaluated more than 40 chemical descriptors for their ability to pre-select a best receptor for NNRTI cross-docking. The receptor selection was based on similarity scores between the bound- and target-ligands generated by each descriptor. The top descriptors were able to double the probability of cross-docking accuracy over random receptor selection. Additionally, recall of known NNRTI from a large library of similar decoys was increased using the same approach. The results demonstrate the utility of pre-selecting receptors when docking into difficult targets. Graphical Abstract: Cross-docking accuracy increases when using chemical descriptors to determine the NNRTI with maximum similarity to the new compound and then docking into its respective receptor. PMID:26450349
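
    A sketch of similarity-guided receptor selection with RDKit fingerprints: score each crystal structure's bound ligand against the target ligand by Tanimoto similarity and dock into the receptor whose ligand scores highest. The SMILES strings and PDB codes are placeholders, and Morgan/Tanimoto is only one of the many descriptors the paper evaluates.

      # Pick the receptor whose bound ligand is most similar to the target ligand
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      target = Chem.MolFromSmiles("c1ccccc1CCN")                  # hypothetical new NNRTI
      bound = {"1ABC": "c1ccccc1C(=O)N", "2XYZ": "CCOc1ccccc1"}   # receptor: bound ligand

      def fp(mol):
          return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      scores = {pdb: DataStructs.TanimotoSimilarity(fp(target), fp(Chem.MolFromSmiles(s)))
                for pdb, s in bound.items()}
      best = max(scores, key=scores.get)                          # receptor to dock into
      print(best, scores)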

  9. Increased prediction accuracy in wheat breeding trials using a marker x environment interaction genomic selection model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the canonical GS model that accomm...

  10. Increased prediction accuracy in wheat breeding trials using a marker x environment interaction genomic selection model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype x environment interaction (GxE). Several authors have proposed extensions of the single-environment GS model th...

  11. Genomic selection accuracy using multi-family prediction models in a wheat breeding program

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) uses genome-wide molecular marker data to predict the genetic value of selection candidates in breeding programs. In plant breeding, the ability to produce large numbers of progeny per cross allows GS to be conducted within each family. However, this approach requires phenotyp...

  12. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites

    PubMed Central

    Manzoni, Giulia; Briquet, Sylvie; Risco-Castillo, Veronica; Gaultier, Charlotte; Topçu, Selma; Ivănescu, Maria Larisa; Franetich, Jean-François; Hoareau-Coudert, Bénédicte; Mazier, Dominique; Silvie, Olivier

    2014-01-01

    Experimental genetics has been widely used to explore the biology of the malaria parasites. The rodent parasites Plasmodium berghei and, less frequently, P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require large numbers of mice. Here we describe a novel strategy that combines positive-negative drug selection and flow cytometry-assisted sorting of fluorescent parasites for the rapid generation of drug-selectable marker-free P. berghei and P. yoelii mutant parasites expressing a GFP or a GFP-luciferase cassette, using minimal numbers of mice. We further illustrate how this new strategy facilitates phenotypic analysis of genetically modified parasites by fluorescence and bioluminescence imaging of P. berghei mutants arrested during liver stage development. PMID:24755823

  13. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites.

    PubMed

    Manzoni, Giulia; Briquet, Sylvie; Risco-Castillo, Veronica; Gaultier, Charlotte; Topçu, Selma; Ivănescu, Maria Larisa; Franetich, Jean-François; Hoareau-Coudert, Bénédicte; Mazier, Dominique; Silvie, Olivier

    2014-01-01

    Experimental genetics has been widely used to explore the biology of the malaria parasites. The rodent parasites Plasmodium berghei and, less frequently, P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require large numbers of mice. Here we describe a novel strategy that combines positive-negative drug selection and flow cytometry-assisted sorting of fluorescent parasites for the rapid generation of drug-selectable marker-free P. berghei and P. yoelii mutant parasites expressing a GFP or a GFP-luciferase cassette, using minimal numbers of mice. We further illustrate how this new strategy facilitates phenotypic analysis of genetically modified parasites by fluorescence and bioluminescence imaging of P. berghei mutants arrested during liver stage development. PMID:24755823

  14. Impact of feature selection on the accuracy and spatial uncertainty of per-field crop classification using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Löw, F.; Michel, U.; Dech, S.; Conrad, C.

    2013-11-01

    Crop mapping is one major component of agricultural resource monitoring using remote sensing. Yield or water demand modeling requires that both the total surface that is cultivated and the accurate distribution of crops are known. Map quality is crucial and influences the model outputs. Although the use of multi-spectral time series data in crop mapping has been acknowledged, the potentially high dimensionality of the input data remains an issue. In this study Support Vector Machines (SVM) are used for crop classification in irrigated landscapes at the object level. Input to the classifications are 71 multi-seasonal spectral and geostatistical features computed from RapidEye time series. The random forest (RF) feature importance score was used to select a subset of features that achieved optimal accuracies. The relationship between the hard result accuracy and the soft output from the SVM is investigated by employing two measures of uncertainty: the maximum a posteriori probability and the alpha quadratic entropy. Specifically, the effect of feature selection on map uncertainty is investigated by looking at the soft outputs of the SVM, in addition to classical accuracy metrics. Overall, the SVMs applied to the reduced feature subspaces composed of the most informative multi-seasonal features led to a clear increase in classification accuracy of up to 4.3%, and to a significant decline in thematic uncertainty. SVM was shown to be affected by feature space size and could benefit from RF-based feature selection. Uncertainty measures from SVM are an informative source of information on the spatial distribution of error in the crop maps.
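
    A condensed sketch of the pipeline just described: random forest importances rank the features, an SVM is trained on the top subset, and per-object uncertainty is summarized from the class-membership probabilities. The data are synthetic, and the quadratic entropy is shown in one common form rather than the paper's exact definition.

      # RF-based feature selection, SVM classification, and soft-output uncertainty
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=500, n_features=71, n_informative=15,
                                 n_classes=3, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      top = np.argsort(rf.feature_importances_)[::-1][:20]   # keep the best 20 of 71

      svm = SVC(probability=True, random_state=0).fit(X[:, top], y)
      proba = svm.predict_proba(X[:, top])
      max_posterior = proba.max(axis=1)                      # per-object confidence
      quad_entropy = 1.0 - np.sum(proba**2, axis=1)          # one quadratic-entropy form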

  15. The effect of tray selection on the accuracy of elastomeric impression materials.

    PubMed

    Gordon, G E; Johnson, G H; Drennon, D G

    1990-01-01

    This study evaluated the accuracy of reproduction of stone casts made from impressions using different tray and impression materials. The tray materials used were an acrylic resin, a thermoplastic, and a plastic. The impression materials used were an addition silicone, a polyether, and a polysulfide. Impressions were made of a stainless steel master die that simulated crown preparations for a fixed partial denture and an acrylic resin model with cross-arch and anteroposterior landmarks in stainless steel that typify clinical intra-arch distances. Impressions of the fixed partial denture simulation were made with all three impression materials and all three tray types. Impressions of the cross-arch and anteroposterior landmarks were made by using all three tray types with only the addition reaction silicone impression material. Impressions were poured at 1 hour with a type IV dental stone. Data were analyzed by using ANOVA with a sample size of five. Results indicated that custom-made trays of acrylic resin and the thermoplastic material performed similarly regarding die accuracy and produced clinically acceptable casts. The stock plastic tray consistently produced casts with greater dimensional change than the two custom trays. PMID:2404101

  16. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...

  17. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genome-wide molecular markers are readily being applied to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorp...

  18. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Sequence-based genotyping methods can generate de novo large numbers of markers. However, without a reference genome, these markers are unordered and typically have a lar...

  19. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  20. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
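
    A schematic sketch of the decision-level fusion idea (not the authors' modBMVT/RANSARC pipeline): simulated SAR- and IR-derived features for registered candidate detections are concatenated, and boosted decision stumps implicitly perform the feature selection. All feature values are synthetic.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n = 2000
        labels = rng.integers(0, 2, size=n)               # 1 = target, 0 = clutter
        sar = rng.normal(labels[:, None] * np.ones(4), 1.0, size=(n, 4))   # SAR features
        ir = rng.normal(labels[:, None] * np.array([0.8, 0.2, 1.2]), 1.0, size=(n, 3))
        X = np.hstack([sar, ir])                          # registered, concatenated

        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
        clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print(f"fusion accuracy: {clf.score(X_te, y_te):.2f}")
        print("feature importances:", np.round(clf.feature_importances_, 2))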

  1. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961
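
    The contrast between informative and minimally informative a priori distributions can be made concrete with conjugate normal-normal updating. The sketch below is a simplified stand-in for the paper's Eberhart and Russell-based model; all numbers are illustrative.

        import numpy as np

        yields = np.array([1.42, 1.55, 1.31, 1.60, 1.48, 1.52])   # t/ha across 6 trials
        sigma2 = 0.02                                             # known error variance

        def posterior(prior_mean, prior_var, data, sigma2):
            """Posterior mean and variance of a normal mean with known data variance."""
            post_var = 1.0 / (1.0 / prior_var + data.size / sigma2)
            post_mean = post_var * (prior_mean / prior_var + data.sum() / sigma2)
            return post_mean, post_var

        # Minimally informative prior: huge variance, posterior tracks the sample mean.
        print(posterior(0.0, 1e6, yields, sigma2))
        # Informative prior (e.g., from a meta-analysis): shrinks and tightens the estimate.
        print(posterior(1.45, 0.01, yields, sigma2))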

  2. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
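
    A hedged sketch of the general calibration procedure (not the study's code): a simple analytical TTD, here an exponential model with mean age tau, is fitted by least squares to synthetic concentrations of three tracers with different, invented input histories.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.arange(0.0, 60.0)                    # years before sampling

        # Hypothetical atmospheric input histories for three tracers.
        inputs = [np.interp(t, [0, 30, 59], [1.0, 0.5, 0.0]),
                  np.interp(t, [0, 30, 59], [0.2, 0.9, 0.1]),
                  np.full_like(t, 0.7)]

        def conc(tau, c_in):
            """Sample concentration: input history convolved with an exponential TTD."""
            g = np.exp(-t / tau)
            g /= g.sum()                            # discretized, normalized TTD
            return float(np.sum(c_in * g))

        true_tau = 12.0
        obs = np.array([conc(true_tau, c) * (1 + e)
                        for c, e in zip(inputs, (0.03, -0.02, 0.01))])   # noisy data

        fit = least_squares(lambda p: np.array([conc(p[0], c) for c in inputs]) - obs,
                            x0=[5.0], bounds=(0.1, 200.0))
        print(f"estimated mean age: {fit.x[0]:.1f} yr (true {true_tau})")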

  3. Screening accuracy of Level 2 autism spectrum disorder rating scales. A review of selected instruments.

    PubMed

    Norris, Megan; Lecavalier, Luc

    2010-07-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be ASD-specific, and have published evidence of diagnostic validity in peer-reviewed journals. The five scales reviewed were: the Social Communication Questionnaire (SCQ), Gilliam Autism Rating Scale/Gilliam Autism Rating Scale-Second Edition (GARS/GARS-2), Social Responsiveness Scale (SRS), Autism Spectrum Screening Questionnaire (ASSQ), and Asperger Syndrome Diagnostic Scale (ASDS). Twenty total studies were located, most examining the SCQ. Research on the other scales was limited. Comparisons between scales were few and available evidence of diagnostic validity is scarce for certain subpopulations (e.g., lower functioning individuals, PDDNOS). Overall, the SCQ performed well, the SRS and ASSQ showed promise, and the GARS/GARS-2 and ASDS demonstrated poor sensitivity. This review indicates that Level 2 ASD caregiver-completed rating scales are in need of much more scientific scrutiny. PMID:20591956

  4. Emerging feed-forward inhibition allows the robust formation of direction selectivity in the developing ferret visual cortex

    PubMed Central

    Escobar, Gina M.; Maffei, Arianna; Miller, Paul

    2014-01-01

    The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528

  5. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery (HSI). Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets yielding consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set based on such measures as the D-optimal score and various distance norms are tested in a simulation experiment.
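
    A sketch of the subset-valuation idea under assumed metric definitions (the paper's exact scoring models are not reproduced here): each candidate training set is scored by the average pairwise distance between its image noise characteristics plus a D-optimal-style log-determinant, and the best-scoring subset is kept. Values are simulated.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(11)
        noise_feats = rng.normal(size=(20, 3))    # 20 images x 3 noise characteristics

        def subset_value(idx):
            """Score a candidate set by average pairwise distance + log det(F'F)."""
            F = noise_feats[list(idx)]
            avg_dist = np.mean([np.linalg.norm(a - b) for a, b in combinations(F, 2)])
            sign, logdet = np.linalg.slogdet(F.T @ F)   # D-optimal-style score
            return avg_dist + (logdet if sign > 0 else -np.inf)

        candidates = [tuple(rng.choice(20, size=8, replace=False)) for _ in range(200)]
        best = max(candidates, key=subset_value)
        print("selected training images:", sorted(best))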

  6. Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization

    NASA Astrophysics Data System (ADS)

    Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.

    2015-10-01

    The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.

  7. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from molecules excited out of focus reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscopy that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and epi-illumination is compared. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF would be improved significantly. We demonstrate its capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  8. Effect of wavelength selection on the accuracy of blood oxygen saturation estimates obtained from photoacoustic images

    NASA Astrophysics Data System (ADS)

    Hochuli, Roman; Beard, Paul C.; Cox, Ben

    2015-03-01

    In photoacoustic tomography (PAT) the image contrast is due to optical absorption, and because of this PAT images are sensitive to changes in blood oxygen saturation (sO2). However, this is not a linear relationship due to the presence of a non-uniform light fluence distribution. In this paper we systematically evaluate the conditions in which an approximate linear inversion scheme, which assumes the internal fluence distribution is unchanged when the absorption coefficient changes, can give accurate estimates of sO2. A numerical phantom of highly vascularised tissue is used to test this assumption. It is shown that using multiple wavelengths over a broad range of the near-infrared spectrum yields inaccurate estimates of oxygenation, while a careful selection of wavelengths in the 620-920 nm range is likely to yield more accurate oxygenation values. We demonstrate that a 1D fluence correction obtained by fitting a linear function to the average decay rate in the image can further improve the estimates. However, opting to use these longer wavelengths involves sacrificing signal-to-noise ratio in the image, as the absorption of blood is low in this range. This results in an inherent trade-off between error in the sO2 estimates due to fluence variation and error due to noise. This study shows that the depth to which sO2 can be estimated accurately using a linear approximation is limited in vivo, even with idealised measurements, to at most 3 mm. In practice, there will be even greater uncertainties affecting the estimates, e.g., due to bandlimited or partial-view acoustic detection.
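
    The linear inversion scheme evaluated above can be sketched in a few lines: assuming the fluence cancels, the photoacoustic spectrum at a pixel is a linear combination of the HbO2 and Hb absorption spectra, so sO2 follows from a least-squares fit at the selected wavelengths. The extinction values below are rough placeholders, not tabulated data.

        import numpy as np

        wavelengths = np.array([620.0, 700.0, 780.0, 850.0, 920.0])   # nm, selected
        eps_hbo2 = np.array([0.15, 0.29, 0.55, 1.06, 1.25])           # placeholder values
        eps_hb = np.array([1.10, 0.45, 0.42, 0.38, 0.35])             # placeholder values
        E = np.stack([eps_hbo2, eps_hb], axis=1)                      # 5x2 design matrix

        # Synthetic measurement for a vessel with true sO2 = 0.8, plus 2% noise.
        c_true = np.array([0.8, 0.2])                                 # [HbO2, Hb]
        signal = E @ c_true
        signal *= 1 + 0.02 * np.random.default_rng(1).normal(size=signal.size)

        c_hat, *_ = np.linalg.lstsq(E, signal, rcond=None)            # linear unmixing
        print(f"estimated sO2: {c_hat[0] / c_hat.sum():.3f}")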

  9. A New Class of Advanced Accuracy Satellite Instrumentation (AASI) for the CLARREO Mission: Interferometer Test-bed Tradestudies and Selection

    NASA Astrophysics Data System (ADS)

    Taylor, J. K.; Revercomb, H. E.; Grandmont, F. J.; Buijs, H.; Gero, P. J.; Best, F. A.; Tobin, D. C.; Knuteson, R. O.; Laporte, D. D.

    2009-12-01

    NASA has selected CLARREO (Climate Absolute Radiance and Refractivity Observatory), a climate mission recommended by the 2007 Decadal Survey of the US National Research Council, as a potential new start in 2010. CLARREO will measure spectrally resolved radiance from the earth and atmospheric bending of GPS signals related to atmospheric structure (refractivity) as benchmark measurements of long-term climate change trends. CLARREO will provide more complete spectral and time-of-day coverage and will fly basic physical standards to eliminate the need to assume on-board reference stability. Therefore, the spectral radiances from this mission will also serve as benchmarks to propagate a highly accurate calibration to other space-borne IR instruments. Technology development and risk reduction for the CLARREO mission is being conducted at the Space Science and Engineering Center at the University of Wisconsin-Madison. The objective of this work is to develop and demonstrate the technology necessary to measure IR spectrally resolved radiances (3 - 50 micrometers) with ultra high accuracy (< 0.1 K 3-sigma brightness temperature at scene temperature) for the CLARREO benchmark climate mission. The ultimate benefit to society is irrefutable quantification of climate change and a solid basis for improving climate model forecasts. The proposed work (University of Wisconsin-Madison and Harvard University) was selected for the 2007 NASA Instrument Incubator Program (IIP) and will develop four primary technologies to assure SI traceability on-orbit and demonstrate the ultra high accuracy measurement capability required for CLARREO: (1) On-orbit Absolute Radiance Standard (OARS), a high emissivity blackbody source that uses multiple miniature phase-change cells to provide a revolutionary on-orbit standard with absolute temperature accuracy proven over a wide range of temperatures, (2) On-orbit Cavity Emissivity Modules (OCEMs), providing a source (quantum cascade laser, QCL, or

  10. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  11. Orthogonal Selection and Fixing of Coordination Self-Assembly Pathways for Robust Metallo-organic Ensemble Construction.

    PubMed

    Burke, Michael J; Nichol, Gary S; Lusby, Paul J

    2016-07-27

    Supramolecular construction strategies have overwhelmingly relied on the principles of thermodynamic control. While this approach has yielded an incredibly diverse and striking collection of ensembles, there are downsides, most obviously the necessity to trade-off reversibility against structural integrity. Herein we describe an alternative "assembly-followed-by-fixing" approach that possesses the high-yielding, atom-efficient advantages of reversible self-assembly reactions, yet gives structures that possess a covalent-like level of kinetic robustness. We have chosen to exemplify these principles in the preparation of a series of M2L3 helicates and M4L6 tetrahedra. While the rigidity of various bis(bidentate) ligands causes the larger species to be energetically preferred, we are able to freeze the self-assembly process under "non-ambient" conditions, to selectively give the disfavored M2L3 helicates. We also demonstrate "kinetic-stimuli" (redox and light)-induced switching between architectures, notably reconstituting the lower energy tetrahedra into highly distorted helicates. PMID:27351912

  12. Atrial-like cardiomyocytes from human pluripotent stem cells are a robust preclinical model for assessing atrial-selective pharmacology

    PubMed Central

    Devalla, Harsha D; Schwach, Verena; Ford, John W; Milnes, James T; El-Haou, Said; Jackson, Claire; Gkatzis, Konstantinos; Elliott, David A; Chuva de Sousa Lopes, Susana M; Mummery, Christine L; Verkerk, Arie O; Passier, Robert

    2015-01-01

    Drugs targeting atrial-specific ion channels, Kv1.5 or Kir3.1/3.4, are being developed as new therapeutic strategies for atrial fibrillation. However, current preclinical studies carried out in non-cardiac cell lines or animal models may not accurately represent the physiology of a human cardiomyocyte (CM). In the current study, we tested whether human embryonic stem cell (hESC)-derived atrial CMs could predict atrial selectivity of pharmacological compounds. By modulating retinoic acid signaling during hESC differentiation, we generated atrial-like (hESC-atrial) and ventricular-like (hESC-ventricular) CMs. We found the expression of atrial-specific ion channel genes, KCNA5 (encoding Kv1.5) and KCNJ3 (encoding Kir 3.1), in hESC-atrial CMs and further demonstrated that these ion channel genes are regulated by COUP-TF transcription factors. Moreover, in response to the multiple ion channel blocker vernakalant and the Kv1.5 blocker XEN-D0101, hESC-atrial but not hESC-ventricular CMs showed action potential (AP) prolongation due to a reduction in early repolarization. In hESC-atrial CMs, XEN-R0703, a novel Kir3.1/3.4 blocker, restored the AP shortening caused by carbachol (CCh). Neither CCh nor XEN-R0703 had an effect on hESC-ventricular CMs. In summary, we demonstrate that hESC-atrial CMs are a robust model for pre-clinical testing to assess atrial selectivity of novel antiarrhythmic drugs. PMID:25700171

  13. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

    Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as understanding of uncertainties associated with the SRM measurement system. In this paper, we use the SRM at Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with the SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without placing a u-channel sample on the SRM tray. The acquired KCC SRM sensor response shows significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm possibly due to patterns or damages on tray surface or rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
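
    To illustrate why deconvolution matters and how it can be regularized against measurement noise of the kind quantified above, the following sketch (not the authors' software) convolves a synthetic magnetization with a Gaussian stand-in for a ~50 mm-wide sensor response and restores it with a Wiener-style spectral division. All widths and noise levels are illustrative.

        import numpy as np

        x = np.arange(0.0, 400.0)                            # measurement positions, mm
        m_true = np.where((x > 120) & (x < 250), 1.0, 0.2)   # synthetic magnetization

        # Gaussian stand-in for a pick-up coil response with ~50 mm FWHM.
        sigma = 50.0 / 2.3548
        resp = np.exp(-0.5 * ((x - x[x.size // 2]) / sigma) ** 2)
        resp /= resp.sum()

        measured = np.convolve(m_true, resp, mode="same")    # smoothed by the sensor
        measured += 0.01 * np.random.default_rng(0).normal(size=x.size)

        # Wiener-style deconvolution: regularized division in the frequency domain.
        R = np.fft.rfft(np.fft.ifftshift(resp))
        M = np.fft.rfft(measured)
        m_rec = np.fft.irfft(M * np.conj(R) / (np.abs(R) ** 2 + 1e-3), n=x.size)

        rms = lambda e: np.sqrt(np.mean(e ** 2))
        print(f"rms error, smoothed: {rms(measured - m_true):.3f}, "
              f"deconvolved: {rms(m_rec - m_true):.3f}")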

  14. Accuracy and reproducibility of the Oxi/Ferm system in identifying a select group of unusual gram-negative bacilli.

    PubMed Central

    Nadler, H; George, H; Barr, J

    1979-01-01

    The Oxi/Ferm (O/F) identification system was compared in a double-blind study to a conventional test battery for the characterization of 96 reference and clinical strains consisting of 83 nonfermentative and 13 oxidase-producing, fermentative gram-negative bacilli. The O/F tube and supplemental tests correctly identified 84% of the nonfermentative and 77% of the oxidase-producing, fermentative bacilli. However, when the supplemental tests were excluded and the biochemical profiles generated by all nine O/F tube reactions were examined, the profile accuracy reached 95% (79 of 83) for the nonfermentative and 93% (12 of 13) for oxidase-producing, fermentative bacilli. Seven of the nine O/F substrate reactions demonstrated greater than or equal to 89% agreement with conventional reactions, whereas the urea and arginine reactions provided 82 and 85% agreement, respectively. Replicate O/F tests with six selected organisms demonstrated 97% identification reproducibility and 84% overall substrate reproducibility. The mean O/F identification time was 2.6 days as compared to 3.3 days for the conventional system. Although this study suggests that the O/F system is a convenient, rapid, and accurate alternative to conventional identification methods, several modifications are recommended. PMID:372222

  15. Mechanisms for Robust Cognition.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A

    2015-08-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. PMID:25352094

  16. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation, a three-dimensional generalization of maximally stable extremal regions, that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  17. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers

    PubMed Central

    Sehgal, Vasudha; Seviour, Elena G.; Moss, Tyler J.; Mills, Gordon B.; Azencott, Robert; Ram, Prahlad T.

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases. PMID:26505200

  18. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    PubMed

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases. PMID:26505200
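
    A simplified sketch in the spirit of the approach (not the published RSA code): scan candidate expression cutoffs for the split that best separates survival times, then judge the best split against a null distribution built by random resampling, so the optimization over cutoffs is absorbed into the null. Data and the separation statistic are stand-ins.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 120
        expr = rng.normal(size=n)                    # miRNA expression, z-scored
        surv = rng.exponential(2.0, size=n) + np.where(expr > 0.5, 2.0, 0.0)

        def best_split(expr, surv, quantiles=np.linspace(0.2, 0.8, 25)):
            """Scan cutoffs; return (cutoff, statistic) maximizing group separation."""
            scored = []
            for q in quantiles:
                c = np.quantile(expr, q)
                hi, lo = surv[expr > c], surv[expr <= c]
                scored.append((abs(hi.mean() - lo.mean()), c))
            stat, cut = max(scored)
            return cut, stat

        cut, stat = best_split(expr, surv)

        # Resampling p-value: the full cutoff scan is repeated on shuffled expression.
        null = np.array([best_split(rng.permutation(expr), surv)[1] for _ in range(500)])
        print(f"cutoff = {cut:.2f}, statistic = {stat:.2f}, p = {np.mean(null >= stat):.3f}")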

  19. Comparison of Accuracy and Efficiency of Directed Scanning and Group-Item Scanning for Augmentative Communication Selection Techniques with Typically Developing Preschoolers

    ERIC Educational Resources Information Center

    Dropik, Patricia L.; Reichle, Joe

    2008-01-01

    Purpose: Directed scanning and group-item scanning both represent options for increased scanning efficiency. This investigation compared accuracy and speed of selection with preschoolers using each scanning method. The study's purpose was to describe performance characteristics of typically developing children and to provide a reliable assessment…

  20. The problem of selecting an optimum frequency for a measuring generator in determining the value of the hydrophysical parameter with a given accuracy

    NASA Technical Reports Server (NTRS)

    Stepanyuk, V. A.

    1974-01-01

    The selection of the optimum frequency for a measuring generator for determining the value of the hydrophysical parameter with a given degree of accuracy is discussed. Methods from information theory for measuring generators are described. Conversion of the frequency of generators into digital form by means of statistical averaging is also described.

  1. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrade ('resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these 'flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the 'landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  2. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to the multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations on Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and the MACCC (maximum absolute value of the cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy on position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results proved that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
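
    Under assumed definitions of the two indices (the paper's exact formulas are not reproduced), the sketch below computes a MACCC from the normalized cross correlation between a simulated signal and a reference waveform, and a group velocity error from the lag of the correlation peak. Signals, sampling rate, and propagation distance are invented.

        import numpy as np

        fs = 10e6                                   # sampling rate, Hz (assumed)
        t = np.arange(0, 200e-6, 1 / fs)

        def toneburst(t0, f0=100e3, cycles=5):
            """Hann-windowed tone burst arriving at time t0."""
            tau = t - t0
            inside = (tau >= 0) & (tau <= cycles / f0)
            burst = np.sin(2 * np.pi * f0 * tau) * np.sin(np.pi * f0 * tau / cycles) ** 2
            return np.where(inside, burst, 0.0)

        ref = toneburst(50e-6)                      # reference waveform
        sim = 0.9 * toneburst(52e-6)                # simulated signal, slightly delayed

        a, b = sim - sim.mean(), ref - ref.mean()
        xc = np.correlate(a, b, mode="full") / np.sqrt(np.sum(a**2) * np.sum(b**2))
        maccc = np.abs(xc).max()                    # shape agreement, 1 = identical
        lag = (np.argmax(np.abs(xc)) - (ref.size - 1)) / fs

        dist = 0.3                                  # m, assumed propagation distance
        v_ref, v_sim = dist / 50e-6, dist / (50e-6 + lag)
        gve = abs(v_sim - v_ref) / v_ref * 100      # group velocity error, %
        print(f"MACCC = {maccc:.3f}, GVE = {gve:.2f}%")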

  3. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  4. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    PubMed

    Lu, Tao

    2016-01-01

    The gene regulation network (GRN) describes the interactions between genes and looks for models of gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRN. In this article, we propose a nonparametric ordinary differential equation (ODE) model for the nonlinear dynamic GRN. Specifically, we address the following questions simultaneously: (i) extract information from noisy time course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of experimental duration. We illustrate the usefulness of the model and associated statistical methods through simulation and real application examples. PMID:26098537
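
    A hedged sketch of steps (i)-(iii) of the pipeline, with the lasso standing in for the paper's group SCAD penalty: smooth the noisy expression curves, estimate the target gene's derivative, and regress it on the candidate regulators so that the sparse fit selects the true one. Data and network are synthetic.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 200)
        x1 = np.sin(t)                              # true regulator
        x2 = np.cos(t)                              # irrelevant gene
        x3 = np.concatenate([[0.0], np.cumsum(0.8 * x1[:-1]) * (t[1] - t[0])])
        X = np.stack([x1, x2, x3], axis=1) + 0.05 * rng.normal(size=(200, 3))

        # (i)-(ii): denoise the trajectories and differentiate the target gene.
        X_s = gaussian_filter1d(X, sigma=5, axis=0)
        dx3 = np.gradient(X_s[:, 2], t)             # dx3/dt = 0.8 * x1 by construction

        # (iii): a sparse fit keeps the true regulator and zeroes the irrelevant one.
        fit = Lasso(alpha=0.05).fit(X_s[:, :2], dx3)
        print("coefficients [x1, x2]:", np.round(fit.coef_, 2))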

  5. ZCURVE 3.0: identify prokaryotic genes with higher accuracy as well as automatically and accurately select essential genes.

    PubMed

    Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao

    2015-07-01

    In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. Such results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while also containing fewer false-positive predictions. As the exclusive function, ZCURVE 3.0 contains one post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use with the web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299
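
    For background, the Z-curve transform underlying the ZCURVE programs maps a DNA sequence to three cumulative component curves (purine/pyrimidine, amino/keto, and weak/strong hydrogen bonding), whose local behavior feeds the coding-region classifier. A minimal implementation on a toy sequence:

        import numpy as np

        def z_curve(seq):
            """Return the x, y, z Z-curve components of a DNA string."""
            seq = seq.upper()
            cum = {b: np.cumsum([c == b for c in seq]) for b in "ACGT"}
            x = (cum["A"] + cum["G"]) - (cum["C"] + cum["T"])   # purine - pyrimidine
            y = (cum["A"] + cum["C"]) - (cum["G"] + cum["T"])   # amino - keto
            z = (cum["A"] + cum["T"]) - (cum["G"] + cum["C"])   # weak - strong H bonds
            return x, y, z

        x, y, z = z_curve("ATGGCGTACGGATTGACCTGA")              # toy open reading frame
        print(x[-1], y[-1], z[-1])                              # net composition disparities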

  6. An Analysis of the Selected Materials Used in Step Measurements During Pre-Fits of Thermal Protection System Tiles and the Accuracy of Measurements Made Using These Selected Materials

    NASA Technical Reports Server (NTRS)

    Kranz, David William

    2010-01-01

    The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest rate of measurement accuracy in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences between the heights of adjacent tiles are critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that bees' wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the trammel tool rather than the shim pack yielded variable differences across tests.

  7. Prospects of Genomic Prediction in the USDA Soybean Germplasm Collection: Historical Data Creates Robust Models for Enhancing Selection of Accessions

    PubMed Central

    Jarquin, Diego; Specht, James; Lorenz, Aaron

    2016-01-01

    The identification and mobilization of useful genetic variation from germplasm banks for use in breeding programs is critical for future genetic gain and protection against crop pests. Plummeting costs of next-generation sequencing and genotyping are revolutionizing the way in which researchers and breeders interface with plant germplasm collections. An example of this is the high-density genotyping of the entire USDA Soybean Germplasm Collection. We assessed the usefulness of 50K single nucleotide polymorphism data collected on 18,480 domesticated soybean (Glycine max) accessions and vast historical phenotypic data for developing genomic prediction models for protein, oil, and yield. Resulting genomic prediction models explained an appreciable amount of the variation in accession performance in independent validation trials, with correlations between predicted and observed reaching up to 0.92 for oil and protein and 0.79 for yield. The optimization of training set design was explored using a series of cross-validation schemes. It was found that the target population and environment need to be well represented in the training set. Second, genomic prediction training sets appear to be robust to the presence of data from diverse geographical locations and genetic clusters. This finding, however, depends on the influence of shattering and lodging, and may be specific to soybean with its presence of maturity groups. The distribution of 7608 nonphenotyped accessions was examined through the application of genomic prediction models. The distribution of predictions of phenotyped accessions was representative of the distribution of predictions for nonphenotyped accessions, with no nonphenotyped accessions being predicted to fall far outside the range of predictions of phenotyped accessions. PMID:27247288

  8. Prospects of Genomic Prediction in the USDA Soybean Germplasm Collection: Historical Data Creates Robust Models for Enhancing Selection of Accessions.

    PubMed

    Jarquin, Diego; Specht, James; Lorenz, Aaron

    2016-01-01

    The identification and mobilization of useful genetic variation from germplasm banks for use in breeding programs is critical for future genetic gain and protection against crop pests. Plummeting costs of next-generation sequencing and genotyping are revolutionizing the way in which researchers and breeders interface with plant germplasm collections. An example of this is the high-density genotyping of the entire USDA Soybean Germplasm Collection. We assessed the usefulness of 50K single nucleotide polymorphism data collected on 18,480 domesticated soybean (Glycine max) accessions and vast historical phenotypic data for developing genomic prediction models for protein, oil, and yield. Resulting genomic prediction models explained an appreciable amount of the variation in accession performance in independent validation trials, with correlations between predicted and observed reaching up to 0.92 for oil and protein and 0.79 for yield. The optimization of training set design was explored using a series of cross-validation schemes. It was found that the target population and environment need to be well represented in the training set. Second, genomic prediction training sets appear to be robust to the presence of data from diverse geographical locations and genetic clusters. This finding, however, depends on the influence of shattering and lodging, and may be specific to soybean with its presence of maturity groups. The distribution of 7608 nonphenotyped accessions was examined through the application of genomic prediction models. The distribution of predictions of phenotyped accessions was representative of the distribution of predictions for nonphenotyped accessions, with no nonphenotyped accessions being predicted to fall far outside the range of predictions of phenotyped accessions. PMID:27247288
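
    A generic sketch of genomic prediction (not the study's pipeline): a ridge penalty, the shrinkage underlying rrBLUP-style models, maps simulated 0/1/2 marker genotypes to phenotype, and accuracy is the correlation between predicted and observed values in a held-out validation set. Marker data, effect sizes, and the penalty are invented.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        n_acc, n_snp = 600, 2000
        M = rng.integers(0, 3, size=(n_acc, n_snp)).astype(float)   # 0/1/2 genotypes
        beta = np.zeros(n_snp)
        qtl = rng.choice(n_snp, size=100, replace=False)            # 100 causal loci
        beta[qtl] = rng.normal(0.0, 0.2, size=100)
        y = M @ beta + rng.normal(0.0, 1.0, size=n_acc)             # phenotype

        M_tr, M_va, y_tr, y_va = train_test_split(M, y, random_state=0)
        gebv = Ridge(alpha=100.0).fit(M_tr, y_tr).predict(M_va)     # GEBV-like scores
        print(f"prediction accuracy r = {np.corrcoef(gebv, y_va)[0, 1]:.2f}")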

  9. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

    Repeatability and reproducibility of parts produced by selective laser melting are a standing issue and, coupled with a lack of standardized quality control, present a major hindrance towards maturing of selective laser melting as an industrial scale process. Consequently, numerical process modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters, but at different locations in a powder bed and in different laser scanning directions. The difference in melt track widths and depths captures the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model, and subjected to uncertainty and reliability analysis. Cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with observed experimental values. The technique is subsequently extended for reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.
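
    A schematic Monte Carlo in the spirit of the reliability characterization above (not the paper's calibrated thermal model): melt-track width is treated as a hypothetical power-law response to beam power perturbed by location-dependent variation, and an empirical cumulative probability distribution summarizes the outcome. All numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 50_000

        power = rng.normal(195.0, 5.0, size=n)        # W, location-dependent variation
        speed = 800.0                                 # mm/s, fixed scan speed

        # Hypothetical power-law response for melt track width (micrometres).
        width = 14.0 * (power / np.sqrt(speed)) ** 0.9 * rng.lognormal(0.0, 0.02, n)

        w_sorted = np.sort(width)
        cdf = np.arange(1, n + 1) / n                 # empirical CDF
        for p in (0.05, 0.50, 0.95):
            w_p = w_sorted[np.searchsorted(cdf, p)]
            print(f"P(width <= {w_p:.1f} um) = {p:.2f}")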

  10. Multiple Oral Rereading: A Descriptive Study of Its Effects on Reading Speed and Accuracy in Selected First-Grade Children.

    ERIC Educational Resources Information Center

    Moyer, Sandra Brown

    Multiple Oral Rereading (MOR), which involves repeated reading of the same instructional unit, has been found effective in remedial reading instruction. In this study, which was designed to provide basic information about the dynamics of such repetition, 32 first-grade children were selected as subjects on the basis of their ability to read, out…

  11. Accuracy of genomic prediction for BCWD resistance in rainbow trout using different genotyping platforms and genomic selection models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we aimed to (1) predict genomic estimated breeding value (GEBV) for bacterial cold water disease (BCWD) resistance by genotyping training (n=583) and validation samples (n=53) with two genotyping platforms (24K RAD-SNP and 49K SNP) and using different genomic selection (GS) models (Ba...

  12. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

    A field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…

  13. Prognostic gene signatures for patient stratification in breast cancer - accuracy, stability and interpretability of gene selection approaches using prior knowledge on protein-protein interactions

    PubMed Central

    2012-01-01

    Background Stratification of patients according to their clinical prognosis is a desirable goal in cancer treatment in order to achieve a better personalized medicine. Reliable predictions on the basis of gene signatures could support medical doctors on selecting the right therapeutic strategy. However, during the last years the low reproducibility of many published gene signatures has been criticized. It has been suggested that incorporation of network or pathway information into prognostic biomarker discovery could improve prediction performance. In the meantime a large number of different approaches have been suggested for the same purpose. Methods We found that on average incorporation of pathway information or protein interaction data did not significantly enhance prediction performance, but it did greatly enhance the interpretability of gene signatures. Some methods (specifically network-based SVMs) could greatly enhance gene selection stability, but revealed only a comparably low prediction accuracy, whereas Reweighted Recursive Feature Elimination (RRFE) and average pathway expression led to very clearly interpretable signatures. In addition, average pathway expression, together with elastic net SVMs, showed the highest prediction performance here. Results The results indicated that no single algorithm performed best with respect to all three categories in our study. Incorporating prior network knowledge into gene selection methods in general did not significantly improve classification accuracy, but it greatly improved the interpretability of gene signatures compared to classical algorithms. PMID:22548963

  14. Identification of selective inhibitors of RET and comparison with current clinical candidates through development and validation of a robust screening cascade

    PubMed Central

    Watson, Amanda J.; Hopkins, Gemma V.; Hitchin, Samantha; Begum, Habiba; Jones, Stuart; Jordan, Allan; Holt, Sarah; March, H. Nikki; Newton, Rebecca; Small, Helen; Stowell, Alex; Waddell, Ian D.; Waszkowycz, Bohdan; Ogilvie, Donald J.

    2016-01-01

    RET (REarranged during Transfection) is a receptor tyrosine kinase which plays pivotal roles in regulating cell survival, differentiation, proliferation, migration and chemotaxis. Activation of RET is a mechanism of oncogenesis in medullary thyroid carcinomas, where both germline and sporadic activating somatic mutations are prevalent. At present, there are no known specific RET inhibitors in clinical development, although many potent inhibitors of RET have been opportunistically identified through selectivity profiling of compounds initially designed to target other tyrosine kinases. Vandetanib and cabozantinib, both multi-kinase inhibitors with RET activity, are approved for use in medullary thyroid carcinoma, but additional pharmacological activities, most notably inhibition of vascular endothelial growth factor receptor 2 (VEGFR2/KDR), lead to dose-limiting toxicity. The recent identification of RET fusions present in ~1% of lung adenocarcinoma patients has renewed interest in the identification and development of more selective RET inhibitors lacking the toxicities associated with the current treatments. In an earlier publication [Newton et al, 2016; 1] we reported the discovery of a series of 2-substituted phenol quinazolines as potent and selective RET kinase inhibitors. Here we describe the development of the robust screening cascade which allowed the identification and advancement of this chemical series. Furthermore, we have profiled a panel of RET-active clinical compounds both to validate the cascade and to confirm that none displays a RET-selective target profile. PMID:27429741

  15. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

    Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) was 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05), whereas self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) in patients with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assess completion of 24-hour urine collection. PMID:26726000
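
    The creatinine-index criterion mentioned above reduces to a simple ratio check; a minimal sketch follows, assuming the expected 24-hour creatinine excretion is supplied by a published prediction equation (not reproduced here).

        # Creatinine-index completeness check; the <0.7 cutoff is from the review.
        def creatinine_index(measured_mmol, expected_mmol):
            return measured_mmol / expected_mmol

        def is_complete(measured_mmol, expected_mmol, cutoff=0.7):
            # Collections with an index below the cutoff are treated as incomplete.
            return creatinine_index(measured_mmol, expected_mmol) >= cutoff

        print(is_complete(9.8, 11.0))   # True  (index ~0.89)
        print(is_complete(6.5, 11.0))   # False (index ~0.59)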

  16. Allele frequency-based analyses robustly map sequence sites under balancing selection in a malaria vaccine candidate antigen.

    PubMed Central

    Polley, Spencer D; Chokejindachai, Watcharee; Conway, David J

    2003-01-01

    The Plasmodium falciparum apical membrane antigen 1 (AMA1) is a leading candidate for a malaria vaccine. Here, within-population analyses of alleles from 50 Thai P. falciparum isolates yield significant evidence for balancing selection on polymorphisms within the disulfide-bonded domains I and III of the surface accessible ectodomain of AMA1, a result very similar to that seen previously in a Nigerian population. Studying the frequency of nucleotide polymorphisms in both populations shows that the between-population component of variance (F(ST)) is significantly lower in domains I and III compared to the intervening domain II and compared to 11 unlinked microsatellite loci. A nucleotide site-by-site analysis shows that sites with exceptionally high or low F(ST) values cluster significantly into serial runs, with four runs of low values in domain I and one in domain III. These runs may map the sequences that are consistently under the strongest balancing selection from naturally acquired immune responses. PMID:14573469
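
    As a sketch of the allele frequency-based comparison, a simple Wright-style per-site Fst can be computed from two populations' allele frequencies; both the estimator and the frequencies below are illustrative, not the paper's exact method or data.

        import numpy as np

        # Per-site Fst between two populations from allele frequencies p1, p2.
        def fst(p1, p2):
            pbar = (p1 + p2) / 2.0
            ht = 2 * pbar * (1 - pbar)                        # pooled heterozygosity
            hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-population
            return 0.0 if ht == 0 else (ht - hs) / ht

        p_pop1 = np.array([0.42, 0.48, 0.51, 0.10, 0.12])     # hypothetical frequencies
        p_pop2 = np.array([0.45, 0.50, 0.47, 0.60, 0.65])
        # Low values (first sites) mimic the balanced domains; high values do not.
        print([round(fst(a, b), 3) for a, b in zip(p_pop1, p_pop2)])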

  17. A Robust Highly Interpenetrated Metal−Organic Framework Constructed from Pentanuclear Clusters for Selective Sorption of Gas Molecules

    SciTech Connect

    Zhang, Zhangjing; Xiang, Shengchang; Chen, Yu-Sheng; Ma, Shengqian; Lee, Yongwoo; Phely-Bobin, Thomas; Chen, Banglin

    2010-10-22

    A three-dimensional microporous metal-organic framework, Zn5(BTA)6(TDA)2·15DMF·8H2O (1; HBTA = 1,2,3-benzenetriazole; H2TDA = thiophene-2,5-dicarboxylic acid), comprising pentanuclear [Zn5] cluster units, was obtained through a one-pot solvothermal reaction of Zn(NO3)2, 1,2,3-benzenetriazole, and thiophene-2,5-dicarboxylate. The activated 1 displays type-I N2 gas sorption behavior with a Langmuir surface area of 607 m2 g-1 and exhibits interesting selective gas adsorption for C2H2/CH4 and CO2/CH4.

  18. A robust microporous metal-organic framework as a highly selective and sensitive, instantaneous and colorimetric sensor for Eu³⁺ ions.

    PubMed

    Gao, Yanfei; Zhang, Xueqiong; Sun, Wei; Liu, Zhiliang

    2015-01-28

    An extremely thermostable magnesium metal-organic framework (Mg-MOF) is reported for use as a highly selective and sensitive, instantaneous and colorimetric sensor for Eu(3+) ions. There has been extensive interest in the recognition and sensing of ions because of their important roles in biological and environmental systems; however, only a few of these systems have been explored for specific rare earth ion detection. A robust microporous Mg-MOF for the recognition and sensing of Eu(3+) ions with high selectivity at low concentrations in aqueous solutions has been synthesized. This stable metal-organic framework (MOF) contains nanoscale holes and non-coordinating nitrogen atoms inside the walls of the holes, which make it a potential host for foreign metal ions. Based on the energy level matching and efficient energy transfer between the host and the guest, the Mg-MOF sensor is highly selective, sensitive and instantaneous, making it a promising route to luminescent probe materials and to practical Eu(3+) ion sensing. PMID:25478996

  19. Compact and phase-error-robust multilayered AWG-based wavelength selective switch driven by a single LCOS.

    PubMed

    Sorimoto, Keisuke; Tanizawa, Ken; Uetsuka, Hisato; Kawashima, Hitoshi; Mori, Masahiko; Hasama, Toshifumi; Ishikawa, Hiroshi; Tsuda, Hiroyuki

    2013-07-15

    A novel liquid crystal on silicon (LCOS)-based wavelength selective switch (WSS) is proposed, fabricated, and demonstrated. It employs a multilayered arrayed waveguide grating (AWG) as a wavelength multiplexer/demultiplexer. The LCOS deflects spectrally decomposed beams channel by channel and switches them to the desired waveguide layers of the multilayered AWG. In order to obtain the multilayered AWG with high yield, the phase errors of the AWG are externally compensated for by an additional phase modulation with the LCOS. This additional phase modulation is applied to the equivalent image of the facet of the AWG, which is projected by a relay lens. In our previously reported WSS configuration, a somewhat large footprint and increased cost were drawbacks, since two LCOSs were required: one LCOS was driven for the inter-port switching operation, and the other for the phase-error compensation. In the newly proposed configuration, on the other hand, both switching and compensation operations are performed using a single LCOS. This reduction of the component count is realized by introducing a folded configuration with a reflector. The volume of the WSS optics is 80 × 100 × 60 mm3, which is approximately 40% smaller than the previous configuration. The polarization-dependent loss and inter-channel crosstalk are less than 1.5 dB and -21.0 dB, respectively. An error-free transmission of a 40-Gbit/s NRZ-OOK signal through the WSS is successfully demonstrated. PMID:23938561

  20. Predicting romantic interest and decisions in the very early stages of mate selection: standards, accuracy, and sex differences.

    PubMed

    Fletcher, Garth J O; Kerr, Patrick S G; Li, Norman P; Valentine, Katherine A

    2014-04-01

    In the current study, opposite-sex strangers had 10-min conversations with a possible future date in mind. Based on judgments from partners and observers, three main findings were produced. First, judgments of attractiveness/vitality perceptions (compared with warmth/trustworthiness and status/resources) were the most accurate and were predominant in influencing romantic interest and decisions about further contact. Second, women were more cautious and choosy than men: women underestimated their partner's romantic interest, whereas men exaggerated it, and women were less likely to want further contact. Third, a mediational model found that women (compared with men) were less likely to want further contact because they perceived their partners as possessing less attractiveness/vitality and as falling shorter of their minimum standards of attractiveness/vitality, thus generating lower romantic interest. These novel results are discussed in terms of the mixed findings from prior research, evolutionary psychology, and the functionality of lay psychology in early mate-selection contexts. PMID:24501043

  1. Selective logging in tropical forests decreases the robustness of liana-tree interaction networks to the loss of host tree species.

    PubMed

    Magrach, Ainhoa; Senior, Rebecca A; Rogers, Andrew; Nurdin, Deddy; Benedick, Suzan; Laurance, William F; Santamaria, Luis; Edwards, David P

    2016-03-16

    Selective logging is one of the major drivers of tropical forest degradation, causing important shifts in species composition. Whether such changes modify interactions between species and the networks in which they are embedded remains a fundamental question for assessing the 'health' and ecosystem functionality of logged forests. We focus on interactions between lianas and their tree hosts within primary and selectively logged forests in the biodiversity hotspot of Malaysian Borneo. We found that lianas were more abundant, had higher species richness, and had different species compositions in logged than in primary forests. Logged forests showed heavier liana loads disparately affecting slow-growing tree species, which could exacerbate the loss of timber value and carbon storage already associated with logging. Moreover, simulation scenarios of local host tree species loss indicated that logging might decrease the robustness of liana-tree interaction networks if heavily infested trees (i.e. the most connected ones) were more likely to disappear. This effect is partially mitigated in the short term by the colonization of host trees by a greater diversity of liana species within logged forests, yet this might not compensate for the loss of preferred tree hosts in the long term. As a consequence, species interaction networks may show a lagged response to disturbance, which may trigger sudden collapses in species richness and ecosystem function in response to additional disturbances, representing a new type of 'extinction debt'. PMID:26936241

  2. Influence of Raw Image Preprocessing and Other Selected Processes on Accuracy of Close-Range Photogrammetric Systems According to Vdi 2634

    NASA Astrophysics Data System (ADS)

    Reznicek, J.; Luhmann, T.; Jepping, C.

    2016-06-01

    This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error (LME), which characterizes the accuracy of the validated photogrammetric system. For higher reliability the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to assess the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.
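
    The LME computation reduces to the largest deviation of measured scale-bar lengths from their calibrated values; a minimal sketch with hypothetical measurements:

        import numpy as np

        # Length measurement error (LME) in the spirit of VDI/VDE 2634/1: the
        # largest absolute deviation of measured lengths from calibrated values.
        calibrated = np.array([200.000, 400.000, 600.000])   # mm, known lengths
        measured   = np.array([200.012, 399.991, 600.018])   # mm, hypothetical

        lme = np.max(np.abs(measured - calibrated))
        print(f"LME = {lme*1000:.1f} um")                    # prints: LME = 18.0 um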

  3. Short communication: Selecting the most informative mid-infrared spectra wavenumbers to improve the accuracy of prediction models for detailed milk protein content.

    PubMed

    Niero, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M

    2016-03-01

    The objective of this study was to investigate the ability of mid-infrared spectroscopy (MIRS) to predict protein fraction contents of bovine milk samples by applying uninformative variable elimination (UVE) procedure to select the most informative wavenumber variables before partial least squares (PLS) analysis. Reference values (n=114) of protein fractions were measured using reversed-phase HPLC and spectra were acquired through MilkoScan FT6000 (Foss Electric A/S, Hillerød, Denmark). Prediction models were built using the full data set and tested with a leave-one-out cross-validation. Compared with MIRS models developed using standard PLS, the UVE procedure reduced the number of wavenumber variables to be analyzed through PLS regression and improved the accuracy of prediction by 6.0 to 66.7%. Good predictions were obtained for total protein, total casein (CN), and α-CN, which included αS1- and αS2-CN; moderately accurate predictions were observed for κ-CN and total whey protein; and unsatisfactory results were obtained for β-CN, α-lactalbumin, and β-lactoglobulin. Results indicated that UVE combined with PLS is a valid approach to enhance the accuracy of MIRS prediction models for milk protein fractions. PMID:26774721
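
    A rough sketch of the UVE-before-PLS idea (append random noise variables, jackknife the PLS coefficients, keep real variables whose reliability beats the best noise variable) is shown below on simulated data; the milk spectra, HPLC reference values and the paper's exact UVE settings are not reproduced.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 40))
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)   # 5 informative variables
        Xa = np.hstack([X, rng.normal(size=X.shape)])          # append 40 noise columns

        coefs = []
        for i in range(Xa.shape[0]):                           # leave-one-out jackknife
            keep = np.arange(Xa.shape[0]) != i
            pls = PLSRegression(n_components=5).fit(Xa[keep], y[keep])
            coefs.append(pls.coef_.ravel())
        coefs = np.array(coefs)

        rel = np.abs(coefs.mean(axis=0) / coefs.std(axis=0))   # reliability per variable
        cutoff = rel[X.shape[1]:].max()                        # best noise variable
        selected = np.where(rel[:X.shape[1]] > cutoff)[0]
        print(selected)                                        # retained informative variables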

  4. Transcriptomic Characterization of Innate and Acquired Immune Responses in Red-Legged Partridges (Alectoris rufa): A Resource for Immunoecology and Robustness Selection

    PubMed Central

    Sevane, Natalia; Cañon, Javier; Gil, Ignacio; Dunner, Susana

    2015-01-01

    Present and future challenges for wild partridge populations include the resistance against possible disease transmission after restocking with captive-reared individuals, and the need to cope with the stress prompted by new dynamic and challenging scenarios. Selection of individuals with the best immune ability may be a good strategy to improve general immunity, and hence adaptation to stress. In this study, non-infectious challenges with phytohemagglutinin (PHA) and sheep red blood cells allowed the classification of red-legged partridges (Alectoris rufa) according to their overall immune responses (IR). Skin from the area of injection of PHA and spleen, both from animals showing extreme high and low IR, were selected to investigate the transcriptional profiles underlying the different ability to cope with pathogens and external aggressions. RNA-seq yielded 97 million raw reads from eight sequencing libraries and approximately 84% of the processed reads were mapped to the reference chicken genome. Differential expression analysis identified 1488 up- and 107 down-regulated loci in individuals with high IR versus low IR. Partridges displaying higher innate IR show an enhanced activation of host defence gene pathways complemented with a tightly controlled desensitization that facilitates the return to cellular homeostasis. These findings indicate that the immune system ability to respond to aggressions (either diseases or stress produced by environmental changes) involves extensive transcriptional and post-transcriptional regulations, and expand our understanding on the molecular mechanisms of the avian immune system, opening the possibility of improving disease resistance or robustness using genome assisted selection (GAS) approaches for increased IR in partridges by using genes such as AVN or BF2 as markers. This study provides the first transcriptome sequencing data of the Alectoris genus, a resource for molecular ecology that enables integration of genomic tools

  5. Identification of GDC-0810 (ARN-810), an Orally Bioavailable Selective Estrogen Receptor Degrader (SERD) that Demonstrates Robust Activity in Tamoxifen-Resistant Breast Cancer Xenografts.

    PubMed

    Lai, Andiliy; Kahraman, Mehmet; Govek, Steven; Nagasawa, Johnny; Bonnefous, Celine; Julien, Jackie; Douglas, Karensa; Sensintaffar, John; Lu, Nhin; Lee, Kyoung-Jin; Aparicio, Anna; Kaufman, Josh; Qian, Jing; Shao, Gang; Prudente, Rene; Moon, Michael J; Joseph, James D; Darimont, Beatrice; Brigham, Daniel; Grillot, Kate; Heyman, Richard; Rix, Peter J; Hager, Jeffrey H; Smith, Nicholas D

    2015-06-25

    Approximately 80% of breast cancers are estrogen receptor alpha (ER-α) positive, and although women typically respond well initially to antihormonal therapies such as tamoxifen and aromatase inhibitors, resistance often emerges. Although a variety of resistance mechanisms may be at play in this state, there is evidence that in many cases the ER still plays a central role, including mutations in the ER leading to a constitutively active receptor. Fulvestrant is a steroid-based, selective estrogen receptor degrader (SERD) that both antagonizes and degrades ER-α and is active in patients who have progressed on antihormonal agents. However, fulvestrant suffers from poor pharmaceutical properties and must be administered by intramuscular injections that limit the total amount of drug that can be administered and hence lead to the potential for incomplete receptor blockade. We describe the identification and characterization of a series of small-molecule, orally bioavailable SERDs which are potent antagonists and degraders of ER-α and in which the ER-α degrading properties were prospectively optimized. The lead compound 11l (GDC-0810 or ARN-810) demonstrates robust activity in models of tamoxifen-sensitive and tamoxifen-resistant breast cancer, and is currently in clinical trials in women with locally advanced or metastatic estrogen receptor-positive breast cancer. PMID:25879485

  6. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
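
    For a concrete baseline, a generic robust regression can be sketched with iteratively reweighted least squares and Huber weights on simulated data; this is a standard textbook technique shown for illustration, not the paper's rank-minimization method.

        import numpy as np

        rng = np.random.default_rng(5)
        X = np.column_stack([np.ones(50), rng.normal(size=50)])
        y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
        y[:5] += 10.0                                   # five gross outliers

        beta = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least-squares start
        for _ in range(20):
            r = y - X @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r)))      # robust scale (MAD)
            w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        print(np.round(beta, 2))                        # near [1, 2] despite outliers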

  7. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    PubMed

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases for p<0.05). Also, iRR can be a fast alternative to iPLS, especially in the case of an unknown degree of complexity of the analyzed system, i.e., if the upper limit of the number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP<1.2%). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in adulterated hempseed oil (R(2)>0.99). PMID:26838379
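
    The interval idea behind iRR can be sketched by splitting the variable axis into contiguous windows and cross-validating a ridge model on each; the data below are simulated, and the paper's exact interval scheme and ridge tuning are not reproduced.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 100))
        y = X[:, 20:30].sum(axis=1) + 0.1 * rng.normal(size=50)  # signal in one region

        n_intervals = 10
        windows = np.array_split(np.arange(X.shape[1]), n_intervals)
        scores = [cross_val_score(Ridge(alpha=1.0), X[:, w], y, cv=5).mean()
                  for w in windows]
        best = int(np.argmax(scores))
        print("best interval:", windows[best][[0, -1]], "R2 =", round(scores[best], 3))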

  8. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions designed for robust operation and worst-case situations, such as day/night cameras or rain and snow solutions. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures, and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  9. The relative accuracy of standard estimators for macrofaunal abundance and species richness derived from selected intertidal transect designs used to sample exposed sandy beaches

    NASA Astrophysics Data System (ADS)

    Schoeman, D. S.; Wheeler, M.; Wait, M.

    2003-10-01

    In order to ensure that patterns detected in field samples reflect real ecological processes rather than methodological idiosyncrasies, it is important that researchers attempt to understand the consequences of the sampling and analytical designs that they select. This is especially true for sandy beach ecology, which has lagged somewhat behind ecological studies of other intertidal habitats. This paper investigates the performance of routine estimators of macrofaunal abundance and species richness, which are variables that have been widely used to infer predictable patterns of biodiversity across a gradient of beach types. To do this, a total of six shore-normal strip transects were sampled on three exposed, oceanic sandy beaches in the Eastern Cape, South Africa. These transects comprised contiguous quadrats arranged linearly between the spring high and low water marks. Using simple Monte Carlo simulation techniques, data collected from the strip transects were used to assess the accuracy of parameter estimates from different sampling strategies relative to their true values (macrofaunal abundance ranged 595-1369 individuals transect-1; species richness ranged 12-21 species transect-1). Results indicated that estimates from the various transect methods performed in a similar manner both within beaches and among beaches. Estimates for macrofaunal abundance tended to be negatively biased, especially at levels of sampling effort most commonly reported in the literature, and accuracy decreased with decreasing sampling effort. By the same token, estimates for species richness were always negatively biased and were also characterised by low precision. Furthermore, triplicate transects comprising a sampled area in the region of 4 m2 (as has been previously recommended) are expected to miss more than 30% of the species that occur on the transect. Surprisingly, for both macrofaunal abundance and species richness, estimates based on data from transects sampling quadrats
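
    The Monte Carlo logic of the study, treating the fully sampled transect as truth and subsampling it to expose estimator bias, can be sketched as follows; the presence matrix is simulated and illustrative only.

        import numpy as np

        rng = np.random.default_rng(2)
        n_quadrats, n_species = 120, 20
        # Hypothetical presence/absence matrix: rare species occupy few quadrats.
        occupancy = rng.random(n_species) * 0.3 + 0.02
        presence = rng.random((n_quadrats, n_species)) < occupancy
        true_richness = int(presence.any(axis=0).sum())

        reps, n_sub = 1000, 30                       # sparser sampling design
        richness = [presence[rng.choice(n_quadrats, n_sub, replace=False)]
                    .any(axis=0).sum() for _ in range(reps)]
        # Subsampled richness is systematically biased low, as the study found.
        print(true_richness, float(np.mean(richness)))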

  10. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273
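
    The RR-BLUP step can be sketched as a ridge regression on marker genotypes, with cross-validated accuracy taken as the correlation between observed and predicted phenotypes; the marker matrix and effect sizes below are simulated, not the IRRI data set, and the ridge penalty is an arbitrary illustrative choice.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(3)
        M = rng.integers(0, 3, size=(363, 2000)).astype(float)   # 0/1/2 genotypes
        effects = rng.normal(scale=0.05, size=2000)
        y = M @ effects + rng.normal(size=363)                   # simulated phenotype

        accs = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
            model = Ridge(alpha=100.0).fit(M[train], y[train])
            accs.append(np.corrcoef(y[test], model.predict(M[test]))[0, 1])
        print("mean predictive accuracy:", round(float(np.mean(accs)), 2))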

  11. Robust quantitative scratch assay

    PubMed Central

    Vargas, Andrea; Angeli, Marc; Pastrello, Chiara; McQuaid, Rosanne; Li, Han; Jurisicova, Andrea; Jurisica, Igor

    2016-01-01

    The wound healing assay (or scratch assay) is a technique frequently used to quantify cell motility, a central process in tissue repair and disease progression, under various treatment conditions. However, processing the resulting data is a laborious task due to its high throughput and variability across images. The Robust Quantitative Scratch Assay (RQSA) algorithm introduces statistical outputs by which migration rates are estimated, cellular behaviour is distinguished and outliers are identified among groups of unique experimental conditions. Furthermore, the RQSA decreased measurement errors and increased accuracy of the wound boundary at comparable processing times compared to a previously developed method (TScratch). Availability and implementation: The RQSA is freely available at: http://ophid.utoronto.ca/RQSA/RQSA_Scripts.zip. The image sets used for training and validation and the results are available at: (http://ophid.utoronto.ca/RQSA/trainingSet.zip, http://ophid.utoronto.ca/RQSA/validationSet.zip, http://ophid.utoronto.ca/RQSA/ValidationSetResults.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975Results.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip). Supplementary material is provided for a detailed description of the development of the RQSA. Contact: juris@ai.utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26722119

  12. A robust tool for discriminative analysis and feature selection in paired samples impacts the identification of the genes essential for reprogramming lung tissue to adenocarcinoma

    PubMed Central

    2011-01-01

    Background Lung cancer is the leading cause of cancer deaths in the world. The most common type of lung cancer is lung adenocarcinoma (AC). The genetic mechanisms of the early stages and lung AC progression steps are poorly understood. There is currently no clinically applicable gene test for early diagnosis and AC aggressiveness. Among the major reasons for the lack of reliable diagnostic biomarkers are the extraordinary heterogeneity of the cancer cells, complex and poorly understood interactions of the AC cells with adjacent tissue and the immune system, gene variation across patient cohorts, measurement variability, small sample sizes and sub-optimal analytical methods. We suggest that gene expression profiling of the primary tumours and adjacent tissues (PT-AT) handled with a rational statistical and bioinformatics strategy of biomarker prediction and validation could provide significant progress in the identification of clinical biomarkers of AC. To minimise sample-to-sample variability, repeated multivariate measurements in the same object (organ or tissue, e.g. PT-AT in lung) across patients should be designed, but prediction and validation on the genome scale with small sample size is a great methodological challenge. Results To analyse PT-AT relationships efficiently in the statistical modelling, we propose an Extreme Class Discrimination (ECD) feature selection method that identifies a sub-set of the most discriminative variables (e.g. expressed genes). Our method consists of a paired Cross-normalization (CN) step followed by a modified sign Wilcoxon test with multivariate adjustment carried out for each variable. Using an Affymetrix U133A microarray paired dataset of 27 AC patients, we reviewed the global reprogramming of the transcriptome in human lung AC tissue versus normal lung tissue, which is associated with about 2,300 genes discriminating the tissues with 100% accuracy. Cluster analysis applied to these genes resulted in four distinct gene groups
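
    The per-gene paired testing step can be sketched with an ordinary Wilcoxon signed-rank test on tumour-minus-adjacent pairs; the cross-normalization and multivariate adjustment of the ECD method are not reproduced, and the expression values are simulated.

        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(4)
        n_patients, n_genes = 27, 500
        adjacent = rng.normal(size=(n_patients, n_genes))
        tumour = adjacent + rng.normal(size=(n_patients, n_genes))
        tumour[:, :20] += 2.0                          # 20 truly shifted genes

        # Paired signed-rank test per gene, patients as the pairing unit.
        pvals = np.array([wilcoxon(tumour[:, g], adjacent[:, g]).pvalue
                          for g in range(n_genes)])
        print(np.sort(np.argsort(pvals)[:20]))         # most discriminative genes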

  13. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation, including uncertainties for both the solution itself and the order of convergence. Our method produces high-quality results for the well-behaved cases, relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
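
    The median-based flavour of the analysis can be illustrated by estimating a convergence order from each consecutive pair of mesh levels via the error model E(h) = A h^p and summarizing with the median, so one anomalous level cannot dominate; the error values below are made up.

        import numpy as np

        h = np.array([0.08, 0.04, 0.02, 0.01])                # mesh sizes
        E = np.array([3.1e-2, 8.4e-3, 2.0e-3, 5.3e-4])        # hypothetical errors

        # Observed order between consecutive levels: p = ln(E1/E2) / ln(h1/h2).
        orders = np.log(E[:-1] / E[1:]) / np.log(h[:-1] / h[1:])
        print("per-level orders:", np.round(orders, 2))
        print("robust (median) order:", round(float(np.median(orders)), 2))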

  14. Robust snapshot interferometric spectropolarimetry.

    PubMed

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object, from which an accurate Stokes vector can be calculated in the spectral domain. It is inherently robust to variations in the 3D pose of the object, since it is designed so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of milliseconds with high accuracy. PMID:27176992

  15. Robust Vertex Classification.

    PubMed

    Chen, Li; Shen, Cencheng; Vogelstein, Joshua T; Priebe, Carey E

    2016-03-01

    For random graphs distributed according to stochastic blockmodels, a special case of latent position graphs, adjacency spectral embedding followed by appropriate vertex classification is asymptotically Bayes optimal; but this approach requires knowledge of and critically depends on the model dimension. In this paper, we propose a sparse representation vertex classifier which does not require information about the model dimension. This classifier represents a test vertex as a sparse combination of the vertices in the training set and uses the recovered coefficients to classify the test vertex. We prove consistency of our proposed classifier for stochastic blockmodels, and demonstrate that the sparse representation classifier can predict vertex labels with higher accuracy than adjacency spectral embedding approaches via both simulation studies and real data experiments. Our results demonstrate the robustness and effectiveness of our proposed vertex classifier when the model dimension is unknown. PMID:26340770

  16. Robust multi-scale superpixel classification for optic cup localization.

    PubMed

    Tan, Ngan-Meng; Xu, Yanwu; Goh, Wooi Boon; Liu, Jiang

    2015-03-01

    This paper presents an optimal model integration framework to robustly localize the optic cup in fundus images for glaucoma detection. This work is based on the existing superpixel classification approach and makes two major contributions. First, it addresses the issues of classification performance variations due to repeated random selection of training samples, and offers a better localization solution. Second, multiple superpixel resolutions are integrated and unified for better cup boundary adherence. Compared to the state-of-the-art intra-image learning approach, we demonstrate improvements in optic cup localization accuracy with full cup-to-disc ratio range, while incurring only minor increase in computing cost. PMID:25453464

  17. Robust design of some selective matrix metalloproteinase-2 inhibitors over matrix metalloproteinase-9 through in silico/fragment-based lead identification and de novo lead modification: Syntheses and biological assays.

    PubMed

    Adhikari, Nilanjan; Halder, Amit K; Mallick, Sumana; Saha, Achintya; Saha, Kishna D; Jha, Tarun

    2016-09-15

    Broad-spectrum selectivity poses a serious limitation for the development of matrix metalloproteinase-2 (MMP-2) inhibitors for clinical purposes. To develop potent and selective MMP-2 inhibitors, multiple molecular modeling techniques were initially adopted for robust design. Predictive and validated regression models (2D and 3D QSAR and ligand-based pharmacophore mapping studies) were utilized for estimating potency, whereas classification models (Bayesian and recursive partitioning analyses) were used for determining the selectivity of MMP-2 inhibitors over MMP-9. Bayesian model fingerprints were used to design a selective lead molecule, which was modified using a structure-based de novo technique. A series of designed molecules were prepared and screened first for inhibition of MMP-2 and MMP-9, as designed, and then against other MMPs to assess broader selectivity. The best active MMP-2 inhibitor had an IC50 value of 24 nM, whereas the best selective inhibitor (IC50 = 51 nM) showed at least 4-fold selectivity for MMP-2 over all tested MMPs. Active derivatives were non-cytotoxic against the human lung carcinoma cell line A549. At non-cytotoxic concentrations, these inhibitors reduced intracellular MMP-2 expression by up to 78% and also exhibited satisfactory anti-migration and anti-invasive properties against A549 cells. Some of these active compounds may be used as adjuvant therapeutic agents in lung cancer after detailed study. PMID:27452283

  18. Psychology Textbooks: Examining Their Accuracy

    ERIC Educational Resources Information Center

    Steuer, Faye B.; Ham, K. Whitfield, II

    2008-01-01

    Sales figures and recollections of psychologists indicate textbooks play a central role in psychology students' education, yet instructors typically must select texts under time pressure and with incomplete information. Although selection aids are available, none adequately address the accuracy of texts. We describe a technique for sampling…

  19. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    PubMed

    Kim, Mooeung; Chung, Hoeil

    2013-03-01

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples achieved by the spectral collection under frozen conditions at low temperatures was effective for improving accuracy for the determination of the kinematic viscosity at 40 °C (KV@40). A collection of Raman spectra from samples cooled around -160 °C provided the most accurate measurement of KV@40. Components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that were deformable when these were frozen, and the different structural deformabilities of the components enhanced spectral selectivity among the samples. To study the structural variation of components according to the change of sample temperature from cryogenic to ambient condition, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin for the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 of LBO samples to be more accurately assessed. PMID:23342358

  20. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems

    NASA Technical Reports Server (NTRS)

    Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures.
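
    The RMS figure used above is the root mean square of the point-wise deviations between digitized and reference marker positions; a minimal sketch with illustrative coordinates:

        import numpy as np

        # Reference ("true") and digitized marker positions, in millimetres.
        reference = np.array([[10.0, 20.0, 5.0], [30.0, 12.0, 8.0], [22.0, 18.0, 4.0]])
        digitized = np.array([[10.4, 20.3, 5.5], [29.6, 12.4, 8.6], [22.5, 17.4, 4.3]])

        errors = np.linalg.norm(digitized - reference, axis=1)  # per-marker deviation
        rms = np.sqrt((errors ** 2).mean())
        print(f"RMS = {rms:.2f} mm")                            # ~0.79 mm here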

  1. Development and validation of a robust and sensitive assay for the discovery of selective inhibitors for serine/threonine protein phosphatases PP1α (PPP1C) and PP5 (PPP5C).

    PubMed

    Swingle, Mark R; Honkanen, Richard E

    2014-10-01

    Protein phosphatase types 1 α (PP1α/PPP1C) and 5 (PP5/PPP5C) are members of the PPP family of serine/threonine protein phosphatases. PP1 and PP5 share a common catalytic mechanism, and several natural compounds, including okadaic acid, microcystin, and cantharidin, act as strong inhibitors of both enzymes. However, to date there have been no reports of compounds that can selectively inhibit PP1 or PP5, and specific or highly selective inhibitors for either PP1 or PP5 are greatly desired by both the research and pharmaceutical communities. Here we describe the development and optimization of a sensitive and robust (representative PP5C assay data: Z'=0.93; representative PP1Cα assay data: Z'=0.90) fluorescent phosphatase assay that can be used to simultaneously screen chemical libraries and natural product extracts for the presence of catalytic inhibitors of PP1 and PP5. PMID:25383722
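
    The Z' values quoted above follow the standard screening-window metric Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|; a minimal computation on hypothetical control-well readings:

        import numpy as np

        pos = np.array([980.0, 1010.0, 995.0, 1005.0, 990.0])   # uninhibited signal
        neg = np.array([52.0, 48.0, 50.0, 55.0, 47.0])          # fully inhibited signal

        z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
        print(round(float(z_prime), 2))                          # ~0.95: a robust assay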

  3. Development and validation of a selective and robust LC-MS/MS method for high-throughput quantifying rizatriptan in small plasma samples: application to a clinical pharmacokinetic study.

    PubMed

    Chen, Yi; Miao, Haijun; Lin, Mei; Fan, Guorong; Hong, Zhanying; Wu, Huiling; Wu, Yutian

    2006-12-01

    An analytical method based on liquid chromatography with positive ion electrospray ionization (ESI) coupled to tandem mass spectrometry detection (LC-MS/MS) was developed for the determination of a potent 5-HT(1B/1D) receptor agonist, rizatriptan, in human plasma using granisetron as the internal standard. The analyte and internal standard were isolated from 100 microL plasma samples by liquid-liquid extraction (LLE) and chromatographed on a Lichrospher C18 column (4.6 mm x 50 mm, 5 microm) with a mobile phase consisting of acetonitrile-10 mM aqueous ammonium acetate-acetic acid (50:50:0.5, v/v/v) pumped at 1.0 mL/min. The method had a chromatographic total run time of 2 min. A Varian 1200 L electrospray tandem mass spectrometer equipped with an electrospray ionization source was operated in selected reaction monitoring (SRM) mode, with the precursor-to-product ion transitions m/z 270-->201 (rizatriptan) and 313.4-->138 (granisetron) used for quantitation. The assay was validated over the concentration range of 0.05-50 ng/mL and was found to have acceptable accuracy, precision, linearity, and selectivity. The mean extraction recovery from spiked plasma samples was above 98%. The intra-day accuracy of the assay was within 12% of nominal and intra-day precision was better than 13% C.V. Following a 10 mg dose of the compound administered to human subjects, mean concentrations of rizatriptan ranged from 0.2 to 70.6 ng/mL in plasma samples collected up to 24 h after dosing. Inter-day accuracy and precision results for quality control samples run over a 5-day period alongside clinical samples showed mean accuracies within 12% of nominal and precision better than 9.5% C.V. PMID:16899417

  4. A robust and luminescent covalent organic framework as a highly sensitive and selective sensor for the detection of Cu(2+) ions.

    PubMed

    Li, Zhongping; Zhang, Yuwei; Xia, Hong; Mu, Ying; Liu, Xiaoming

    2016-05-01

    A hydrogen bond assisted azine-linked covalent organic framework, COF-JLU3, was synthesized under solvothermal conditions. Combining excellent crystallinity, porosity, stability and luminescence, it serves as the first COF fluorescent sensor for toxic metal ions, exhibiting high sensitivity and selectivity toward Cu(2+). PMID:27114234

  5. Increasing the Accuracy in the Measurement of the Minor Isotopes of Uranium: Care in Selection of Reference Materials, Baselines and Detector Calibration

    NASA Astrophysics Data System (ADS)

    Poths, J.; Koepf, A.; Boulyga, S. F.

    2008-12-01

    The minor isotopes of uranium (U-233, U-234, U-236) are increasingly useful for tracing a variety of processes: movement of anthropogenic nuclides in the environment (ref 1), sources of uranium ores (ref 2), and nuclear material attribution (ref 3). We report on improved accuracy for U-234/238 and U-236/238 obtained by supplementing total evaporation protocol TIMS measurement on Faraday detectors (ref 4) with multiplier measurement for the minor isotopes. Measurement of small signals on Faraday detectors alone is limited by the noise floors of the amplifiers and by accurate measurement of the baseline offsets. The combined detector approach improves the reproducibility to better than ±1% (relative) for U-234/238 at natural abundance, and yields a detection limit for U-236/U-238 of <0.2 ppm. We have quantified the contribution of different factors to the uncertainties associated with these peak-jumping measurements on a single detector, with an aim of further improvement. The uncertainties in the certified values for U-234 and U-236 in the uranium standard NBS U005, if used for mass bias correction, dominate the uncertainty in their isotopic ratio measurements. Software limitations in baseline measurement drive the detection limit for the U-236/U-238 ratio. This is a topic for discussion with the instrument manufacturers. Finally, deviation from linearity of the response of the electron multiplier with count rate limits the accuracy and reproducibility of these minor isotope measurements. References: (1) P. Steier et al (2008) Nuc Inst Meth (B), 266, 2246-2250. (2) E. Keegan et al (2008) Appl Geochem 23, 765-777. (3) K. Mayer et al (1998) IAEA-CN-98/11, in Advances in Destructive and Non-destructive Analysis for Environmental Monitoring and Nuclear Forensics. (4) S. Richter and S. Goldberg (2003) Int J Mass Spectrom, 229, 181-197.

  6. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

    Endoscopy is widely used in clinical applications, and the surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to solve the positional relationship between the camera and the tracking marker precisely. The problem can be solved by the hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain some incomplete motion samples. Those motions will make the algorithm unstable and inaccurate. An advanced selection rule for sample motions is proposed in this paper to improve the stability and accuracy of the methods based on dual quaternions. By setting a motion filter to filter out the incomplete motion samples, a high-precision and robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
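
    A motion-selection filter of the kind proposed can be sketched by discarding relative motions whose rotation angle is too small, since near-degenerate motions destabilize dual-quaternion hand-eye calibration; the threshold and quaternion convention below are assumptions for illustration, not the paper's exact rule.

        import numpy as np

        def rotation_angle(q):
            # q = (w, x, y, z), unit quaternion of the relative motion.
            return 2.0 * np.arccos(np.clip(abs(q[0]), -1.0, 1.0))

        def filter_motions(motion_quats, min_angle_deg=5.0):
            # Keep only motion samples with a sufficiently large rotation.
            thr = np.deg2rad(min_angle_deg)
            return [q for q in motion_quats if rotation_angle(np.asarray(q)) >= thr]

        motions = [(0.9998, 0.02, 0.0, 0.0), (0.97, 0.24, 0.0, 0.05)]  # hypothetical
        print(len(filter_motions(motions)))   # only the larger rotation survives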

  7. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations z' = Mz + Ky and z' = Az + K(y - Cz), respectively called the open-loop and closed-loop realizations, for the linear system x' = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x - z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
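
    A small simulation makes the closed-loop case concrete: for x' = Ax, y = Cx, the closed-loop observer gives e' = (A - KC)e for the error e = x - z, so a small perturbation of K only perturbs otherwise stable error dynamics. The matrices and gain below are illustrative, not from the paper.

        import numpy as np

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        C = np.array([[1.0, 0.0]])
        K = np.array([[2.0], [3.0]])          # observer gain; (A - KC) is stable here

        e = np.array([1.0, -0.5])             # initial estimation error x - z
        dt = 0.01
        for _ in range(2000):                 # forward-Euler integration of e' = (A-KC)e
            e = e + dt * (A - K @ C) @ e
        print(np.linalg.norm(e))              # error norm decays toward zero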

  8. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  9. MAGE-C2-Specific TCRs Combined with Epigenetic Drug-Enhanced Antigenicity Yield Robust and Tumor-Selective T Cell Responses.

    PubMed

    Kunert, Andre; van Brakel, Mandy; van Steenbergen-Langeveld, Sabine; da Silva, Marvin; Coulie, Pierre G; Lamers, Cor; Sleijfer, Stefan; Debets, Reno

    2016-09-15

    Adoptive T cell therapy has shown significant clinical success for patients with advanced melanoma and other tumors. Further development of T cell therapy requires improved strategies to select effective, yet nonself-reactive, TCRs. In this study, we isolated 10 TCR sequences against four MAGE-C2 (MC2) epitopes from melanoma patients who showed clinical responses following vaccination that were accompanied by significant frequencies of anti-MC2 CD8 T cells in blood and tumor without apparent side effects. We introduced these TCRs into T cells, pretreated tumor cells of different histological origins with the epigenetic drugs azacytidine and valproate, and tested tumor and self-reactivities of these TCRs. Pretreatment of tumor cells upregulated MC2 gene expression and enhanced recognition by T cells. In contrast, a panel of normal cell types did not express MC2 mRNA, and similar pretreatment did not result in recognition by MC2-directed T cells. Interestingly, the expression levels of MC2, but not those of CD80, CD86, or programmed death-ligand 1 or 2, correlated with T cell responsiveness. One of the tested TCRs consistently recognized pretreated MC2(+) cell lines from melanoma, head and neck, bladder, and triple-negative breast cancers but showed no response to MHC-eluted peptides or peptides highly similar to MC2. We conclude that targeting MC2 Ag, combined with epigenetic drug-enhanced antigenicity, allows for significant and tumor-selective T cell responses. PMID:27489285

  10. Target-selective joint polymerase chain reaction: A robust and rapid method for high-throughput production of recombinant monoclonal antibodies from single cells

    PubMed Central

    2011-01-01

    Background During the development of a therapeutic antibody, large numbers of monoclonal antibodies are required to screen for those that are best suited for the desired activity. Although single cell-based immunoglobulin variable gene cloning is a powerful technique, the throughput of current methods remains an obstacle to the rapid production of large numbers of recombinant antibodies. Results We have developed a novel overlap extension polymerase chain reaction, the target-selective joint polymerase chain reaction (TS-jPCR), and applied it to the generation of linear immunoglobulin gene expression constructs. TS-jPCR is conducted using a PCR-amplified immunoglobulin variable gene and an immunoglobulin gene-selective cassette (Ig-cassette) that contains all essential elements for antibody expression and overlapping areas of immunoglobulin gene-specific homology. The TS-jPCR technique is simple and specific; the 3'-random nucleotide-tailed immunoglobulin variable gene fragment and the Ig-cassette are assembled into a linear immunoglobulin expression construct, even in the presence of nonspecifically amplified DNA. We also developed a robotic magnetic beads handling instrument for single cell-based cDNA synthesis to amplify immunoglobulin variable genes by rapid amplification of 5' cDNA ends PCR. Using these methods, we were able to produce recombinant monoclonal antibodies from large numbers of single plasma cells within four days. Conclusion Our system reduces the burden of antibody discovery and engineering by rapidly producing large numbers of recombinant monoclonal antibodies in a short period of time. PMID:21774833

  11. Commensurate CO2 Capture, and Shape Selectivity for HCCH over H2CCH2, in Zigzag Channels of a Robust Cu(I)(CN)(L) Metal-Organic Framework.

    PubMed

    Miller, Reece G; Southon, Peter D; Kepert, Cameron J; Brooker, Sally

    2016-06-20

    A novel copper(I) metal-organic framework (MOF), {[Cu(I)2(py-pzpypz)2(μ-CN)2]·MeCN}n (1·MeCN), with an unusual topology is shown to be robust, retaining crystallinity during desolvation to give 1, which has also been structurally characterized [py-pzpypz is 4-(4-pyridyl)-2,5-dipyrazylpyridine)]. Zigzag-shaped channels, which in 1·MeCN were occupied by disordered MeCN molecules, run along the c axis of 1, resulting in a significant solvent-accessible void space (9.6% of the unit cell volume). These tight zigzags, bordered by (Cu(I)CN)n chains, make 1 an ideal candidate for investigations into shape-based selectivity. MOF 1 shows a moderate enthalpy of adsorption for binding CO2 (-32 kJ mol(-1) at moderate loadings), which results in a good selectivity for CO2 over N2 of 4.8:1 under real-world operating conditions of a 15:85 CO2/N2 mixture at 1 bar. Furthermore, 1 was investigated for shape-based selectivity of small hydrocarbons, revealing preferential uptake of linear acetylene gas over ethylene and methane, partially due to kinetic trapping of the guests with larger kinetic diameters. PMID:27258550

  12. Robust expression and secretion of Xylanase1 in Chlamydomonas reinhardtii by fusion to a selection gene and processing with the FMDV 2A peptide.

    PubMed

    Rasala, Beth A; Lee, Philip A; Shen, Zhouxin; Briggs, Steven P; Mendez, Michael; Mayfield, Stephen P

    2012-01-01

    Microalgae have recently received attention as a potential low-cost host for the production of recombinant proteins and novel metabolites. However, a major obstacle to the development of algae as an industrial platform has been the poor expression of heterologous genes from the nuclear genome. Here we describe a nuclear expression strategy using the foot-and-mouth-disease-virus 2A self-cleavage peptide to transcriptionally fuse heterologous gene expression to antibiotic resistance in Chlamydomonas reinhardtii. We demonstrate that strains transformed with ble-2A-GFP are zeocin-resistant and accumulate high levels of GFP that is properly 'cleaved' at the FMDV 2A peptide, resulting in monomeric, cytosolic GFP that is easily detectable by in-gel fluorescence analysis or fluorescent microscopy. Furthermore, we used our ble2A nuclear expression vector to engineer the heterologous expression of the industrial enzyme, xylanase. We demonstrate that linking xyn1 expression to ble2A expression on the same open reading frame led to a dramatic (~100-fold) increase in xylanase activity in cell lysates compared to the unlinked construct. Finally, by inserting an endogenous secretion signal between the ble2A and xyn1 coding regions, we were able to target monomeric xylanase for secretion. The novel microalgae nuclear expression strategy described here enables the selection of transgenic lines that are efficiently expressing the heterologous gene-of-interest and should prove valuable for basic research as well as algal biotechnology. PMID:22937037

  13. An ant-plant by-product mutualism is robust to selective logging of rain forest and conversion to oil palm plantation.

    PubMed

    Fayle, Tom M; Edwards, David P; Foster, William A; Yusah, Kalsum M; Turner, Edgar C

    2015-06-01

    Anthropogenic disturbance and the spread of non-native species disrupt natural communities, but also create novel interactions between species. By-product mutualisms, in which benefits accrue as side effects of partner behaviour or morphology, are often non-specific and hence may persist in novel ecosystems. We tested this hypothesis for a two-way by-product mutualism between epiphytic ferns and their ant inhabitants in the Bornean rain forest, in which ants gain housing in root-masses while ferns gain protection from herbivores. Specifically, we assessed how the specificity (overlap between fern and ground-dwelling ants) and the benefits of this interaction are altered by selective logging and conversion to an oil palm plantation habitat. We found that despite the high turnover of ant species, ant protection against herbivores persisted in modified habitats. However, in ferns growing in the oil palm plantation, ant occupancy, abundance and species richness declined, potentially due to the harsher microclimate. The specificity of the fern-ant interactions was also lower in the oil palm plantation habitat than in the forest habitats. We found no correlations between colony size and fern size in modified habitats, and hence no evidence for partner fidelity feedbacks, in which ants are incentivised to protect fern hosts. Per species, non-native ant species in the oil palm plantation habitat (18 % of occurrences) were as important as native ones in terms of fern protection and contributed to an increase in ant abundance and species richness with fern size. We conclude that this by-product mutualism persists in logged forest and oil palm plantation habitats, with no detectable shift in partner benefits. Such persistence of generalist interactions in novel ecosystems may be important for driving ecosystem functioning. PMID:25575674

  14. Assessing the impact of end-member selection on the accuracy of satellite-based spatial variability models for actual evapotranspiration estimation

    NASA Astrophysics Data System (ADS)

    Long, Di; Singh, Vijay P.

    2013-05-01

    This study examines the impact of end-member (i.e., hot and cold extremes) selection on the performance and mechanisms of error propagation in satellite-based spatial variability models for estimating actual evapotranspiration, using the triangle, surface energy balance algorithm for land (SEBAL), and mapping evapotranspiration with high resolution and internalized calibration (METRIC) models. These models were applied to the soil moisture-atmosphere coupling experiment site in central Iowa on two Landsat Thematic Mapper/Enhanced Thematic Mapper Plus acquisition dates in 2002. Evaporative fraction (EF, defined as the ratio of latent heat flux to available energy) estimates from the three models at field and watershed scales were examined using varying end-members. Results show that the end-members fundamentally determine the magnitudes of EF retrievals at both field and watershed scales. The hot and cold extremes exercise a similar impact on the discrepancy between the EF estimates and the ground-based measurements, i.e., given a hot (cold) extreme, the EF estimates tend to increase with increasing temperature of cold (hot) extreme, and decrease with decreasing temperature of cold (hot) extreme. The coefficient of determination between the EF estimates and the ground-based measurements depends principally on the capability of remotely sensed surface temperature (Ts) to capture EF (i.e., depending on the correlation between Ts and EF measurements), being slightly influenced by the end-members. Varying the end-members does not substantially affect the standard deviation and skewness of the EF frequency distributions from the same model at the watershed scale. However, different models generate markedly different EF frequency distributions due to differing model physics, especially the limiting edges of EF defined in the remotely sensed vegetation fraction (fc) and Ts space. In general, the end-members cannot be properly determined because (1) they do not
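
    The shared mechanics of these end-member models can be illustrated with a toy calculation. Below is a minimal sketch, not the triangle/SEBAL/METRIC implementations themselves, assuming EF simply scales linearly between the hot and cold extremes in surface temperature; it shows how moving either extreme shifts every EF retrieval.

```python
import numpy as np

def evaporative_fraction(ts, ts_hot, ts_cold):
    """Simplified end-member interpolation: EF is 0 at the hot extreme
    (no evaporation) and 1 at the cold extreme. Illustrative only."""
    ef = (ts_hot - ts) / (ts_hot - ts_cold)
    return np.clip(ef, 0.0, 1.0)

ts = np.array([295.0, 300.0, 305.0, 310.0])   # toy pixel temperatures (K)
print(evaporative_fraction(ts, ts_hot=312.0, ts_cold=294.0))
print(evaporative_fraction(ts, ts_hot=315.0, ts_cold=290.0))  # different end-members shift every EF
```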

  15. CHF6001 I: a novel highly potent and selective phosphodiesterase 4 inhibitor with robust anti-inflammatory activity and suitable for topical pulmonary administration.

    PubMed

    Moretto, Nadia; Caruso, Paola; Bosco, Raffaella; Marchini, Gessica; Pastore, Fiorella; Armani, Elisabetta; Amari, Gabriele; Rizzi, Andrea; Ghidini, Eleonora; De Fanti, Renato; Capaldi, Carmelida; Carzaniga, Laura; Hirsch, Emilio; Buccellati, Carola; Sala, Angelo; Carnini, Chiara; Patacchini, Riccardo; Delcanale, Maurizio; Civelli, Maurizio; Villetti, Gino; Facchinetti, Fabrizio

    2015-03-01

    This study presents the pharmacologic characterization of CHF6001 [(S)-3,5-dichloro-4-(2-(3-(cyclopropylmethoxy)-4-(difluoromethoxy)phenyl)-2-(3-(cyclopropylmethoxy)-4-(methylsulfonamido)benzoyloxy)ethyl)pyridine 1-oxide], a novel phosphodiesterase (PDE)4 inhibitor designed for treating pulmonary inflammatory diseases via inhaled administration. CHF6001 was 7- and 923-fold more potent than roflumilast and cilomilast, respectively, in inhibiting PDE4 enzymatic activity (IC50 = 0.026 ± 0.006 nM). CHF6001 inhibited PDE4 isoforms A-D with equal potency, showed an elevated ratio of high-affinity rolipram binding site versus low-affinity rolipram binding site (i.e., >40) and displayed >20,000-fold selectivity versus PDE4 compared with a panel of PDEs. CHF6001 effectively inhibited (subnanomolar IC50 values) the release of tumor necrosis factor-α from human peripheral blood mononuclear cells, human acute monocytic leukemia cell line macrophages (THP-1), and rodent macrophages (RAW264.7 and NR8383). Moreover, CHF6001 potently inhibited the activation of oxidative burst in neutrophils and eosinophils, neutrophil chemotaxis, and the release of interferon-γ from CD4(+) T cells. In all these functional assays, CHF6001 was more potent than previously described PDE4 inhibitors, including roflumilast, UK-500,001 [2-(3,4-difluorophenoxy)-5-fluoro-N-((1S,4S)-4-(2-hydroxy-5-methylbenzamido)cyclohexyl)nicotinamide], and cilomilast, and it was comparable to GSK256066 [6-((3-(dimethylcarbamoyl)phenyl)sulfonyl)-4-((3-methoxyphenyl)amino)-8-methylquinoline-3-carboxamide]. When administered intratracheally to rats as a micronized dry powder, CHF6001 inhibited lipopolysaccharide-induced pulmonary neutrophilia (ED50 = 0.205 μmol/kg) and leukocyte infiltration (ED50 = 0.188 μmol/kg) with an efficacy comparable to a high dose of budesonide (1 μmol/kg i.p.). In sum, CHF6001 has the potential to be an effective topical treatment of conditions associated with pulmonary inflammation, including

  16. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, KODAMA is driven by an integrated procedure of cross-validation of the results: the discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. This built-in validation ensures the highest robustness of the obtained solution. The robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency, not at its beginning. PMID:24706821
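
    The core loop the abstract describes, a Monte Carlo search over class labels that keeps proposals improving cross-validated accuracy, can be sketched as follows. This is a toy re-implementation of the idea only (5-NN classifier, single run); the published algorithm adds further steps such as aggregating many runs before the final projection.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def kodama_like(X, n_classes=2, n_iter=300, seed=0):
    """Monte Carlo maximization of cross-validated accuracy: propose small
    label changes and keep those that do not lower CV accuracy.
    Toy version of the core loop only."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_classes, size=len(X))
    while np.bincount(labels, minlength=n_classes).min() < 3:
        labels = rng.integers(n_classes, size=len(X))
    clf = KNeighborsClassifier(n_neighbors=5)
    best = cross_val_score(clf, X, labels, cv=3).mean()
    for _ in range(n_iter):
        prop = labels.copy()
        prop[rng.integers(len(X))] = rng.integers(n_classes)
        if np.bincount(prop, minlength=n_classes).min() < 3:
            continue                  # keep every class large enough for CV
        score = cross_val_score(clf, X, prop, cv=3).mean()
        if score >= best:
            labels, best = prop, score
    return labels, best

X, _ = make_blobs(n_samples=60, centers=2, random_state=1)
labels, acc = kodama_like(X)
print("cross-validated accuracy reached:", round(acc, 2))
```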

  17. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed, organized in three parts. The first part surveys existing algorithms; different formulations of the problem and the theoretical solutions that have been suggested are reviewed. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. The third part suggests promising new areas for future research that combine different approaches currently known.

  18. Genomic selection & association mapping in rice: effect of trait genetic architecture, training population composition, marker number & statistical model on accuracy of rice genomic selection in elite, tropical rice breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its ef...
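
    The GS computation itself is a whole-genome regression: fit all markers jointly with shrinkage on a training population, then predict untested lines. A minimal ridge-regression sketch with simulated genotypes; marker counts, effect sizes, and noise levels below are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_lines, n_markers = 300, 1000                    # sizes are hypothetical
X = rng.binomial(2, 0.3, (n_lines, n_markers)).astype(float)   # SNPs coded 0/1/2
effects = rng.normal(0.0, 0.05, n_markers)        # simulated marker effects
y = X @ effects + rng.normal(0.0, 1.0, n_lines)   # phenotype = genetics + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = Ridge(alpha=100.0).fit(X_tr, y_tr)        # shrink all markers jointly
# GS accuracy is usually reported as the correlation between predicted
# and observed values in the validation set
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"predictive accuracy r = {r:.2f}")
```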

  19. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  20. Evaluation of the generality and accuracy of a new mesh morphing procedure for the human femur.

    PubMed

    Grassi, Lorenzo; Hraiech, Najah; Schileo, Enrico; Ansaloni, Mauro; Rochette, Michel; Viceconti, Marco

    2011-01-01

    Various papers described mesh morphing techniques for computational biomechanics, but none of them provided a quantitative assessment of generality, robustness, automation, and accuracy in predicting strains. This study aims to quantitatively evaluate the performance of a novel mesh-morphing algorithm. A mesh-morphing algorithm based on radial-basis functions and on manual selection of corresponding landmarks on template and target was developed. The periosteal geometries of 100 femurs were derived from a computed tomography scan database and used to test the algorithm generality in producing finite element (FE) morphed meshes. A published benchmark, consisting of eight femurs for which in vitro strain measurements and standard FE model strain prediction accuracy were available, was used to assess the accuracy of morphed FE models in predicting strains. Relevant parameters were identified to test the algorithm robustness to operative conditions. Time and effort needed were evaluated to define the algorithm degree of automation. Morphing was successful for 95% of the specimens, with mesh quality indicators comparable to those of standard FE meshes. Accuracy of the morphed meshes in predicting strains was good (R(2)>0.9, RMSE%<10%) and not statistically different from the standard meshes (p-value=0.1083). The algorithm was robust to inter- and intra-operator variability and to target geometry refinement (p-value>0.05), and partially robust to the number of landmarks used. Producing a morphed mesh starting from the triangularized geometry of the specimen requires on average 10 min. The proposed method is general, robust, automated, and accurate enough to be used in bone FE modelling from diagnostic data, and prospectively in applications such as statistical shape modelling. PMID:21036655
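
    The morphing step the abstract describes, an RBF map fitted to manually picked landmark pairs and then applied to every template mesh node, can be sketched with SciPy; the landmark coordinates and mesh nodes below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark pairs picked on the template and target surfaces
template_lm = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                        [0., 0., 1.], [1., 1., 1.]])
target_lm = template_lm + np.array([[.05, 0., 0.], [0., .10, 0.], [0., 0., -.05],
                                    [.02, .02, 0.], [-.03, 0., .04]])

# One RBF map from template space to target space, fitted to the landmarks
warp = RBFInterpolator(template_lm, target_lm, kernel="thin_plate_spline")

# Morph: apply the warp to every node of the template mesh (toy nodes here)
template_nodes = np.random.default_rng(1).random((10, 3))
morphed_nodes = warp(template_nodes)
print(morphed_nodes.shape)
```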

  1. Performance and Accuracy of LAPACK's Symmetric TridiagonalEigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel,Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study spans a range of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n·ε), where n is the matrix dimension and ε the machine precision; the accuracy of BI and MR is generally O(n·ε). (5) MR is preferable to BI for subset computations.
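
    Several of these tridiagonal drivers are exposed through SciPy's LAPACK wrapper, which makes a small-scale version of the speed/accuracy comparison easy to reproduce; as far as scipy.linalg.eigh_tridiagonal goes, the DC driver is not among the exposed options.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(0)
n = 500
d = rng.standard_normal(n)        # diagonal
e = rng.standard_normal(n - 1)    # off-diagonal

T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
# Drivers: 'stev' (implicit QL/QR), 'stebz' (bisection, plus stein for
# the eigenvectors: the BI method), 'stemr' (MRRR, the MR method)
for driver in ("stev", "stebz", "stemr"):
    w, v = eigh_tridiagonal(d, e, lapack_driver=driver)
    resid = np.max(np.abs(T @ v - v * w))
    print(f"{driver}: max residual {resid:.2e}")
```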

  2. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
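
    The transformation described, maximizing scatter between chunklets while minimizing scatter within them, is an LDA-style eigenproblem. A minimal stand-in sketch follows; it is not the paper's exact local formulation, which re-selects chunklets from the k-nearest neighbors of each testing sample.

```python
import numpy as np

def chunklet_transform(X, chunklets):
    """LDA-style metric: eigenvectors of pinv(Sw) @ Sb, where Sw and Sb are
    the within- and between-chunklet scatter matrices. A stand-in for the
    paper's local, per-test-sample formulation."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for idx in chunklets:
        Xc = X[idx]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)           # within-chunklet scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)  # between-chunklet scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4)) + np.repeat(np.eye(3, 4) * 3, 10, axis=0)
chunklets = [range(0, 10), range(10, 20), range(20, 30)]
W = chunklet_transform(X, chunklets)
X_proj = X @ W[:, :2]      # distances in this space realize the learned metric
```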

  3. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show a result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  4. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show a result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
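
    The query-level evaluation reduces to comparing a query's result set against a trusted reference. A toy illustration with hypothetical row ids:

```python
# Judge a query's result set against a trusted reference sample rather
# than the whole (partly dirty) table. Row ids are hypothetical.
query_result = {101, 102, 103, 107}     # ids the query returned
reference    = {101, 102, 104, 107}     # ids known to be correct

tp = len(query_result & reference)
precision = tp / len(query_result)      # fraction of returned rows that are right
recall    = tp / len(reference)         # fraction of right rows that were returned
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```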

  5. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  6. Ruggedness and robustness testing.

    PubMed

    Dejaegher, Bieke; Heyden, Yvan Vander

    2007-07-27

    Due to the strict regulatory requirements, especially in pharmaceutical analysis, analysis results with an acceptable quality should be reported. Thus, a proper validation of the measurement method is required. In this context, ruggedness and robustness testing becomes increasingly more important. In this review, the definitions of ruggedness and robustness are given, followed by a short explanation of the different approaches applied to examine the ruggedness or the robustness of an analytical method. Then, case studies, describing ruggedness or robustness tests of high-performance liquid chromatographic (HPLC), capillary electrophoretic (CE), gas chromatographic (GC), supercritical fluid chromatographic (SFC), and ultra-performance liquid chromatographic (UPLC) assay methods, are critically reviewed and discussed. Mainly publications of the last 10 years are considered. PMID:17379230

  7. Approaches to robustness

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Heaney, Kevin D.

    2003-04-01

    The term robustness in signal processing applications usually refers to approaches that are not degraded significantly when the assumptions that were invoked in defining the processing algorithm are no longer valid. Highly tuned algorithms that fall apart in real-world conditions are useless. The classic example is super-directive arrays of closely spaced elements: the very narrow beams and high directivity that could be predicted under ideal conditions could not be achieved under realistic conditions of amplitude, phase and position errors. The robust design tries to take into account the real environment as part of the optimization problem. This problem led to the introduction of the white noise gain constraint and diagonal loading in adaptive beamforming. Multiple linear constraints have been introduced in pursuit of robustness. Sonar systems such as towed arrays operate in less than ideal conditions, making robustness a concern. A special problem in sonar systems is failed array elements, which leads to severe degradation in beam patterns and bearing response patterns. Another robustness issue arises in matched field processing that uses an acoustic propagation model in the beamforming, where knowledge of the environmental parameters is usually limited. This paper reviews the various approaches to achieving robustness in sonar systems.
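
    The diagonal-loading fix mentioned above can be shown in a few lines: add a scaled identity to the sample covariance before forming the MVDR weights, trading peak gain for robustness to mismatch and estimation error. The array geometry and loading level below are illustrative.

```python
import numpy as np

def mvdr_weights(R, a, loading=0.0):
    """MVDR weights w = (R + dI)^{-1} a, normalized to unit gain in the
    look direction. Diagonal loading implements the white-noise-gain-
    constraint idea discussed above."""
    Rl = R + loading * np.eye(R.shape[0])
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)

rng = np.random.default_rng(0)
n, snaps = 8, 100                                  # sensors, snapshots (toy)
X = (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
R = X @ X.conj().T / snaps                         # sample covariance
a = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))  # half-wavelength steering vector
w_tuned = mvdr_weights(R, a)                       # highly tuned, fragile
w_loaded = mvdr_weights(R, a, loading=0.1 * np.trace(R).real / n)
print("white noise gain:", np.vdot(w_tuned, w_tuned).real,
      "->", np.vdot(w_loaded, w_loaded).real)      # loading lowers it
```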

  8. Liquid chromatography-high resolution/ high accuracy (tandem) mass spectrometry-based identification of in vivo generated metabolites of the selective androgen receptor modulator ACP-105 for doping control purposes.

    PubMed

    Thevis, Mario; Thomas, Andreas; Piper, Thomas; Krug, Oliver; Delahaut, Philippe; Schänzer, Wilhelm

    2014-01-01

    Selective androgen receptor modulators (SARMs) represent an emerging class of therapeutics which have been prohibited in sport as anabolic agents according to the regulations of the World Anti-Doping Agency (WADA) since 2008. Within the past three years, numerous adverse analytical findings with SARMs in routine doping control samples have been reported despite missing clinical approval of these substances. Hence, preventive doping research concerning the metabolism and elimination of new therapeutic entities of the class of SARMs is vital for efficient and timely sports drug testing programs, as banned compounds are most efficiently screened when viable targets (for example, characteristic metabolites) are identified. In the present study, the metabolism of ACP-105, a novel SARM drug candidate, was studied in vivo in rats. Following oral administration, urine samples were collected over a period of seven days and analyzed for metabolic products by liquid chromatography-high resolution/high accuracy (tandem) mass spectrometry. Samples were subjected to enzymatic hydrolysis prior to liquid-liquid extraction, and a total of seven major phase-I metabolites were detected, three of which were attributed to monohydroxylated and four to bishydroxylated ACP-105. The hydroxylation sites were assigned by means of diagnostic product ions and respective dissociation pathways of the analytes following positive or negative ionization and collisional activation as well as selective chemical derivatization. The identified metabolites were used as target compounds to investigate their traceability in rat elimination urine samples, and monohydroxylated and bishydroxylated species were detectable for up to four and six days post-administration, respectively. PMID:24881457

  9. Mistranslation drives the evolution of robustness in TEM-1 β-lactamase

    PubMed Central

    Bratulic, Sinisa; Gerber, Florian; Wagner, Andreas

    2015-01-01

    How biological systems such as proteins achieve robustness to ubiquitous perturbations is a fundamental biological question. Such perturbations include errors that introduce phenotypic mutations into nascent proteins during the translation of mRNA. These errors are remarkably frequent. They are also costly, because they reduce protein stability and help create toxic misfolded proteins. Adaptive evolution might reduce these costs of protein mistranslation by two principal mechanisms. The first increases the accuracy of translation via synonymous “high fidelity” codons at especially sensitive sites. The second increases the robustness of proteins to phenotypic errors via amino acids that increase protein stability. To study how these mechanisms are exploited by populations evolving in the laboratory, we evolved the antibiotic resistance gene TEM-1 in Escherichia coli hosts with either normal or high rates of mistranslation. We analyzed TEM-1 populations that evolved under relaxed and stringent selection for antibiotic resistance by single molecule real-time sequencing. Under relaxed selection, mistranslating populations reduce mistranslation costs by reducing TEM-1 expression. Under stringent selection, they efficiently purge destabilizing amino acid changes. More importantly, they accumulate stabilizing amino acid changes rather than synonymous changes that increase translational accuracy. In the large populations we study, and on short evolutionary timescales, the path of least resistance in TEM-1 evolution consists of reducing the consequences of translation errors rather than the errors themselves. PMID:26392536

  10. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  11. Robust control of accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, W. Joel D.; Abdallah, Chaouki T.

    1991-07-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  12. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflates their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. SIMPLS is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then, the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods have been compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
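
    The two-step idea behind RPCR, dimension reduction followed by outlier-resistant regression, can be imitated with scikit-learn building blocks. Note that ROBPCA and RSIMPLS themselves are not in scikit-learn, so plain PCA plus Huber regression below is only a rough stand-in for the robust variants.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import HuberRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)   # near-collinear columns
y = 2 * X[:, 0] - X[:, 2] + rng.standard_normal(n)
y[:5] += 15.0                                        # a few vertical outliers

# Step 1: reduce the collinear predictors to a few scores.
# Step 2: regress on the scores with an outlier-resistant loss.
model = make_pipeline(PCA(n_components=5), HuberRegressor())
model.fit(X, y)
print("R^2 on the clean majority:", round(model.score(X[5:], y[5:]), 2))
```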

  13. Biological robustness: paradigms, mechanisms, and systems principles.

    PubMed

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  14. Evolution, robustness, and the cost of complexity

    NASA Astrophysics Data System (ADS)

    Leclerc, Robert D.

    Evolutionary systems biology is the study of how regulatory networks evolve under the influence of natural selection, mutation, and the environment. It attempts to explain the dynamics, architecture, and variational properties of regulatory networks and how these relate to the origins, evolution and maintenance of complex and diverse functions. Key questions in the field of evolutionary systems biology ask how robustness evolves, what factors drive its evolution, and what underlying mechanisms implement it. In this dissertation, I investigate the evolution of robustness in artificial gene regulatory networks. I show how different conceptions of robustness fit together as pieces of a general notion of robustness, and I show how this relationship implies potential tradeoffs in how robustness can be implemented. I present results which suggest that inherent logistical problems with genetic recombination may help drive the evolution of modularity in the genotype-phenotype map. Finally, I show that robustness implies a parsimonious network structure, one which is sparsely connected and not unnecessarily complex. These results challenge conclusions drawn from many high-profile studies, and may offer a broad new perspective on biological systems. Because life must orchestrate its existence on random nonlinear thermodynamic processes, it will be designed and implemented in the most probable way. Life turns the law of entropy back onto itself to root out every inefficiency, every waste, and every surprise.

  15. Biological Robustness: Paradigms, Mechanisms, and Systems Principles

    PubMed Central

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  16. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  17. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), p = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  18. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.

  19. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure. PMID:25974553
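
    The paper's percolation model is straightforward to simulate: remove each link with probability proportional to its spatial length and measure the surviving giant component. A minimal sketch on a random geometric graph; the length normalization and the `strength` knob are assumptions made here for illustration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy spatial network: nodes in the unit square, links between nearby nodes
G = nx.random_geometric_graph(50, radius=0.25, seed=0)
pos = nx.get_node_attributes(G, "pos")
for u, v in G.edges():
    G.edges[u, v]["length"] = float(np.hypot(*(np.array(pos[u]) - np.array(pos[v]))))
lmax = max(d["length"] for _, _, d in G.edges(data=True))

def surviving_fraction(G, strength):
    """Fail each link with probability proportional to its length and
    return the fraction of nodes in the largest surviving component."""
    H = G.copy()
    for u, v, d in list(H.edges(data=True)):
        if rng.random() < strength * d["length"] / lmax:
            H.remove_edge(u, v)
    return len(max(nx.connected_components(H), key=len)) / H.number_of_nodes()

print(np.mean([surviving_fraction(G, 0.5) for _ in range(200)]))
```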

  20. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  1. Dealing with Outliers: Robust, Resistant Regression

    ERIC Educational Resources Information Center

    Glasser, Leslie

    2007-01-01

    Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
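
    A standard example of regression based on median selection is Theil-Sen, the median of pairwise slopes, which SciPy provides directly; the abstract does not name its exact method, so Theil-Sen is offered here as the canonical representative, with a single gross outlier to show the robustness.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 3.0 * x + 1.0 + rng.normal(0, 0.5, 20)
y[18] += 40.0                                 # one gross outlier

slope, intercept, lo, hi = theilslopes(y, x)  # median of pairwise slopes
ls_slope = np.polyfit(x, y, 1)[0]             # ordinary least squares
print(f"Theil-Sen slope: {slope:.2f} (95% CI {lo:.2f}-{hi:.2f}); least squares: {ls_slope:.2f}")
```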

  2. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  3. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609
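
    The inverse probability censoring weighting (IPCW) that the doubly robust estimators improve upon can be sketched directly: uncensored subjects are up-weighted by the inverse of the Kaplan-Meier estimate of the censoring survival function. A toy version, ignoring tie-breaking details, not the paper's doubly robust construction itself:

```python
import numpy as np

def ipcw_weights(time, event):
    """Uncensored subject at time T gets weight 1/G(T-), where G is the
    Kaplan-Meier estimate of the censoring survival function; censored
    subjects get weight 0. Toy version ignoring ties."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    n = len(t)
    G_before = np.ones(n)      # G evaluated just before each subject's time
    surv = 1.0
    for i in range(n):
        G_before[i] = surv
        if e[i] == 0:          # a censoring event shrinks G afterwards
            surv *= 1.0 - 1.0 / (n - i)
    w = np.where(e == 1, 1.0 / np.maximum(G_before, 1e-8), 0.0)
    out = np.empty(n)
    out[order] = w
    return out

time = np.array([2.0, 5.0, 3.0, 8.0, 4.0])
event = np.array([1, 0, 1, 1, 0])        # 1 = event observed, 0 = censored
w = ipcw_weights(time, event)
mu = 4.0                                  # a candidate node prediction
ipcw_loss = np.sum(w * (time - mu) ** 2) / np.sum(w)  # weighted squared-error loss
print(w, ipcw_loss)
```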

  4. Robust Collaborative Recommendation

    NASA Astrophysics Data System (ADS)

    Burke, Robin; O'Mahony, Michael P.; Hurley, Neil J.

    Collaborative recommender systems are vulnerable to malicious users who seek to bias their output, causing them to recommend (or not recommend) particular items. This problem has been an active research topic since 2002. Researchers have found that the most widely-studied memory-based algorithms have significant vulnerabilities to attacks that can be fairly easily mounted. This chapter discusses these findings and the responses that have been investigated, especially detection of attack profiles and the implementation of robust recommendation algorithms.

  5. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  6. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness from the "metabolite" framework by using the novel concept of "flux-sum," which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher the metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of the metabolites essential to cell survival, and then "acclimator" metabolites that can control the cell growth were discovered. Furthermore, this concept of "metabolite essentiality" should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and designing new drugs that can fight against multi-antibiotic resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on dynamic properties of cellular metabolism.
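
    The flux-sum itself is a one-line computation once a stoichiometric matrix S and a flux vector v are given: at pseudo-steady state inflow equals outflow, so the flux-sum of metabolite i is half the total absolute flux through it. The two-metabolite network below is a toy example.

```python
import numpy as np

# Stoichiometric matrix S (metabolites x reactions) and flux vector v (toy)
S = np.array([[ 1, -1,  0,  0],    # A: made by r0, consumed by r1
              [ 0,  1, -1, -1]])   # B: made by r1, consumed by r2 and r3
v = np.array([2.0, 2.0, 1.5, 0.5])

assert np.allclose(S @ v, 0)                # pseudo-steady state holds
flux_sum = 0.5 * np.abs(S * v).sum(axis=1)  # half of total |S_ij * v_j|
print(flux_sum)                             # inflow (= outflow) per metabolite
```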

  7. Robust impedance shaping telemanipulation

    SciTech Connect

    Colgate, J.E.

    1993-08-01

    When a human operator performs a task via a bilateral manipulator, the feel of the task is embodied in the mechanical impedance of the manipulator. Traditionally, a bilateral manipulator is designed for transparency; i.e., so that the impedance reflected through the manipulator closely approximates that of the task. Impedance shaping bilateral control, introduced here, differs in that it treats the bilateral manipulator as a means of constructively altering the impedance of a task. This concept is particularly valuable if the characteristic dimensions (e.g., force, length, time) of the task impedance are very different from those of the human limb. It is shown that a general form of impedance shaping control consists of a conventional power-scaling bilateral controller augmented with a real-time interactive task simulation (i.e., a virtual environment). An approach to impedance shaping based on kinematic similarity between tasks of different scale is introduced and illustrated with an example. It is shown that an important consideration in impedance shaping controller design is robustness; i.e., guaranteeing the stability of the operator/manipulator/task system. A general condition for the robustness of a bilateral manipulator is derived. This condition is based on the structured singular value (μ). An example of robust impedance shaping bilateral control is presented and discussed.

  8. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.

  9. Towards designing robust coupled networks

    PubMed Central

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-01-01

    Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantee a smooth transition in robustness. Our method which is based on betweenness is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent on the protection strategy. PMID:23752705
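
    The selection rule itself is compact: rank nodes by betweenness centrality and declare the top k autonomous. A sketch with networkx, where the graph and the budget k are illustrative; evaluating the resulting cascade robustness is a larger simulation.

```python
import networkx as nx

G = nx.erdos_renyi_graph(100, 0.05, seed=1)   # stand-in for a real grid topology
k = 10                                         # budget of autonomous nodes (assumed)
bc = nx.betweenness_centrality(G)
autonomous = sorted(bc, key=bc.get, reverse=True)[:k]
print(autonomous)
```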

  10. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention on internal somatic cues has been labeled interoception. The present study examined the roles of physiological arousal and subjective anxiety in interoceptive accuracy. Infrequent panickers and nonanxious participants completed an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded based on polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462

  11. A robust multi-frame image blind deconvolution algorithm via total variation

    NASA Astrophysics Data System (ADS)

    Zhou, Haiyang; Xia, Guo; Liu, Qianshun; Yu, Feihong

    2015-10-01

    Image blind deconvolution is a practical inverse problem in modern imaging sciences, including consumer photography, astronomical imaging, medical imaging, and microscopy. Among recent blind deconvolution algorithms, total variation based methods are well suited to large blur kernels, but their computational cost is heavy and they do not handle errors in the estimated kernel properly. Moreover, using the whole image to estimate the blur kernel is inaccurate, because regions with insufficient edge information harm the accuracy of the estimate. Here, we propose a robust multi-frame image blind deconvolution algorithm to handle this complicated imaging model and apply it in the engineering community. In the proposed method, a patch and kernel selection scheme first selects effective patches for kernel estimation, so the whole image need not be used; a total variation based kernel estimation algorithm then estimates the blur kernel; next, a kernel refinement scheme refines the pre-estimated multi-frame kernels; finally, a robust non-blind deconvolution method recovers the final latent sharp image with the refined blur kernels. Experiments on both synthesized and real images demonstrate the efficiency and robustness of our algorithm and illustrate that this approach not only converges rapidly but also effectively recovers a high-quality latent image from multiple blurry images.
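
    For readers unfamiliar with the total variation machinery this abstract builds on, the following sketch shows only the generic non-blind building block: recovering a latent image from a known, normalized blur kernel under a smoothed TV prior by plain gradient descent. It is our simplified illustration with assumed parameter values, not the authors' multi-frame algorithm, which additionally estimates and refines the kernel.

      import numpy as np
      from scipy.signal import fftconvolve

      def tv_grad(x, eps=1e-2):
          # Gradient of the smoothed TV term sum(sqrt(|grad x|^2 + eps^2)).
          gx = np.diff(x, axis=1, append=x[:, -1:])    # forward differences
          gy = np.diff(x, axis=0, append=x[-1:, :])
          norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
          px, py = gx / norm, gy / norm
          # Negative divergence (the adjoint); boundary handling simplified.
          return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

      def tv_deconv(y, k, lam=5e-3, step=0.1, iters=300):
          # Minimize 0.5*||k*x - y||^2 + lam*TV(x) by gradient descent.
          x = y.copy()
          k_flip = k[::-1, ::-1]                       # adjoint of convolution
          for _ in range(iters):
              resid = fftconvolve(x, k, mode="same") - y
              grad = fftconvolve(resid, k_flip, mode="same") + lam * tv_grad(x)
              x = np.clip(x - step * grad, 0.0, 1.0)   # keep intensities in [0, 1]
          return x

      # Toy usage: blur a synthetic image with a box kernel, then restore it.
      x_true = np.zeros((64, 64)); x_true[20:44, 20:44] = 1.0
      k = np.ones((7, 7)) / 49.0
      y = fftconvolve(x_true, k, mode="same")
      y = np.clip(y + 0.01 * np.random.default_rng(0).normal(size=y.shape), 0, 1)
      x_rec = tv_deconv(y, k)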

  12. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  13. The effect of genetic robustness on evolvability in digital organisms

    PubMed Central

    2008-01-01

    Background Recent work has revealed that many biological systems keep functioning in the face of mutations and therefore can be considered genetically robust. However, several issues related to robustness remain poorly understood, such as its implications for evolvability (the ability to produce adaptive evolutionary innovations). Results Here, we use the Avida digital evolution platform to explore the effects of genetic robustness on evolvability. First, we obtained digital organisms with varying levels of robustness by evolving them under combinations of mutation rates and population sizes previously shown to select for different levels of robustness. Then, we assessed the ability of these organisms to adapt to novel environments in a variety of experimental conditions. The data consistently support that, for simple environments, genetic robustness fosters long-term evolvability, whereas, in the short-term, robustness is not beneficial for evolvability but may even be a counterproductive trait. For more complex environments, however, results are less conclusive. Conclusion The finding that the effect of robustness on evolvability is time-dependent is compatible with previous results obtained using RNA folding algorithms and transcriptional regulation models. A likely scenario is that, in the short-term, genetic robustness hampers evolvability because it reduces the intensity of selection, but that, in the long-term, relaxed selection facilitates the accumulation of genetic diversity and thus, promotes evolutionary innovation. PMID:18854018

  14. Robust maximum a posteriori image super-resolution

    NASA Astrophysics Data System (ADS)

    Vrigkas, Michalis; Nikou, Christophoros; Kondi, Lisimachos P.

    2014-07-01

    A global robust M-estimation scheme for maximum a posteriori (MAP) image super-resolution which efficiently addresses the presence of outliers in the low-resolution images is proposed. In iterative MAP image super-resolution, the objective function to be minimized involves the highly resolved image, a parameter controlling the step size of the iterative algorithm, and a parameter weighing the data fidelity term with respect to the smoothness term. Apart from the robust estimation of the high-resolution image, the contribution of the proposed method is twofold: (1) the robust computation of the regularization parameters controlling the relative strength of the prior with respect to the data fidelity term and (2) the robust estimation of the optimal step size in the update of the high-resolution image. Experimental results demonstrate that integrating these estimations into a robust framework leads to significant improvement in the accuracy of the high-resolution image.
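
    The M-estimation idea underlying such schemes can be shown in a few lines. The sketch below is a generic illustration (a robust linear fit, not the super-resolution objective above): iteratively reweighted least squares with Huber weights and a MAD scale estimate, which down-weights outlying observations instead of discarding them.

      import numpy as np

      def huber_weights(r, c=1.345):
          # Unit weight for small residuals, down-weight for outliers.
          a = np.abs(r)
          return np.where(a <= c, 1.0, c / a)

      def robust_linear_fit(A, b, iters=20):
          x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary LS start
          for _ in range(iters):
              r = A @ x - b
              mad = np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
              w = huber_weights(r / (1.4826 * mad + 1e-12))
              # Weighted normal equations: (A^T W A) x = A^T W b.
              x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
          return x

      # Toy demo: a line fit where 10% of the observations are gross outliers.
      rng = np.random.default_rng(0)
      t = rng.uniform(0, 1, 200)
      A = np.column_stack([t, np.ones_like(t)])
      b = 2.0 * t + 1.0 + 0.05 * rng.normal(size=200)
      b[:20] += 5.0                                       # inject outliers
      print(robust_linear_fit(A, b))                      # close to [2.0, 1.0]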

  15. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  16. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  17. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these are the most important features of complexity, and that they are not accidents of evolution or artifices of engineering design but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  18. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  19. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  20. Robust Systems Test Framework

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  1. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-04-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  2. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
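
    To show the l1 mechanism in isolation, the sketch below applies it to a scalar random-walk state rather than a spatial-temporal field; a soft-thresholded innovation plays the role of the sparse outlier estimate. This is our toy reconstruction of the general idea, not the paper's kriged estimator, and all parameter values are assumptions.

      import numpy as np

      def soft(z, t):
          # Soft-thresholding, the proximal operator of the l1 norm.
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def robust_kf(y, q=0.01, r=0.1, lam=3.0):
          # Scalar random-walk state; lam ~ outlier threshold in innovation SDs.
          x, p = y[0], 1.0
          xs = []
          for yk in y:
              p += q                                  # predict
              innov = yk - x
              o = soft(innov, lam * np.sqrt(p + r))   # sparse outlier estimate
              g = p / (p + r)                         # Kalman gain
              x += g * (innov - o)                    # outlier-cleaned update
              p *= (1 - g)
              xs.append(x)
          return np.array(xs)

      # Demo: smooth signal plus sparse spikes.
      rng = np.random.default_rng(1)
      truth = np.cumsum(0.1 * rng.normal(size=300))
      y = truth + np.sqrt(0.1) * rng.normal(size=300)
      y[::50] += 8.0                                  # sparse outliers
      est = robust_kf(y)
      print(np.max(np.abs(est - truth)))              # spikes largely rejected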

  3. Active relearning for robust supervised training of emphysema patterns.

    PubMed

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A; Bartholmai, Brian J; Robb, Richard A

    2014-08-01

    Radiologists are adept at recognizing the character and extent of lung parenchymal abnormalities in computed tomography (CT) scans. However, inconsistent differential diagnosis due to subjective aggregation necessitates the exploration of automated classification based on supervised or unsupervised learning. The robustness of supervised learning depends on the training samples. Towards optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach to minimize ambiguity in the selected training samples. An experienced thoracic radiologist selected 412 regions of interest (ROIs) across 15 datasets to represent 124, 129, 139 and 20 training samples of mild, moderate, and severe emphysema and normal appearance, respectively. Using multi-view (multiple metrics to capture complementary features) inductive learning, an ensemble of seven un-optimized support vector machine (SVM) models, each based on a specific metric, was constructed in less than 6 s. The training samples were classified using the seven SVM models and consensus labels were created using majority voting. In the active relearning phase, the ensemble-expert label conflicts were resolved by the expert. The efficacy and generality of active relearning feedback were assessed in the optimized parameter space of six general-purpose classifiers across the seven dissimilarity metrics. The proposed just-in-time active relearning feedback with un-optimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The average improvement in accuracy of the six classifiers in their optimized parameter space was 21%. The proposed cooperative feedback method enhances the quality of training samples used to construct automated classification of emphysematous CT scans. Such an approach could lead to substantial improvement in quantification of emphysema. PMID:24771303

  4. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  5. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al. (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  6. Robustness in multicellular systems

    NASA Astrophysics Data System (ADS)

    Xavier, Joao

    2011-03-01

    Cells and organisms cope with the task of maintaining their phenotypes in the face of numerous challenges. Much attention has recently been paid to questions of how cells control molecular processes to ensure robustness. However, many biological functions are multicellular and depend on interactions, both physical and chemical, between cells. We use a combination of mathematical modeling and molecular biology experiments to investigate the features that convey robustness to multicellular systems. Cell populations must react to external perturbations by sensing environmental cues and acting coordinately in response. At the same time, they face a major challenge: the emergence of conflict from within. Multicellular traits are prone to cells with exploitative phenotypes that do not contribute to shared resources yet benefit from them. This is true in populations of single-cell organisms that have social lifestyles, where conflict can lead to the emergence of social ``cheaters,'' as well as in multicellular organisms, where conflict can lead to the evolution of cancer. I will describe features that diverse multicellular systems can have to eliminate potential conflicts as well as external perturbations.

  7. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces—randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  8. Fooled by local robustness.

    PubMed

    Sniedovich, Moshe

    2012-10-01

    One would have expected the considerable public debate created by Nassim Taleb's two best selling books on uncertainty, Fooled by Randomness and The Black Swan, to inspire greater caution to the fundamental difficulties posed by severe uncertainty. Yet, methodologies exhibiting an incautious approach to uncertainty have been proposed recently in a range of publications. So, the objective of this short note is to call attention to a prime example of an incautious approach to severe uncertainty that is manifested in the proposition to use the concept radius of stability as a measure of robustness against severe uncertainty. The central proposition of this approach, which is exemplified in info-gap decision theory, is this: use a simple radius of stability model to analyze and manage a severe uncertainty that is characterized by a vast uncertainty space, a poor point estimate, and a likelihood-free quantification of uncertainty. This short discussion serves then as a reminder that the generic radius of stability model is a model of local robustness. It is, therefore, utterly unsuitable for the treatment of severe uncertainty when the latter is characterized by a poor estimate of the parameter of interest, a vast uncertainty space, and a likelihood-free quantification of uncertainty. PMID:22384828

  9. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R

  10. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when solving the training procedure of CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) that simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs. PMID:26080050
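
    The optimization pattern described, momentum built from current and historical gradients plus a Lipschitz-based step size, matches the standard accelerated proximal gradient template, which attains the optimal O(1/k^2) rate for composite l1-regularized objectives. The sketch below is that generic template (our reconstruction, not the authors' code), demonstrated on a sparse least-squares stand-in for the CRF loss.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def accelerated_prox_grad(grad_f, L, lam, dim, iters=500):
          # Optimal first-order method for min_w f(w) + lam*||w||_1.
          w = np.zeros(dim)
          v = w.copy()           # extrapolation point (carries gradient history)
          t = 1.0
          for _ in range(iters):
              w_next = soft_threshold(v - grad_f(v) / L, lam / L)  # 1/L step
              t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              v = w_next + ((t - 1.0) / t_next) * (w_next - w)     # momentum
              w, t = w_next, t_next
          return w

      # Demo on a sparse least-squares stand-in for the CRF training loss.
      rng = np.random.default_rng(2)
      A = rng.normal(size=(100, 50))
      w_true = np.zeros(50); w_true[:5] = 1.0
      b = A @ w_true
      grad_f = lambda w: A.T @ (A @ w - b) / len(b)
      L = np.linalg.eigvalsh(A.T @ A / len(b)).max()  # Lipschitz constant
      w_hat = accelerated_prox_grad(grad_f, L, lam=0.01, dim=50)
      print(int(np.sum(np.abs(w_hat) > 0.1)))         # ~5 relevant features kept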

  11. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50 to one range in flowrates. Three types were selected and were further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  12. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
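
    The additive DLS correction named in the abstract has a compact closed form, dq = J^T (J J^T + λ² I)^{-1} e, which remains well-behaved near singular configurations because the damping term keeps the matrix invertible. The sketch below demonstrates it on a toy two-link planar arm; the link lengths, damping value and target are illustrative assumptions.

      import numpy as np

      def dls_correction(J, pose_err, damping=0.05):
          # Additive joint update: dq = J^T (J J^T + damping^2 I)^{-1} e.
          m = J.shape[0]
          return J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(m), pose_err)

      # Toy 2-link planar arm (hypothetical example, both link lengths 1).
      def fk(q):
          return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                           np.sin(q[0]) + np.sin(q[0] + q[1])])

      def jac(q):
          s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
          c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
          return np.array([[-s1 - s12, -s12],
                           [ c1 + c12,  c12]])

      q = np.array([0.3, 0.8])
      target = np.array([1.2, 1.0])
      for _ in range(50):                 # iterate the correction to convergence
          q = q + dls_correction(jac(q), target - fk(q))
      print(fk(q))                        # approximately [1.2, 1.0]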

  13. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The ability to automatically design GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although the framework generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055

  14. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  15. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  16. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is the full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second is the verification that the process is robust, meaning that it yields repeatable results. In order to compensate effectively, a minimum clamping concept is then defined. Once these preconditions are fulfilled, the tools can be compensated.

  17. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  18. Parallax-Robust Surveillance Video Stitching.

    PubMed

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for time-synchronized surveillance video. An efficient two-stage video stitching procedure is proposed in this paper to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes, which is location-dependent and turns out to be more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756

  19. Parallax-Robust Surveillance Video Stitching

    PubMed Central

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for time-synchronized surveillance video. An efficient two-stage video stitching procedure is proposed in this paper to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes, which is location-dependent and turns out to be more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756

  20. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and had performance that exceeded the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of the nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in response errors of less than 10% compared to the full nonlinear response.

  1. Robust Automatic Pectoral Muscle Segmentation from Mammograms Using Texture Gradient and Euclidean Distance Regression.

    PubMed

    Bora, Vibha Bafna; Kothari, Ashwin G; Keskar, Avinash G

    2016-02-01

    In computer-aided diagnosis (CAD) of the mediolateral oblique (MLO) view of a mammogram, the accuracy of tissue segmentation depends highly on the exclusion of the pectoral muscle. Robust methods for such exclusion are essential, as the normal presence of pectoral muscle can bias the decision of CAD. In this paper, a novel texture gradient-based approach for automatic segmentation of the pectoral muscle is proposed. The pectoral edge is initially approximated to a straight line by applying a Hough transform on the Probable Texture Gradient (PTG) map of the mammogram, followed by block averaging with the aid of the approximated line. Furthermore, a smooth pectoral muscle curve is achieved with the proposed Euclidean Distance Regression (EDR) technique and polynomial modeling. The algorithm is robust to texture and overlapping fibroglandular tissues. The method is validated with 340 MLO views from three databases, including 200 randomly selected scanned film images from miniMIAS, 100 computed radiography images and 40 full-field digital mammogram images. Qualitatively, 96.75% of the pectoral muscles are segmented with an acceptable pectoral score index. The proposed method not only outperforms state-of-the-art approaches but also accurately quantifies the pectoral edge. Thus, its high accuracy and relatively quick processing time clearly justify its suitability for CAD. PMID:26259521

  2. Canonicalization of Feature Parameters for Robust Speech Recognition Based on Distinctive Phonetic Feature (DPF) Vectors

    NASA Astrophysics Data System (ADS)

    Huda, Mohammad Nurul; Ghulam, Muhammad; Fukuda, Takashi; Katsurada, Kouichi; Nitta, Tsuneo

    This paper describes a robust automatic speech recognition (ASR) system with less computation. Acoustic models of a hidden Markov model (HMM)-based classifier include various types of hidden factors such as speaker-specific characteristics, coarticulation, and the acoustic environment. If there exists a canonicalization process that can recover the degraded margin of acoustic likelihoods between correct phonemes and other ones caused by hidden factors, the robustness of ASR systems can be improved. In this paper, we introduce a canonicalization method that is composed of multiple distinctive phonetic feature (DPF) extractors corresponding to each hidden factor canonicalization, and a DPF selector which selects an optimum DPF vector as an input to the HMM-based classifier. The proposed method resolves gender factors and speaker variability, and eliminates noise factors by applying the canonicalization based on the DPF extractors and two-stage Wiener filtering. In experiments on AURORA-2J, the proposed method provides higher word accuracy under clean training and a significant improvement in word accuracy at low signal-to-noise ratios (SNR) under multi-condition training, compared to a standard ASR system with mel-frequency cepstral coefficient (MFCC) parameters. Moreover, the proposed method requires fewer (two-fifths as many) Gaussian mixture components and less memory to achieve accurate ASR.

  3. Making Activity Recognition Robust against Deceptive Behavior

    PubMed Central

    Saeb, Sohrab; Körding, Konrad; Mohr, David C.

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually perform. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  4. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects, such as images, over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
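
    One plausible reading of the LSB-Witness rule can be sketched as follows: the cover's first LSB plane is left untouched, and the second LSB is overwritten to record whether the first LSB already equals the message bit; the receiver then flips the LSB wherever the witness says it disagrees. The pairing rule below is our assumption for illustration, not necessarily the authors' exact scheme.

      import numpy as np

      def embed(cover, bits):
          stego = cover.copy().ravel()
          for i, b in enumerate(bits):
              # Witness bit records agreement between the untouched LSB and b.
              witness = 1 if (stego[i] & 1) == b else 0
              stego[i] = (stego[i] & 0xFD) | (witness << 1)  # write 2nd LSB only
          return stego.reshape(cover.shape)

      def extract(stego, n):
          s = stego.ravel()[:n]
          lsb = s & 1
          witness = (s >> 1) & 1
          return np.where(witness == 1, lsb, 1 - lsb)  # flip where witness says so

      cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
      msg = np.random.randint(0, 2, 16)
      assert (extract(embed(cover, msg), 16) == msg).all()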

  5. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use the activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discount to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  6. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  7. Predicting the Accuracy of Protein–Ligand Docking on Homology Models

    PubMed Central

    BORDOGNA, ANNALISA; PANDINI, ALESSANDRO; BONATI, LAURA

    2011-01-01

    Ligand–protein docking is increasingly used in Drug Discovery. The initial limitations imposed by a reduced availability of target protein structures have been overcome by the use of theoretical models, especially those derived by homology modeling techniques. While this greatly extended the use of docking simulations, it also introduced the need for general and robust criteria to estimate the reliability of docking results given the model quality. To this end, a large-scale experiment was performed on a diverse set including experimental structures and homology models for a group of representative ligand–protein complexes. A wide spectrum of model quality was sampled using templates at different evolutionary distances and different strategies for target–template alignment and modeling. The obtained models were scored by a selection of the most used model quality indices. The binding geometries were generated using AutoDock, one of the most common docking programs. An important result of this study is that indeed quantitative and robust correlations exist between the accuracy of docking results and the model quality, especially in the binding site. Moreover, state-of-the-art indices for model quality assessment are already an effective tool for an a priori prediction of the accuracy of docking experiments in the context of groups of proteins with conserved structural characteristics. PMID:20607693

  8. Robust neuronal dynamics in premotor cortex during motor planning.

    PubMed

    Li, Nuo; Daie, Kayvon; Svoboda, Karel; Druckmann, Shaul

    2016-04-28

    Neural activity maintains representations that bridge past and future events, often over many seconds. Network models can produce persistent and ramping activity, but the positive feedback that is critical for these slow dynamics can cause sensitivity to perturbations. Here we use electrophysiology and optogenetic perturbations in the mouse premotor cortex to probe the robustness of persistent neural representations during motor planning. We show that preparatory activity is remarkably robust to large-scale unilateral silencing: detailed neural dynamics that drive specific future movements were quickly and selectively restored by the network. Selectivity did not recover after bilateral silencing of the premotor cortex. Perturbations to one hemisphere are thus corrected by information from the other hemisphere. Corpus callosum bisections demonstrated that premotor cortex hemispheres can maintain preparatory activity independently. Redundancy across selectively coupled modules, as we observed in the premotor cortex, is a hallmark of robust control systems. Network models incorporating these principles show robustness that is consistent with data. PMID:27074502

  9. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  10. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  11. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images involved in copyright transactions. Many kinds of digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
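
    A minimal spread-spectrum sketch conveys the flavor of DCT-domain watermarking. The chaos array itself is not specified in the abstract, so a key-seeded pseudo-random generator stands in for it here, and all parameter values are assumptions: embed a ±1 sequence into randomly chosen non-DC DCT coefficients and detect it by correlation.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed(img, key, strength=25.0, n=1024):
          rng = np.random.default_rng(key)        # stand-in for the chaos array
          C = dctn(img.astype(float), norm="ortho")
          flat = rng.choice(C.size - 1, n, replace=False) + 1   # skip DC term
          idx = np.unravel_index(flat, C.shape)
          w = rng.integers(0, 2, n) * 2 - 1       # key-seeded +/-1 sequence
          C[idx] += strength * w
          return idctn(C, norm="ortho"), idx, w

      def detect(img, idx, w):
          # Correlation detector; a receiver regenerates idx, w from the key.
          C = dctn(img.astype(float), norm="ortho")
          return float(C[idx] @ w) / len(w)

      img = np.random.rand(64, 64) * 255
      marked, idx, w = embed(img, key=42)
      print(detect(marked, idx, w), detect(img, idx, w))   # ~25 versus ~0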

  12. Robust Face Sketch Style Synthesis.

    PubMed

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li

    2016-01-01

    Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to make artistic effects for photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs. This requirement limits the generalization ability of these methods to produce arbitrarily stylistic sketches. To handle such a drawback, we propose a robust face sketch style synthesis method, which can convert photos to arbitrarily stylistic sketches based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. In order to further refine the obtained candidate image patches, a multi-feature-based optimization model is introduced. Finally, by assembling the refined candidate image patches, the completed face sketch is obtained. To further enhance the quality of synthesized sketches, a cascaded regression strategy is adopted. Compared with the state-of-the-art face sketch synthesis methods, experimental results on several commonly used face sketch databases and celebrity photos demonstrate the effectiveness of the proposed method. PMID:26595919

  13. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693

  14. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often corrupted by noise, which makes them difficult to segment with traditional level set based segmentation models. In this paper, we propose a robust level set image segmentation model that combines local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) information of an input image, which can efficiently reduce the effects of noise, and the global fitting energy utilizes the correntropy-based K-means (CK) method, which can adaptively emphasize samples that are close to their corresponding cluster centers. By integrating the advantages of global information and local robust statistics characteristics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. A level set regularization term is then used to avoid re-initialization procedures in the process of curve evolution. In addition, a Gaussian filter is utilized to keep the level set smooth during curve evolution. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results show the advantages of our model in terms of accuracy and robustness to noise. In particular, our method has been applied to synthetic and real images with desirable results.

  15. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  16. Step Detection Robust against the Dynamics of Smartphones.

    PubMed

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
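
    A simplified sketch of the peak half of such a detector (valley pairing omitted): candidate peaks in the acceleration magnitude must clear an adaptive magnitude threshold built from the recent step average and deviation, and a minimum peak spacing plays the role of the temporal threshold. All constants are illustrative, not taken from the paper.

      # Simplified adaptive peak-based step counter (illustrative constants).
      import numpy as np

      def count_steps(acc_mag, fs, min_interval_s=0.25):
          peaks, last_t = [], -np.inf
          mean, dev = np.mean(acc_mag), np.std(acc_mag)
          for t in range(1, len(acc_mag) - 1):
              is_peak = acc_mag[t - 1] < acc_mag[t] >= acc_mag[t + 1]
              if (is_peak and acc_mag[t] > mean + 0.5 * dev
                      and (t - last_t) / fs >= min_interval_s):
                  peaks.append(t)
                  last_t = t
                  # adapt the thresholds to the recent gait history
                  recent = acc_mag[max(0, t - int(2 * fs)):t + 1]
                  mean, dev = np.mean(recent), np.std(recent)
          return len(peaks)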

  17. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing. PMID:26529374

  18. Robust reflective pupil slicing technology

    NASA Astrophysics Data System (ADS)

    Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.

    2014-07-01

    Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes, but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications, performing the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation, this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly

  19. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  1. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  2. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5 b.y..-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymer and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  3. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby extracting the bearing bias as a function of frequency.

  4. Affecting speed and accuracy in perception.

    PubMed

    Bocanegra, Bruno R

    2014-12-01

    An account of affective modulations in perceptual speed and accuracy (ASAP: Affecting Speed and Accuracy in Perception) is proposed and tested. This account assumes an emotion-induced inhibitory interaction between parallel channels in the visual system that modulates the onset latencies and response durations of visual signals. By trading off speed and accuracy between channels, this mechanism achieves (a) fast visuo-motor responding to coarse-grained information, and (b) accurate visuo-attentional selection of fine-grained information. ASAP gives a functional account of previously counterintuitive findings, and may be useful for explaining affective influences in both featural-level single-stimulus tasks and object-level multistimulus tasks. PMID:24853268

  5. Mutational robustness emerges in a microscopic model of protein evolution

    NASA Astrophysics Data System (ADS)

    Zeldovich, Konstantin; Shakhnovich, Eugene

    2009-03-01

    The ability to absorb mutations while retaining structure and function, or mutational robustness, is a remarkable property of natural proteins. We use a computational model of organismic evolution [Zeldovich et al, PLOS Comp Biol 3(7):e139 (2007)], which explicitly couples protein physics and population dynamics, to study mutational robustness of evolved model proteins. We compare evolved sequences with the ones designed to fold into the same native structures and having the same thermodynamic stability, and find that evolved sequences are more robust against point mutations, being less likely to be destabilized, and more likely to increase stability upon a point mutation. These results point to sequence evolution as an important method of protein engineering if mutational robustness of the artificially developed proteins is desired. On the biological side, mutational robustness of proteins appears to be a natural consequence of the divergence-mutation-selection evolutionary process.

  6. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  7. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review

    PubMed Central

    van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M

    2015-01-01

    Objective Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued

  8. Diagnostic Accuracy of Xpert Test in Tuberculosis Detection: A Systematic Review and Meta-analysis

    PubMed Central

    Kaur, Ravdeep; Kachroo, Kavita; Sharma, Jitendar Kumar; Vatturi, Satyanarayana Murthy; Dang, Amit

    2016-01-01

    Background: World Health Organization (WHO) recommends the use of Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was done to know about the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in the following databases: Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar for relevant studies published between 2010 and December 2014. Studies given in the systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of quality of included studies were performed independently by two reviewers. Studies evaluating the diagnostic accuracy of Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection were included in the review. Also, studies that had assessed the diagnostic accuracy of Xpert MTB/RIF assay using sputum and other respiratory specimens were included. Results: The included studies had a low risk of any form of bias, showing that findings are of high scientific validity and credibility. Quantitative analysis of 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and detection of rifampicin resistance. Conclusion: Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis as compared to conventional tests like culture and microscopic examination. PMID:27013842

  9. Robust efficient estimation of heart rate pulse from video

    PubMed Central

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
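
    The signal path can be sketched as follows, assuming a per-frame mean intensity trace over a skin region is already available; the log-space quotient below is a simplification of the paper's skin-interaction model, and the rate is read off as the dominant spectral peak in a plausible heart-rate band.

      # Pulse-rate sketch: log-space quotient signal + FFT peak (simplified).
      import numpy as np

      def pulse_bpm(mean_intensity, fps):
          s = np.diff(np.log(mean_intensity))     # quotient of frames in log space
          s = s - s.mean()
          freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
          spec = np.abs(np.fft.rfft(s))
          band = (freqs >= 0.7) & (freqs <= 4.0)  # ~42-240 bpm
          return 60.0 * freqs[band][spec[band].argmax()]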

  10. GPS baseline configuration design based on robustness analysis

    NASA Astrophysics Data System (ADS)

    Yetkin, M.; Berber, M.

    2012-11-01

    The robustness analysis results obtained from a Global Positioning System (GPS) network are dramatically influenced by the configuration of the observed baselines. The selection of optimal GPS baselines may allow for a cost-effective survey campaign and a sufficiently robust network. Furthermore, using the approach described in this paper, the required number of sessions, the baselines to be observed, and the significance levels for statistical testing and robustness analysis can be determined even before the GPS campaign starts. In this study, we propose a robustness criterion for the optimal design of geodetic networks, and present a very simple and efficient algorithm based on this criterion for the selection of optimal GPS baselines. We also show the relationship between the number of sessions and the non-centrality parameter. Finally, a numerical example is given to verify the efficacy of the proposed approach.

  11. High accuracy fuel flowmeter

    NASA Technical Reports Server (NTRS)

    1986-01-01

    All three flowmeter concepts (vortex, dual turbine, and angular momentum) were subjected to experimental and analytical investigation to determine the potential prototype performance. The three concepts were subjected to a comprehensive rating. Eight parameters of performance were evaluated on a zero-to-ten scale, weighted, and summed. The relative ratings of the vortex, dual turbine, and angular momentum flowmeters are 0.71, 1.00, and 0.95, respectively. The dual turbine flowmeter concept was selected as the primary candidate and the angular momentum flowmeter as the secondary candidate for prototype development and evaluation.

  12. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615
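
    A neutrality score of the kind RSRE computes can be sketched as the fraction of one-point mutants whose predicted secondary structure is unchanged. The snippet assumes the ViennaRNA Python bindings are installed and compares structures by simple string identity; RSRE itself offers several comparison methods, randomization controls, and sub-optimal structures, all omitted here.

      # Neutrality-style robustness score (assumes ViennaRNA Python bindings).
      import RNA

      def neutrality(seq):
          wild_structure, _ = RNA.fold(seq)
          same, total = 0, 0
          for i, base in enumerate(seq):
              for b in "ACGU":
                  if b != base:
                      mut_structure, _ = RNA.fold(seq[:i] + b + seq[i + 1:])
                      same += (mut_structure == wild_structure)
                      total += 1
          return same / total      # 1.0 = perfectly robust to point mutations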

  13. Pixel-level robust digital image correlation.

    PubMed

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2013-12-01

    Digital Image Correlation (DIC) is a well-established non-contact optical metrology method. It employs digital image analysis to extract the full-field displacements and strains that occur in objects subjected to external stresses. Despite recent DIC progress, many problematic areas that greatly affect accuracy and can seldom be avoided have received very little attention. Problems posed by the presence of sharp displacement discontinuities, reflections, object borders or edges can be linked to the analysed object's properties and deformation. Other problematic areas, such as image noise, localized reflections or shadows, are related more to the image acquisition process. This paper proposes a new subset-based pixel-level robust DIC method for in-plane displacement measurement which addresses all of these problems in a straightforward and unified approach, significantly improving DIC measurement accuracy compared to classic approaches. The proposed approach minimizes a robust energy functional which adaptively weighs pixel differences in the motion estimation process. The aim is to limit the negative influence of pixels that present erroneous or inconsistent motions by enforcing local motion consistency. The proposed method is compared to the classic Newton-Raphson DIC method in terms of displacement accuracy in three experiments. The first experiment is numerical and presents three combined problems: sharp displacement discontinuities, missing image information and image noise. The second experiment is a real experiment in which a plastic specimen is developing a lateral crack due to the application of uniaxial stress. The region around the crack presents both reflections that saturate the image intensity levels, leading to missing image information, as well as sharp motion discontinuities due to the plastic film rupturing. The third experiment compares the proposed and classic DIC approaches with generic computer vision optical flow methods using images from
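
    The reweighting idea can be illustrated with a toy robust block matcher: pixel differences enter the cost through a saturating Welsch-type kernel, so saturated reflections or discontinuity pixels contribute a bounded penalty instead of dominating the match. This brute-force integer-displacement sketch is only an illustration, not the paper's subset-based Newton-Raphson formulation.

      # Toy robust block matching with a bounded (Welsch-type) pixel cost.
      import numpy as np

      def robust_match(ref, img, x, y, size=21, search=5, c=20.0):
          # assumes the patch plus search range lies inside both images
          p = ref[y:y + size, x:x + size].astype(float)
          best_cost, best_uv = np.inf, (0, 0)
          for v in range(-search, search + 1):
              for u in range(-search, search + 1):
                  q = img[y + v:y + v + size, x + u:x + u + size].astype(float)
                  r = q - p
                  cost = np.mean(1.0 - np.exp(-(r / c) ** 2))  # saturates for outliers
                  if cost < best_cost:
                      best_cost, best_uv = cost, (u, v)
          return best_uv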

  14. Discrimination networks for maximum selection.

    PubMed

    Jain, Brijnesh J; Wysotzki, Fritz

    2004-01-01

    We construct a novel discrimination network using differentiating units for maximum selection. In contrast to traditional competitive architectures like MAXNET, the discrimination network not only signals the winning unit, but also provides information about its evidence. In particular, we show that a discrimination network converges to a stable state within finite time and derive three characteristics: intensity normalization (P1), contrast enhancement (P2), and evidential response (P3). In order to improve the accuracy of the evidential response we incorporate distributed redundancy into the network. This leads to a system which is not only robust against failure of single units and noisy data, but also enables us to sharpen the focus on the problem given in terms of a more accurate evidential response. The proposed discrimination network can be regarded as a connectionist model for competitive learning by evidence. PMID:14690714

  15. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all of its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  16. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  17. Selection Intensity in Genetic Algorithms with Generation Gaps

    SciTech Connect

    Cantu-Paz, E.

    2000-01-19

    This paper presents calculations of the selection intensity of common selection and replacement methods used in genetic algorithms (GAs) with generation gaps. The selection intensity measures the increase of the average fitness of the population after selection, and it can be used to predict the average fitness of the population at each iteration as well as the number of steps until the population converges to a unique solution. In addition, the theory explains the fast convergence of some algorithms with small generation gaps. The accuracy of the calculations was verified experimentally with a simple test function. The results of this study facilitate comparisons between different algorithms, and provide a tool to adjust the selection pressure, which is indispensable to obtain robust algorithms.
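
    The quantity itself is easy to compute empirically. As a hedged illustration, the sketch below measures the selection intensity of simple truncation selection on a random fitness sample; the standardized gain matches the analytic value of about 0.798 when the top half of a normal population is kept.

      # Empirical selection intensity: standardized fitness gain after selection.
      import numpy as np

      def selection_intensity(f, truncation=0.5):
          cut = np.quantile(f, 1.0 - truncation)   # keep the top `truncation` share
          return (f[f >= cut].mean() - f.mean()) / f.std()

      fitness = np.random.default_rng(1).normal(size=100_000)
      print(selection_intensity(fitness, truncation=0.5))  # ~0.798 for N(0, 1)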

  18. Robust scanner identification based on noise features

    NASA Astrophysics Data System (ADS)

    Gou, Hongmei; Swaminathan, Ashwin; Wu, Min

    2007-02-01

    A large portion of digital image data available today is acquired using digital cameras or scanners. While cameras allow digital reproduction of natural scenes, scanners are often used to capture hardcopy art in more controlled scenarios. This paper proposes a new technique for non-intrusive scanner model identification, which can be further extended to perform tampering detection on scanned images. Using only scanned image samples that contain arbitrary content, we construct a robust scanner identifier to determine the brand/model of the scanner used to capture each scanned image. The proposed scanner identifier is based on statistical features of scanning noise. We first analyze scanning noise from several angles, including through image de-noising, wavelet analysis, and neighborhood prediction, and then obtain statistical features from each characterization. Experimental results demonstrate that the proposed method can effectively identify the correct scanner brands/models with high accuracy.

  19. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head, compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in
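
    The convergence logic can be sketched as a loop skeleton: iterate at each pyramid level until the mean change in the deformation field between iterations drops below a tolerance, then advance to the next finer level. The demons_step function below is a placeholder stand-in, not the symmetric Demons update, and the inter-level field upsampling is omitted.

      # Skeleton of multiscale registration with a field-change convergence test.
      import numpy as np

      def demons_step(field, fixed, moving):
          return 0.9 * field                       # placeholder, NOT real Demons

      def multiscale_register(pyramid, tol=1e-3, max_iter=200):
          # pyramid: list of (fixed, moving) image pairs, coarse to fine
          field = np.random.default_rng(0).normal(size=pyramid[0][0].shape)
          for fixed, moving in pyramid:
              for _ in range(max_iter):
                  new = demons_step(field, fixed, moving)
                  change = np.mean(np.abs(new - field))
                  field = new
                  if change < tol:                 # the convergence criterion
                      break
              # upsample `field` to the next pyramid level here (omitted)
          return field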

  20. Channel Selection Methods for the P300 Speller

    PubMed Central

    Colwell, K. A.; Ryan, D. B.; Throckmorton, C. S.; Sellers, E. W.; Collins, L. M.

    2014-01-01

    The P300 Speller brain-computer interface (BCI) allows a user to communicate without muscle activity by reading electrical signals on the scalp via electroencephalogram. Modern BCI systems use multiple electrodes (“channels”) to collect data, which has been shown to improve speller accuracy; however, system cost and setup time can increase substantially with the number of channels in use, so it is in the user’s interest to use a channel set of modest size. This constraint increases the importance of using an effective channel set, but current systems typically utilize the same channel montage for each user. We examine the effect of active channel selection for individuals on speller performance, using generalized standard feature-selection methods, and present a new channel selection method, termed Jumpwise Regression, that extends the Stepwise Linear Discriminant Analysis classifier. Simulating the selections of each method on real P300 Speller data, we obtain results demonstrating that active channel selection can improve speller accuracy for most users relative to a standard channel set, with particular benefit for users who experience low performance using the standard set. Of the methods tested, Jumpwise Regression offers accuracy gains similar to the best-performing feature-selection methods, and is robust enough for online use. PMID:24797224
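
    A generic greedy forward selection loop conveys the flavor of such channel selection, scoring each candidate channel set by cross-validated classification accuracy. This is not the Jumpwise Regression algorithm itself, which also allows removal steps against a stepwise-regression criterion; the feature layout below is an assumption.

      # Greedy forward channel selection by cross-validated accuracy (generic).
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def forward_select(features, y, n_keep=8):
          # features: (trials, channels, samples_per_channel), y: (trials,)
          n_channels = features.shape[1]
          chosen, remaining = [], list(range(n_channels))
          while remaining and len(chosen) < n_keep:
              def score(c):
                  X = features[:, chosen + [c], :].reshape(len(y), -1)
                  clf = LinearDiscriminantAnalysis()
                  return cross_val_score(clf, X, y, cv=5).mean()
              best = max(remaining, key=score)
              chosen.append(best)
              remaining.remove(best)
          return chosen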

  1. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  2. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
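
    The risk-averse criterion is straightforward to evaluate on sampled costs: the conditional value-at-risk (CVaR) at level alpha is the mean of the worst (1 - alpha) tail of the cost distribution, so minimizing it penalizes protocols whose cost blows up under unlucky experimental variation. A minimal sketch with made-up numbers:

      # CVaR of a sampled cost distribution (made-up numbers).
      import numpy as np

      def cvar(costs, alpha=0.95):
          cutoff = np.quantile(costs, alpha)       # value-at-risk threshold
          return costs[costs >= cutoff].mean()     # mean of the worst tail

      costs = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.5, size=10_000)
      print(f"mean cost = {costs.mean():.2f}, CVaR(0.95) = {cvar(costs):.2f}")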

  3. Dosimetry robustness with stochastic optimization

    NASA Astrophysics Data System (ADS)

    Nohadani, Omid; Seco, Joao; Martin, Benjamin C.; Bortfeld, Thomas

    2009-06-01

    All radiation therapy treatment planning relies on accurate dose calculation. Uncertainties in dosimetric prediction can significantly degrade an otherwise optimal plan. In this work, we introduce a robust optimization method which handles dosimetric errors and ensures high-quality IMRT plans. Unlike other dose error estimations, we do not rely on detailed knowledge about the sources of the uncertainty and use a generic error model based on random perturbation. This generality is sought in order to cope with a large variety of error sources. We demonstrate the method on a clinical case of lung cancer and show that our method provides plans that are more robust against dosimetric errors and are clinically acceptable. In fact, the robust plan exhibits a two-fold improved equivalent uniform dose compared to the non-robust but optimized plan. The achieved speedup will allow computationally extensive multi-criteria or beam-angle optimization approaches to ensure dosimetrically relevant plans.

  4. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  5. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; this result, its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications now constitute an extensive and coherent theory of robust parametric stability that is summarized in the results contained here.
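
    The classical starting point admits a compact illustration. For an interval polynomial with independent coefficient bounds, Kharitonov's theorem reduces robust Hurwitz stability to checking four fixed vertex polynomials; the sketch below (ascending coefficient order, stability checked numerically via roots) covers only this classical case, not the multilinear generalization discussed in the report.

      # Kharitonov test for an interval polynomial (classical case only).
      import numpy as np

      def kharitonov_stable(lo, hi):
          # lo, hi: bounds on coefficients a_0..a_n, ascending powers of s
          patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
          bounds = (lo, hi)
          for pat in patterns:
              coeffs = [bounds[pat[i % 4]][i] for i in range(len(lo))]
              roots = np.roots(coeffs[::-1])       # np.roots expects descending
              if np.any(roots.real >= 0):
                  return False                     # a vertex polynomial is unstable
          return True

      # family s^3 + [2,3] s^2 + [4,5] s + [1,2]; prints True (robustly stable)
      print(kharitonov_stable(lo=[1, 4, 2, 1], hi=[2, 5, 3, 1]))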

  6. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  7. Accurate and robust estimation of camera parameters using RANSAC

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He

    2013-03-01

    Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulated and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that the proposed method is robust under large-noise conditions and quite efficient at improving the calibration accuracy compared with the original approach.
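
    The outlier-rejection core is the standard RANSAC loop: repeatedly fit a model to a minimal random sample, count the points it explains within a threshold, and keep the largest consensus set. The sketch below uses a 2D line model for brevity; in the paper the same loop is applied to calibration feature points with a reprojection-error residual.

      # Generic RANSAC consensus loop, shown on a simple 2D line model.
      import numpy as np

      def ransac_line(pts, n_iter=500, thresh=1.0, seed=0):
          # pts: (n, 2) array of candidate feature points
          rng = np.random.default_rng(seed)
          best = np.zeros(len(pts), dtype=bool)
          for _ in range(n_iter):
              i, j = rng.choice(len(pts), size=2, replace=False)
              d = pts[j] - pts[i]
              norm = np.hypot(d[0], d[1])
              if norm == 0.0:
                  continue
              # perpendicular distance of all points to the line through i and j
              dist = np.abs(d[0] * (pts[:, 1] - pts[i, 1])
                            - d[1] * (pts[:, 0] - pts[i, 0])) / norm
              inliers = dist < thresh
              if inliers.sum() > best.sum():
                  best = inliers
          return best                              # refit on these inliers afterwards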

  8. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060

  9. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret based robustness metrics compare the
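
    The two metric families can be contrasted on a small policy-by-future performance matrix; the numbers below are made up purely for illustration. A satisficing metric scores each policy by the fraction of futures in which it meets a performance threshold, while a regret metric scores it by its worst-case shortfall relative to the best policy in each future, and the two can rank the same candidates differently.

      # Satisficing vs. minimax-regret scoring on a made-up cost matrix.
      import numpy as np

      cost = np.array([[3.0, 4.0, 9.0],     # rows: candidate policies
                       [5.0, 5.0, 5.0],     # columns: plausible futures
                       [2.0, 6.0, 8.0]])

      satisficing = np.mean(cost <= 5.0, axis=1)   # share of futures with ok cost
      regret = cost - cost.min(axis=0)             # shortfall per future
      minimax_regret = regret.max(axis=1)

      print("satisficing score:", satisficing)     # higher is better
      print("maximum regret   :", minimax_regret)  # lower is better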

  10. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  11. Robustness of airline alliance route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Simo, Pep; Gonzalez-Prieto, David

    2015-05-01

    The aim of this study is to analyze the robustness of the three major airline alliances' (i.e., Star Alliance, oneworld and SkyTeam) route networks. Firstly, the normalization of a multi-scale measure of vulnerability is proposed in order to perform the analysis in networks with different sizes, i.e., number of nodes. An alternative node selection criterion is also proposed in order to study the robustness and vulnerability of such complex networks, based on network efficiency. Lastly, a new procedure - the inverted adaptive strategy - is presented to sort the nodes in order to anticipate network breakdown. Finally, the robustness of the three alliance networks is analyzed with (1) a normalized multi-scale measure of vulnerability, (2) an adaptive strategy based on four different criteria and (3) an inverted adaptive strategy based on the efficiency criterion. The results show that Star Alliance has the most resilient route network, followed by SkyTeam and then oneworld. It was also shown that the inverted adaptive strategy based on the efficiency criterion - inverted efficiency - shows great success in quickly breaking networks, similar to that found with the betweenness criterion but with even better results.

  12. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods. PMID:25291733

  13. Towards robust topology of sparsely sampled data.

    PubMed

    Correa, Carlos D; Lindstrom, Peter

    2011-12-01

    Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods--a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis. PMID:22034302
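
    A classic member of the empty region graph family is the Gabriel graph: two points are joined exactly when the disc whose diameter is the segment between them contains no third point. The paper's graphs are a variation on and a generalization of this principle; the brute-force sketch below just shows the empty-region test itself.

      # Gabriel graph: a classic empty region graph (brute-force construction).
      import numpy as np

      def gabriel_edges(pts):
          edges = []
          n = len(pts)
          for i in range(n):
              for j in range(i + 1, n):
                  mid = (pts[i] + pts[j]) / 2.0
                  r2 = np.sum((pts[i] - pts[j]) ** 2) / 4.0
                  d2 = np.sum((pts - mid) ** 2, axis=1)
                  d2[[i, j]] = np.inf              # ignore the two endpoints
                  if np.all(d2 > r2):              # diametral disc is empty
                      edges.append((i, j))
          return edges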

  14. S/HIC: Robust Identification of Soft and Hard Sweeps Using Machine Learning

    PubMed Central

    Schrider, Daniel R.; Kern, Andrew D.

    2016-01-01

    Detecting the targets of adaptive natural selection from whole genome sequencing data is a central problem for population genetics. However, to date most methods have shown sub-optimal performance under realistic demographic scenarios. Moreover, over the past decade there has been a renewed interest in determining the importance of selection from standing variation in adaptation of natural populations, yet very few methods for inferring this model of adaptation at the genome scale have been introduced. Here we introduce a new method, S/HIC, which uses supervised machine learning to precisely infer the location of both hard and soft selective sweeps. We show that S/HIC has unrivaled accuracy for detecting sweeps under demographic histories that are relevant to human populations, and distinguishing sweeps from linked as well as neutrally evolving regions. Moreover, we show that S/HIC is uniquely robust among its competitors to model misspecification. Thus, even if the true demographic model of a population differs catastrophically from that specified by the user, S/HIC still retains impressive discriminatory power. Finally, we apply S/HIC to the case of resequencing data from human chromosome 18 in a European population sample, and demonstrate that we can reliably recover selective sweeps that have been identified earlier using less specific and sensitive methods. PMID:26977894

  15. Uneven Genetic Robustness of HIV-1 Integrase

    PubMed Central

    Rihn, Suzannah J.; Hughes, Joseph; Wilson, Sam J.

    2014-01-01

    ABSTRACT Genetic robustness (tolerance of mutation) may be a naturally selected property in some viruses, because it should enhance adaptability. Robustness should be especially beneficial to viruses like HIV-1 that exhibit high mutation rates and exist in immunologically hostile environments. Surprisingly, however, the HIV-1 capsid protein (CA) exhibits extreme fragility. To determine whether fragility is a general property of HIV-1 proteins, we created a large library of random, single-amino-acid mutants in HIV-1 integrase (IN), covering >40% of amino acid positions. Despite similar degrees of sequence variation in naturally occurring IN and CA sequences, we found that HIV-1 IN was significantly more robust than CA, with random nonsilent IN mutations only half as likely to cause lethal defects. Interestingly, IN and CA were similar in that a subset of mutations with high in vitro fitness were rare in natural populations. IN mutations of this type were more likely to occur in the buried interior of the modeled HIV-1 intasome, suggesting that even very subtle fitness effects suppress variation in natural HIV-1 populations. Lethal mutations, in particular those that perturbed particle production, proteolytic processing, and particle-associated IN levels, were strikingly localized at specific IN subunit interfaces. This observation strongly suggests that binding interactions between particular IN subunits regulate proteolysis during HIV-1 virion morphogenesis. Overall, use of the IN mutant library in conjunction with structural models demonstrates the overall robustness of IN and highlights particular regions of vulnerability that may be targeted in therapeutic interventions. IMPORTANCE The HIV-1 integrase (IN) protein is responsible for the integration of the viral genome into the host cell chromosome. To measure the capacity of IN to maintain function in the face of mutation, and to probe structure/function relationships, we created a library of random single

  16. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation and the robustness of the method planning and springback are considered to be the main factors that influence the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first. These principles consist of an accurate full-cycle simulation with final validation settings; robust process design and optimization are then discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy is put forward based on the analysis of springback, and simulation-based springback compensation is introduced in the process design phase. In the end, the final verification and comparison in tryout and production are given, which verified that the methodology of robust springback compensation is effective during die development.

  17. Robust image registration of biological microscopic images.

    PubMed

    Wang, Ching-Wei; Ka, Shuk-Man; Chen, Ann

    2014-01-01

    Image registration of biological data is challenging because complex deformation problems are common. Deformations can arise during individual data preparation processes and involve morphological deformation, stain variation, stain artifacts, rotation, translation, and missing tissue. The combined deformation effects tend to make existing automatic registration methods perform poorly. In our experiments on serial histopathological images, six state-of-the-art image registration techniques, including TrakEM2, SURF + affine transformation, UnwarpJ, bUnwarpJ, CLAHE + bUnwarpJ and BrainAligner, achieved no greater than 70% averaged accuracy, while the proposed method achieved 91.49% averaged accuracy. The proposed method was also demonstrated to be significantly better at aligning laser scanning microscope brain images and serial ssTEM images than the benchmark automatic approaches (p < 0.001). The contribution of this study is to introduce a fully automatic, robust and fast method for 2D image registration. PMID:25116443

  18. Robust coarticulatory modeling for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Schwartz, R.; Chow, Y. L.; Dunham, M. O.; Kimball, O.; Krasner, M.; Kubala, F.; Makhoul, J.; Price, P.; Roucos, S.

    1986-10-01

    The purpose of this project is to perform research into algorithms for the automatic recognition of individual sounds, or phonemes, in continuous speech. The algorithms developed should be appropriate for understanding large-vocabulary continuous speech input and are to be made available to the Strategic Computing Program for incorporation in a complete word recognition system. This report describes progress to date in developing phonetic models that are appropriate for continuous speech recognition. In continuous speech, the acoustic realization of each phoneme depends heavily on the preceding and following phonemes, a process known as coarticulation. Thus, while there are relatively few phonemes in English (on the order of fifty or so), the number of possible different acoustic realizations is in the thousands. Therefore, to develop high-accuracy recognition algorithms, one may need to develop literally thousands of relatively distinct phonetic models to represent the various phonetic contexts adequately. Developing a large number of models usually necessitates having a large amount of speech to provide reliable estimates of the model parameters. The major contributions of this work are the development of: (1) a simple but powerful formalism for modeling phonemes in context; (2) robust training methods for the reliable estimation of model parameters by utilizing the available speech training data in a maximally effective way; and (3) efficient search strategies for phonetic recognition that maintain high recognition accuracy.

  19. Highly Fluorinated Ir(III)-2,2':6',2″-Terpyridine-Phenylpyridine-X Complexes via Selective C-F Activation: Robust Photocatalysts for Solar Fuel Generation and Photoredox Catalysis.

    PubMed

    Porras, Jonathan A; Mills, Isaac N; Transue, Wesley J; Bernhard, Stefan

    2016-08-01

    A series of fluorinated Ir(III)-terpyridine-phenylpyridine-X (X = anionic monodentate ligand) complexes were synthesized by selective C-F activation, whereby perfluorinated phenylpyridines were readily complexed. The combination of fluorinated phenylpyridine ligands with an electron-rich tri-tert-butyl terpyridine ligand generates a "push-pull" force on the electrons upon excitation, imparting significant enhancements to the stability, electrochemical, and photophysical properties of the complexes. Application of the complexes as photosensitizers for photocatalytic generation of hydrogen from water and as redox photocatalysts for decarboxylative fluorination of several carboxylic acids showcases the performance of the complexes in highly coordinating solvents, in some cases exceeding that of the leading photosensitizers. Changes in the photophysical properties and the nature of the excited states are observed as the compounds increase in fluorination as well as upon exchange of the ancillary chloride ligand to a cyanide. These changes in the excited states have been corroborated using density functional theory modeling. PMID:27387149

  20. Three-dimensional robust diving guidance for hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Zhu, Jianwen; Liu, Luhua; Tang, Guojian; Bao, Weimin

    2016-01-01

    A novel three-dimensional robust guidance law based on H∞ filtering and H∞ control is proposed to meet the constraints on impact accuracy and flight direction under process disturbances in the dive phase of a hypersonic vehicle. Complete three-dimensional coupled relative motion equations are established and transformed into linear ones by feedback linearization to simplify the design of the guidance law. Based on the linearized equations, an H∞ filter is introduced to eliminate the measurement noise of the line-of-sight angles and estimate the angular rates. Furthermore, H∞ robust control is employed to design the guidance law, and the filtered information is used to generate guidance commands that meet the guidance goal accurately and robustly. Simulation results for CAV-H indicate that the proposed three-dimensional equations describe the coupling character more clearly than traditional decoupled guidance, and that the proposed guidance strategy can guide the vehicle to satisfy multiple constraints with high accuracy and robustness.

  1. The efficacy of bedside chest ultrasound: from accuracy to outcomes.

    PubMed

    Hew, Mark; Tay, Tunn Ren

    2016-09-01

    For many respiratory physicians, point-of-care chest ultrasound is now an integral part of clinical practice. The diagnostic accuracy of ultrasound to detect abnormalities of the pleura, the lung parenchyma and the thoracic musculoskeletal system is well described. However, the efficacy of a test extends beyond just diagnostic accuracy. The true value of a test depends on the degree to which diagnostic accuracy efficacy influences decision-making efficacy, and the subsequent extent to which this impacts health outcome efficacy. We therefore reviewed the demonstrable levels of test efficacy for bedside ultrasound of the pleura, lung parenchyma and thoracic musculoskeletal system. For bedside ultrasound of the pleura, there is evidence supporting diagnostic accuracy efficacy, decision-making efficacy and health outcome efficacy, predominantly in guiding pleural interventions. For the lung parenchyma, chest ultrasound has an impact on diagnostic accuracy and decision-making for patients presenting with acute respiratory failure or breathlessness, but there are no data as yet on actual health outcomes. For ultrasound of the thoracic musculoskeletal system, there is robust evidence only for diagnostic accuracy efficacy. We therefore outline avenues to further validate bedside chest ultrasound beyond diagnostic accuracy, with an emphasis on confirming enhanced health outcomes. PMID:27581823

  2. Utilization of highly robust and selective crosslinked polymeric ionic liquid-based sorbent coatings in direct-immersion solid-phase microextraction and high-performance liquid chromatography for determining polar organic pollutants in waters.

    PubMed

    Pacheco-Fernández, Idaira; Najafi, Ali; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M

    2016-09-01

    Several crosslinked polymeric ionic liquid (PIL)-based sorbent coatings of different nature were prepared by UV polymerization onto nitinol wires. They were evaluated in a direct-immersion solid-phase microextraction (DI-SPME) method in combination with high-performance liquid chromatography (HPLC) and diode array detection (DAD). The studied PIL coatings contained either vinyl alkyl or vinylbenzyl imidazolium-based (ViCnIm- or ViBCnIm-) IL monomers with different anions, as well as different dicationic IL crosslinkers. The analytical performance of these PIL-based SPME coatings was first evaluated for the extraction of a group of 10 different model analytes, including hydrocarbons and phenols, while exhaustively comparing the performance with commercial SPME fibers such as polydimethylsiloxane (PDMS), polyacrylate (PA) and polydimethylsiloxane/divinylbenzene (PDMS/DVB), with all fibers used under optimized conditions. The fibers exhibiting high selectivity for polar compounds were selected to develop an analytical method for a group of 5 alkylphenols, including bisphenol-A (BPA) and nonylphenol (n-NP). Under optimum conditions, average relative recoveries of 108% and inter-day precision values (3 non-consecutive days) lower than 19% were obtained for a spiked level of 10 µg L(-1). Correlation coefficients for the overall method ranged between 0.990 and 0.999, and limits of detection were down to 1 µg L(-1). Tap water, river water, and bottled water were analyzed to evaluate matrix effects. A comparison with the PA fiber was also performed in terms of analytical performance. Partition coefficients (log Kfs) of the alkylphenols to the SPME coating varied from 1.69 to 2.45 for the most efficient PIL-based fiber, and from 1.58 to 2.30 for the PA fiber. These results agree with those obtained from the normalized calibration slopes, pointing out the affinity of these PIL-based coatings. PMID:27343586

  3. FTRAC--A robust fluoroscope tracking fiducial

    SciTech Connect

    Jain, Ameet Kumar; Mustafa, Tabish; Zhou, Yu; Burdette, Clif; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-10-15

    C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct three-dimensional (3D) information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the x-ray image, in 3D space. Optical/magnetic trackers tend to be prohibitively expensive, intrusive and cumbersome in many applications. We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of ellipses, lines, and points. This is an improvement over contemporary fiducials, which use only points. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A nonlinear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: small dimensions (3x3x5 cm); need not be close to the anatomy of interest; and accurately segmentable. We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery in phantom experiments had an accuracy of 0.56 mm in translation and 0.33 deg. in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. The method offers accuracies similar to commercial tracking systems, and appears to be sufficiently robust for intraoperative quantitative C-arm fluoroscopy. Simulation experiments indicate that the size can be further reduced to 1x1x2 cm, with only a marginal drop in accuracy.

  4. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032

  5. Uncertainty analysis for regional-scale reserve selection.

    PubMed

    Moilanen, Atte; Wintle, Brendan A; Elith, Jane; Burgman, Mark

    2006-12-01

    Methods for reserve selection and conservation planning often ignore uncertainty. For example, presence-absence observations and predictions of habitat models are used as inputs but commonly assumed to be without error. We applied information-gap decision theory to develop uncertainty analysis methods for reserve selection. Our proposed method seeks a solution that is robust in achieving a given conservation target, despite uncertainty in the data. We maximized robustness in reserve selection through a novel method, "distribution discounting," in which the site- and species-specific measure of conservation value (related to species-specific occupancy probabilities) was penalized by an error measure (in our study, related to accuracy of statistical prediction). Because distribution discounting can be implemented as a modification of input files, it is a computationally efficient solution for implementing uncertainty analysis into reserve selection. Thus, the method is particularly useful for high-dimensional decision problems characteristic of regional conservation assessment. We implemented distribution discounting in the zonation reserve-selection algorithm that produces a hierarchy of conservation priorities throughout the landscape. We applied it to reserve selection for seven priority fauna in a landscape in New South Wales, Australia. The distribution discounting method can be easily adapted for use with different kinds of data (e.g., probability of occurrence or abundance) and different landscape descriptions (grid or patch based) and incorporated into other reserve-selection algorithms and software. PMID:17181804
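
    A minimal sketch (not the paper's implementation) of the distribution-discounting idea: penalize each site- and species-specific conservation value by an error measure, then rank sites. The occupancy probabilities, error measures, and the greedy ranking are hypothetical stand-ins for real model outputs and for the full Zonation algorithm.

```python
import numpy as np

# Hypothetical inputs: occupancy probabilities p[i, j] for site i and
# species j, and se[i, j], an error measure for each prediction.
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=(1000, 7))        # 1000 sites, 7 species
se = rng.uniform(0.0, 0.3, size=p.shape)

alpha = 1.0                                      # discounting (robustness) level
discounted = np.clip(p - alpha * se, 0.0, None)  # penalize uncertain predictions

# Greedy stand-in for the reserve-selection step: rank sites by summed
# discounted conservation value and keep the top 10%.
site_value = discounted.sum(axis=1)
selected = np.argsort(site_value)[::-1][:100]
print(selected[:10])
```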

  6. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
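
    For illustration, the standard Taguchi signal-to-noise ratios referred to above can be computed directly; a short sketch in Python (the replicated response values are invented):

```python
import numpy as np

def sn_larger_the_better(y):
    """Signal-to-noise ratio when larger responses are better (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """Signal-to-noise ratio when smaller responses are better (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_nominal_the_best(y):
    """Signal-to-noise ratio when a target value is best (dB)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

# Example: replicated responses for one row of an orthogonal array.
print(sn_larger_the_better([12.1, 11.8, 12.4]))
```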

  7. Robust adiabatic sum frequency conversion.

    PubMed

    Suchowski, Haim; Prabhudesai, Vaibhav; Oron, Dan; Arie, Ady; Silberberg, Yaron

    2009-07-20

    We discuss theoretically and demonstrate experimentally the robustness of the adiabatic sum frequency conversion method. This technique, borrowed from an analogous scheme of robust population transfer in atomic physics and nuclear magnetic resonance, enables the achievement of nearly full frequency conversion in a sum frequency generation process for a bandwidth up to two orders of magnitude wider than in conventional conversion schemes. We show that this scheme is robust to variations in the parameters of both the nonlinear crystal and of the incoming light. These include the crystal temperature, the frequency of the incoming field, the pump intensity, the crystal length and the angle of incidence. Also, we show that this extremely broad bandwidth can be tuned to higher or lower central wavelengths by changing either the pump frequency or the crystal temperature. The detailed study of the properties of this converter is done using the Landau-Zener theory dealing with the adiabatic transitions in two level systems. PMID:19654679

  8. Robustness of solutions to a benchmark control problem

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Marrison, Christopher I.

    1992-01-01

    The robustness of 10 solutions to a benchmark control design problem presented at the 1990 American Control Conference has been evaluated. The 10 controllers have second-to-eighth-order transfer functions and have been designed using several different methods, including H-infinity optimization, loop-transfer recovery, imaginary-axis shifting, constrained optimization, structured covariance, game theory, and the internal model principle. Stochastic robustness analysis quantifies the controllers' stability and performance robustness with structured uncertainties in up to six system parameters. The analysis provides insights into system response that are not readily derived from other robustness criteria and provides a common ground for judging controllers produced by alternative methods. One important conclusion is that gain and phase margins are not reliable indicators of the probability of instability. Furthermore, parameter variations actually may improve the likelihood of achieving selected performance metrics, as demonstrated by results for the probability of settling-time exceedance.
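
    A minimal sketch of the stochastic robustness idea: sample the uncertain parameters, test closed-loop stability for each draw, and estimate the probability of instability. The toy second-order system and parameter ranges are assumptions for illustration, not the benchmark problem itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def closed_loop_stable(wn, zeta):
    """Stability of a toy second-order closed loop x' = A x.

    Stable iff every eigenvalue of A has a negative real part.
    """
    A = np.array([[0.0, 1.0],
                  [-wn**2, -2.0 * zeta * wn]])
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))

# Structured uncertainty: natural frequency and damping ratio sampled
# from assumed ranges (with a small chance of negative damping).
n = 20_000
wn = rng.uniform(0.5, 2.0, n)
zeta = rng.uniform(-0.05, 0.5, n)

unstable = sum(not closed_loop_stable(w, z) for w, z in zip(wn, zeta))
print(f"estimated probability of instability: {unstable / n:.4f}")
```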

  9. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images

    PubMed Central

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-01-01

    Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, because blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework that is robust to them. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm that makes the VO robust to blurred images. We also carried out comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods under blurred-image conditions while not adding much computational cost to the original VO algorithms. PMID:27399704

  10. Spaceborne SAR data for global urban mapping at 30 m resolution using a robust urban extractor

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Jacob, Alexander; Gamba, Paolo

    2015-05-01

    With more than half of the world population now living in cities and 1.4 billion more people expected to move into cities by 2030, urban areas pose significant challenges to the local, regional and global environment. Timely and accurate information on the spatial distribution and temporal change of urban areas is therefore needed to support sustainable development and environmental change research. The objective of this research is to evaluate spaceborne SAR data for improved global urban mapping using a robust processing chain, the KTH-Pavia Urban Extractor. The processing chain includes urban extraction based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, an existing method, together with several improvements, i.e., SAR data preprocessing, enhancement, and post-processing. ENVISAT Advanced Synthetic Aperture Radar (ASAR) C-VV data at 30 m resolution were selected over 10 global cities and a rural area from six continents to demonstrate the robustness of the improved method. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from ENVISAT ASAR data, and that built-up areas can be mapped at 30 m resolution with very good accuracy using only one or two SAR images. These findings indicate that operational global urban mapping is possible with spaceborne SAR data, especially with the launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.
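
    To illustrate the texture component, a sketch using scikit-image (spelled graycomatrix in recent versions) to compute GLCM features of the kind used for separating built-up from natural areas; the synthetic image is a stand-in for calibrated SAR backscatter.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Synthetic stand-in for a SAR backscatter image, quantized
# to 32 gray levels to keep the co-occurrence matrix compact.
rng = np.random.default_rng(2)
img = rng.gamma(shape=2.0, scale=20.0, size=(64, 64)).clip(0, 255)
img = (img / 8).astype(np.uint8)          # values in 0..31

glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)

# Texture measures of the kind used to separate built-up areas
# (high local contrast) from natural surfaces.
print("contrast:   ", graycoprops(glcm, "contrast").mean())
print("homogeneity:", graycoprops(glcm, "homogeneity").mean())
```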

  11. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
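
    As a simplified illustration of the coarse-then-fine idea (using point rather than line correspondences, so this is not the RLMR algorithm itself): hypothesize a rigid transform from a minimal RANSAC sample, count inliers, then refine on the best consensus set.

```python
import numpy as np

def rigid_from_pairs(p, q):
    """Least-squares 2D rotation + translation mapping points p onto q."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # forbid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, qc - R @ pc

def ransac_rigid(p, q, n_iter=500, thresh=0.1, seed=3):
    """Coarse step: hypothesize from minimal samples, keep the best consensus."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(p), size=2, replace=False)
        R, t = rigid_from_pairs(p[idx], q[idx])
        err = np.linalg.norm(p @ R.T + t - q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_from_pairs(p[best], q[best])  # fine step: refit on inliers

# Toy usage: recover a known transform despite 30% corrupted matches.
rng = np.random.default_rng(4)
p = rng.random((100, 2)) * 10
c, s = np.cos(0.3), np.sin(0.3)
q = p @ np.array([[c, -s], [s, c]]).T + np.array([1.0, -2.0])
q[:30] += rng.normal(0, 5, size=(30, 2))
R, t = ransac_rigid(p, q)
```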

  13. Splitting of overlapping nuclei guided by robust combinations of concavity points

    NASA Astrophysics Data System (ADS)

    Plissiti, Marina E.; Louka, Eleni; Nikou, Christophoros

    2014-03-01

    In this work, we propose a novel and robust method for the accurate separation of overlapping elliptical nuclei in microscopic images. The method is based on both the information provided by the global boundary of the nuclei cluster and the detection of concavity points along this boundary. The number of nuclei and the area of each nucleus included in the cluster are estimated automatically by exploiting the different parts of the cluster boundary demarcated by the concavity points. More specifically, based on the set of concavity points detected in the image of the clustered nuclei, all possible configurations of candidate ellipses that fit them are estimated by least squares fitting. For each configuration, an index measuring the fitting residual is computed, and the configuration providing the minimum error is selected. The method may successfully separate multiple (more than two) clustered nuclei, as the fitting residual is a robust indicator of the number of overlapping elliptical structures even if many erroneous concavity points are present due to noise. Moreover, the algorithm has been evaluated on cytological images of conventional Pap smears and compares favorably with state-of-the-art methods in terms of both accuracy and execution time.
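
    A minimal sketch of the residual-based selection idea: fit a conic to a candidate set of boundary points by algebraic least squares and compare residuals across configurations. The normalization (right-hand side fixed to 1) and the synthetic data are illustrative choices, not the paper's exact fitting procedure.

```python
import numpy as np

def fit_ellipse_lsq(x, y):
    """Algebraic least-squares conic fit a1*x^2 + a2*xy + a3*y^2 + a4*x + a5*y = 1.

    Returns the coefficients and the mean squared algebraic residual, which
    can be compared across candidate concavity-point configurations.
    """
    D = np.column_stack([x**2, x * y, y**2, x, y])
    a, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    residual = np.mean((D @ a - 1.0) ** 2)
    return a, residual

# Boundary segments between concavity points form the candidate point sets;
# the configuration whose ellipses give the smallest total residual suggests
# the number of overlapping nuclei. Synthetic example:
rng = np.random.default_rng(5)
theta = np.linspace(0.0, 2.0 * np.pi, 100)
x = 3.0 * np.cos(theta) + 0.01 * rng.normal(size=100)
y = 1.5 * np.sin(theta) + 0.01 * rng.normal(size=100)
print(fit_ellipse_lsq(x, y)[1])    # near-zero residual for a true ellipse
```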

  14. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  15. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it for future use. PMID:25773127
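
    A sketch of cross-validating a difference-style logistic-regression model on simulated binary choices; the predictors and the choice-generating rule below are invented for illustration and are not the paper's exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical choice data: each trial offers a smaller-sooner reward
# (ss_amount now) versus a larger-later reward (ll_amount after ll_delay days).
rng = np.random.default_rng(5)
n = 100
ss_amount = rng.uniform(5, 40, n)
ll_amount = rng.uniform(20, 80, n)
ll_delay = rng.uniform(1, 180, n)

# "Difference model": predictors are simple differences between the options.
X = np.column_stack([ll_amount - ss_amount, ll_delay])
# Simulated choices from a noisy trade-off (stand-in for real behaviour).
choice_ll = (0.08 * (ll_amount - ss_amount) - 0.02 * ll_delay
             + rng.logistic(size=n)) > 0

model = LogisticRegression()
acc = cross_val_score(model, X, choice_ll, cv=5, scoring="accuracy")
print(acc.mean())    # out-of-sample predictive accuracy, as in the paper's CV
```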

  16. Robust Sliding Window Synchronizer Developed

    NASA Technical Reports Server (NTRS)

    Chun, Kue S.; Xiong, Fuqin; Pinchak, Stanley

    2004-01-01

    The development of an advanced robust timing synchronization scheme is crucial for the support of two NASA programs--Advanced Air Transportation Technologies and Aviation Safety. A mobile aeronautical channel is a dynamic channel where various adverse effects--such as Doppler shift, multipath fading, and shadowing due to precipitation, landscape, foliage, and buildings--cause the loss of symbol timing synchronization.

  17. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes, such as clean streams in a river network becoming contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting on a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness accounts only for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual-perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN), composed of the unaffected nodes, and (ii) the Idle Network (IN), composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building up the IN. This approach is motivated by concrete applied problems since, for example, when studying the dynamics of contamination in river systems it is necessary to know the connectivity of both the healthy and the contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
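
    An illustrative sketch of the dual AN/IN bookkeeping using networkx; the robustness summary (largest-component fractions) and the degree-ranked attack are simplified stand-ins for the metrics and strategies studied in the abstract.

```python
import networkx as nx

def dual_robustness_curve(G, attack_order):
    """Largest-component fraction of the Active Network (unaffected nodes)
    and the Idle Network (removed nodes) after each removal."""
    n_total = len(G)

    def lcc_fraction(H):
        if len(H) == 0:
            return 0.0
        return max(len(c) for c in nx.connected_components(H)) / n_total

    removed, curve = set(), []
    for node in attack_order:
        removed.add(node)
        active = G.subgraph(n for n in G if n not in removed)
        idle = G.subgraph(removed)
        curve.append((lcc_fraction(active), lcc_fraction(idle)))
    return curve

# Example: degree-ranked attack removing the 50 best-connected nodes.
G = nx.erdos_renyi_graph(200, 0.03, seed=6)
order = sorted(G, key=G.degree, reverse=True)[:50]
print(dual_robustness_curve(G, order)[-1])
```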

  18. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  19. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon that may lead to unreliable mean-variance optimized portfolios. This is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can easily be used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature. PMID:26468948

  20. Validation of diagnostic accuracy using digital slides in routine histopathology

    PubMed Central

    2012-01-01

    Background Robust hardware and software tools have been developed in digital microscopy for pathologists during the past years. Reports have advocated the reliability of digital slides in routine diagnostics. We designed a retrospective, comparative study to evaluate the scanning properties and digital slide based diagnostic accuracy. Methods 8 pathologists reevaluated 306 randomly selected cases from our archives. The slides were scanned with a 20× Plan-Apochromat objective, using a 3-chip Hitachi camera, resulting in a resolution of 0.465 μm/pixel. Slide management was supported with dedicated Data Base and Viewer software tools. Pathologists used their office PCs for evaluation and accessed the digital slides via an intranet connection. The diagnostic coherency and uncertainty related to digital slides and scanning quality were analyzed. Results Good to excellent image quality of the slides was recorded in 96% of cases. In half of the 61 critical digital slides, poor image quality was related to section folds or floaters. In 88.2% of the studied cases the digital diagnoses were in full agreement with the consensus. Of the 36 incoherent cases overall, 7 (2.3%) were graded relevant without any recorded uncertainty by the pathologist. Excluding the non-field-specific cases from each pathologist's record, this ratio was 1.76% of all cases. Conclusions Our results revealed that: 1) digital slide based histopathological diagnoses can be highly coherent with those using optical microscopy; 2) the competency of pathologists is a factor more important than the quality of the digital slide; and 3) poor digital slide quality does not endanger patient safety, as such errors are recognizable by the pathologist and further corrective actions can be taken. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1913324336747310. PMID:22463804

  1. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  2. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.
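
    Steps (3) and (4) above can be illustrated with a small computation on a hypothetical contingency table: overall and per-class (user's) accuracies, plus a normal-approximation confidence limit. The matrix values below are invented.

```python
import numpy as np

# Hypothetical confusion matrix: rows = LANDSAT classification,
# columns = verification (reference) data, for four wildland classes.
cm = np.array([[120,  10,   5,   2],
               [  8, 140,  12,   4],
               [  6,   9,  95,  10],
               [  3,   5,   8,  60]])

n = cm.sum()
overall = np.trace(cm) / n                      # overall accuracy
users = np.diag(cm) / cm.sum(axis=1)            # per-class user's accuracy

# Normal-approximation 95% confidence limits for the overall accuracy.
se = np.sqrt(overall * (1 - overall) / n)
print(f"overall = {overall:.3f} +/- {1.96 * se:.3f}")
print("user's accuracies:", np.round(users, 3))
```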

  3. The accuracy of automatic tracking

    NASA Technical Reports Server (NTRS)

    Kastrov, V. V.

    1974-01-01

    It has generally been assumed that tracking accuracy changes in proportion to the rate of change of the measurement conversion curve. This paper considers the problem that internal noise increases along with the signals processed by the tracking device, so that tracking accuracy drops. The main prerequisite for a solution is consideration of the dependence of the output signal of the tracking device sensor not only on the measured parameter but also on the signal itself.

  4. Random Forest (RF) Wrappers for Waveband Selection and Classification of Hyperspectral Data.

    PubMed

    Poona, Nitesh Keshavelal; van Niekerk, Adriaan; Nadel, Ryan Leslie; Ismail, Riyad

    2016-02-01

    Hyperspectral data collected using a field spectroradiometer was used to model asymptomatic stress in Pinus radiata and Pinus patula seedlings infected with the pathogen Fusarium circinatum. Spectral data were analyzed using the random forest algorithm. To improve the classification accuracy of the model, subsets of wavebands were selected using three feature selection algorithms: (1) Boruta; (2) recursive feature elimination (RFE); and (3) area under the receiver operating characteristic curve of the random forest (AUC-RF). Results highlighted the robustness of the above feature selection methods when used in conjunction with the random forest algorithm for analyzing hyperspectral data. Overall, the Boruta feature selection algorithm provided the best results. When discriminating F. circinatum stress in Pinus radiata seedlings, Boruta selected wavebands (n = 69) yielded the best overall classification accuracies (training error of 17.00%, independent test error of 17.00% and an AUC value of 0.91). Classification results were, however, significantly lower for P. patula seedlings, with a training error of 24.00%, independent test error of 38.00%, and an AUC value of 0.65. A hybrid selection method that utilizes combinations of wavebands selected from the three feature selection algorithms was also tested. The hybrid method showed an improvement in classification accuracies for P. patula, and no improvement for P. radiata. The results of this study provide impetus towards implementing a hyperspectral framework for detecting stress within nursery environments. PMID:26903567
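
    A sketch of one wrapper-style waveband selection, using recursive feature elimination with a random forest via scikit-learn (the Boruta algorithm is available separately in the third-party boruta package); the random data below stand in for real spectra.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Random stand-in for spectra: 100 seedlings x 500 wavebands,
# with binary healthy/infected labels.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFECV(rf, step=25, cv=StratifiedKFold(5), scoring="accuracy")
selector.fit(X, y)

print("wavebands retained:", selector.n_features_)
# AUC on an independent test set (sklearn.metrics.roc_auc_score) would
# complete an evaluation in the spirit of the paper.
```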

  5. Robustness of muscle synergies during visuomotor adaptation

    PubMed Central

    Gentner, Reinhard; Edmunds, Timothy; Pai, Dinesh K.; d'Avella, Andrea

    2013-01-01

    During visuomotor adaptation a novel mapping between visual targets and motor commands is gradually acquired. How muscle activation patterns are affected by this process is an open question. We tested whether the structure of muscle synergies is preserved during adaptation to a visuomotor rotation. Eight subjects applied targeted isometric forces on a handle instrumented with a force transducer while electromyographic (EMG) activity was recorded from 13 shoulder and elbow muscles. The recorded forces were mapped into horizontal displacements of a virtual sphere with simulated mass, elasticity, and damping. The task consisted of moving the sphere to a target at one of eight equally spaced directions. Subjects performed three baseline blocks of 32 trials, followed by six blocks with a 45° CW rotation applied to the planar force, and finally three wash-out blocks without the perturbation. The sphere position at 100 ms after movement onset revealed significant directional error at the beginning of the rotation, a gradual learning in subsequent blocks, and aftereffects at the beginning of the wash-out. The change in initial force direction was closely related to the change in directional tuning of the initial EMG activity of most muscles. Throughout the experiment muscle synergies extracted using a non-negative matrix factorization algorithm from the muscle patterns recorded during the baseline blocks could reconstruct the muscle patterns of all other blocks with an accuracy significantly higher than chance indicating structural robustness. In addition, the synergies extracted from individual blocks remained similar to the baseline synergies throughout the experiment. Thus synergy structure is robust during visuomotor adaptation suggesting that changes in muscle patterns are obtained by rotating the directional tuning of the synergy recruitment. PMID:24027524
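
    An illustrative sketch of the synergy analysis: extract non-negative synergies from baseline EMG with scikit-learn's NMF, then reconstruct other blocks using the fixed baseline synergies and score the reconstruction. All data here are synthetic placeholders, not the paper's recordings.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic EMG: 13 muscles x 256 baseline samples (non-negative).
rng = np.random.default_rng(8)
W_true = rng.random((13, 4))                  # 4 underlying synergies
emg_baseline = W_true @ rng.random((4, 256)) + 0.01 * rng.random((13, 256))

# Extract synergies from the baseline blocks.
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
coeff = model.fit_transform(emg_baseline.T)   # samples x synergies
synergies = model.components_                 # synergies x muscles

# Structural robustness test: reconstruct adaptation-block EMG with the
# baseline synergies (only the coefficients are re-fit) and score R^2.
emg_adapt = W_true @ rng.random((4, 256))     # stand-in for rotation blocks
coeff_adapt = model.transform(emg_adapt.T)
recon = coeff_adapt @ synergies
r2 = 1 - np.sum((emg_adapt.T - recon) ** 2) / np.sum(
    (emg_adapt.T - emg_adapt.mean()) ** 2)
print(f"reconstruction R^2 = {r2:.3f}")
```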

  6. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.

  7. Principal Vision, Teacher Sense of Autonomy, and Environmental Robustness.

    ERIC Educational Resources Information Center

    Licata, Joseph W.; And Others

    1990-01-01

    Reports the testing of hypotheses about principal vision generated from Blumberg and Greenfield's (1986) qualitative studies of effective principals. Teachers tend to associate a robust principal with freedom to select the techniques of their work; the relationship between teacher sense of autonomy and principal vision is less clear. (JD)

  8. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators

    PubMed Central

    Kim, Pil-Jong; Kim, Hong-Gee

    2015-01-01

    Objectives The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to establish them as the factor contributing to the accuracy of electronic apex locators (EALs). Materials and Methods The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All acquired data were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance-ratio method used to detect the apical constriction (APC) in most EALs was evaluated by fitting a linear ramp function. Changes in impedance ratios at various frequencies were evaluated for a variety of file positions. Results Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones differed significantly in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Conclusions Using the ratio method, the APC was located within a linear interval. Therefore, using the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC. PMID:25984472

  9. Robust expertise effects in right FFA

    PubMed Central

    McGugin, Rankin Williams; Newton, Allen T; Gore, John C; Gauthier, Isabel

    2015-01-01

    The fusiform face area (FFA) is one of several areas in occipito-temporal cortex whose activity is correlated with perceptual expertise for objects. Here, we investigate the robustness of expertise effects in FFA and other areas to a strong task manipulation that increases both perceptual and attentional demands. With high-resolution fMRI at 7 Tesla, we measured responses to images of cars, faces and a category globally visually similar to cars (sofas) in 26 subjects who varied in expertise with cars, in (a) a low load 1-back task with a single object category and (b) a high load task in which objects from two categories rapidly alternated and attention was required to both categories. The low load condition revealed several areas more active as a function of expertise, including both posterior and anterior portions of FFA bilaterally (FFA1/FFA2 respectively). Under high load, fewer areas were positively correlated with expertise and several areas were even negatively correlated, but the expertise effect in face-selective voxels in the anterior portion of FFA (FFA2) remained robust. Finally, we found that behavioral car expertise also predicted increased responses to sofa images but no behavioral advantages in sofa discrimination, suggesting that global shape similarity to a category of expertise is enough to elicit a response in FFA and other areas sensitive to experience, even when the category itself is not of special interest. The robustness of expertise effects in right FFA2 and the expertise effects driven by visual similarity both argue against attention being the sole determinant of expertise effects in extrastriate areas. PMID:25192631

  10. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques. PMID:22752130
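
    A minimal sketch of a PARAFAC-based fingerprint using the tensorly library; the sub-cube size, rank, and median-threshold quantization are illustrative choices rather than the paper's exact hash construction (which additionally uses key-based pseudo-random sub-cube selection).

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical video sub-cube: 32 x 32 pixels x 30 frames (order-3 tensor).
rng = np.random.default_rng(9)
cube = tl.tensor(rng.random((32, 32, 30)))

# Low-rank PARAFAC captures the spatio-temporal essence of the cube.
weights, factors = parafac(cube, rank=4)

# A simple hash: quantize the factor loadings to a bit string.
signature = np.concatenate([np.asarray(f).ravel() for f in factors])
bits = (signature > np.median(signature)).astype(np.uint8)
print(bits[:32])
```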

  11. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (Az value) as a performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the grown region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the automatically and manually segmented areas to less than +/-15% for all testing regions, and the testing Az value increases from 0.63 to 0.90. The results indicate that CAD performance depends heavily on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.

  12. Robust on-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi

    1993-01-01

    The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as orbital transfer vehicles, orbital maneuvering vehicles, and the space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is applied to a three-mass spring model of a flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for feedback sliding mode controller design, in which bounded control inputs and robustness to parameter variations are investigated.

  13. Robustness of ordinary least squares in randomized clinical trials.

    PubMed

    Judkins, David R; Porter, Kristin E

    2016-05-20

    There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors work well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors. PMID:26694758
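
    A small simulation in the spirit of the study: a zero-inflated gamma outcome in a toy randomized trial, analyzed by OLS with heteroskedasticity-robust (HC3) standard errors via statsmodels. The data-generating numbers are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 60                                    # a small randomized trial
treat = rng.integers(0, 2, size=n)        # 1:1 random assignment
covar = rng.normal(size=n)                # baseline covariate

# Zero-inflated gamma outcome: most subjects incur zero cost, a few are large.
nonzero = rng.random(n) < 0.15
y = np.where(nonzero, rng.gamma(2.0, 500.0, size=n), 0.0) + 50.0 * treat

X = sm.add_constant(np.column_stack([treat, covar]))
fit = sm.OLS(y, X).fit(cov_type="HC3")    # OLS with robust standard errors
print(fit.params[1], fit.bse[1])          # treatment effect and its robust SE
```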

  14. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.

  15. Robust optical alignment systems using geometric invariants

    NASA Astrophysics Data System (ADS)

    Ho, Tzung-Hsien; Rzasa, John; Milner, Stuart D.; Davis, Christopher C.

    2007-09-01

    Traditional coarse pointing, acquisition, and tracking (CPAT) systems are pre-calibrated so that the center pixel of the camera is aligned to the laser pointing vector, and the center pixel is manually moved to the target of interest to complete the alignment process. Such a system has previously demonstrated its capability in aligning with distant targets, with pointing accuracy on the order of the sensor resolution. However, aligning with targets at medium range, where the distance between the angular sensor and the transceiver is not negligible, is its Achilles' heel. This limitation can be resolved by imposing constraints, such as the trifocal tensor (TT), which is deduced from the geometrical dependence between cameras and transceivers. Two autonomous CPAT systems are introduced for FSO transceiver alignment in mid- and long-range scenarios. This work focuses on experimental results that validate the pointing performance for targets at different distances, backed up by theoretical derivations. A mid-range CPAT system, applying a trifocal tensor as its geometric invariant, includes two perspective cameras as sensors to perceive target distances. The long-range CPAT system, applying a linear mapping as its invariant, requires only one camera to determine the pointing angle. Calibration procedures for both systems are robust to measurement noise, and the resulting systems can autonomously point to a target of interest with high accuracy, also on the order of the sensor resolution. The results of this work are beneficial not only to the design of CPAT systems for FSO transceiver alignment, but also to new applications such as surveillance and navigation.

  16. Origin of Robustness in Generating Drug-Resistant Malaria Parasites

    PubMed Central

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H.; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W.; Kamchonwongpaisan, Sumalee; Fidock, David A.; Kirkman, Laura A.; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-01-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low-level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs. PMID:24739308

  17. Robust tumor morphometry in multispectral fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2009-02-01

    Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
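
    A minimal sketch of the minimum-spanning-tree style of feature described above: given nucleus centroids (random points here; in practice they would come from the segmentation step), build the MST and summarize its edge lengths. The specific summary statistics are illustrative choices, not the paper's exact feature set.

```python
# Sketch: MST statistics over epithelial-nucleus centroids.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
centroids = rng.uniform(0, 512, size=(80, 2))   # (x, y) nucleus centers, pixels

dist = squareform(pdist(centroids))             # dense pairwise distance matrix
mst = minimum_spanning_tree(dist)               # sparse matrix holding MST edges
edges = mst.data                                # edge lengths of the MST

# Simple summary features of the spatial architecture
features = {
    "mst_mean_edge": edges.mean(),
    "mst_std_edge": edges.std(),
    "mst_total_length": edges.sum(),
}
print(features)
```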

  18. Robust Mosaicking of Uav Images with Narrow Overlaps

    NASA Astrophysics Data System (ADS)

    Kim, J.; Kim, T.; Shin, D.; Kim, S. H.

    2016-06-01

    This paper considers fast and robust mosaicking of UAV images under the circumstance that adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, a panoramic constraint is widely used. While perspective transformation is a general model for 2D-to-2D transformation, it may not be optimal under weak stereo geometry such as images with narrow overlaps. While the panoramic constraint allows reliable conversion to global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high-quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. A performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking and bundle adjustment of each relative transformation model for optimal global transformation.
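
    A minimal sketch of one of the compared models, robust affine estimation between two overlapping images, using OpenCV feature matching. The file names are placeholders, and this is a generic feature-plus-RANSAC pipeline, not the paper's orientation models.

```python
# Sketch: estimating a relative affine transformation between two UAV frames.
import cv2
import numpy as np

img1 = cv2.imread("uav_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("uav_002.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])

# Affine has 6 parameters vs. 8 for projective, which is one reason it can be
# more stable when the overlap (and hence the stereo geometry) is weak.
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
print("2x3 affine matrix:\n", A, "\ninlier ratio:", inliers.mean())
```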

  19. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

    Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
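
    A minimal sketch of the quantities compared above: the algebraic connectivity (second-smallest Laplacian eigenvalue) and the node connectivity of a mesh-like graph, illustrating Fiedler's bound on a small example. The grid graph is an illustrative choice.

```python
# Sketch: algebraic connectivity vs. node connectivity on a mesh graph.
import networkx as nx

G = nx.grid_2d_graph(5, 5)            # a small mesh-like graph

lam2 = nx.algebraic_connectivity(G)   # Fiedler value (second Laplacian eigenvalue)
kappa = nx.node_connectivity(G)       # minimum number of nodes whose removal disconnects G

# Fiedler's bound for non-complete graphs: algebraic connectivity <= node connectivity
print(f"algebraic connectivity = {lam2:.4f}, node connectivity = {kappa}")
assert lam2 <= kappa + 1e-9
```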

  20. Robust dynamic mitigation of instabilities

    SciTech Connect

    Kawata, S.; Karino, T.

    2015-04-15

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the dynamic instability mitigation mechanism is rather robust against changes in the phase, the amplitude, and the wavelength of the applied wobbling perturbation. Generally, an instability emerges from a perturbation of a physical quantity. Normally the perturbation phase is unknown, so only the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of actively imposed perturbations: if the perturbation is induced by, for example, a driving-beam axis oscillation or wobbling, the perturbation phase can be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.

  1. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for the design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  2. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes, specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61 and the military's Sikorsky UH-60A Black Hawk. The S-61 model was chosen because its details are readily available and because it can be limited to pitch and roll motion, reducing the number of degrees of freedom while retaining the two degrees of freedom that are the minimum required to prove the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk was included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  3. Current Concept of Geometrical Accuracy

    NASA Astrophysics Data System (ADS)

    Görög, Augustín; Görögová, Ingrid

    2014-06-01

    Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts using contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing and to give an overview of the basic concepts and definitions in the field. This will prevent the use of outdated and no longer valid terms and definitions. The knowledge presented in this contribution will provide a new perspective on measurements evaluated according to the current standards.

  4. ROBUST, SPECTRALLY SELECTIVE CERAMIC COATINGS FOR RECYCLED SOLAR POWER TUBES

    EPA Science Inventory

    Seven coating systems, listed in Table 1, were evaluated. Pemco U-3101 and Neo 126 are commercial enamel coatings commonly referred to as ground and cover coats, respectively. Ferro PL214 is a commercially available black enamel coating. Ferro XG-210 is a clear enamel coat...

  5. Robust Sparse Matching and Motion Estimation Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sohn, G.; Théau, J.; Ménard, P.

    2015-03-01

    In this paper, we propose a robust technique using a genetic algorithm for detecting inliers and estimating accurate motion parameters from putative correspondences containing any percentage of outliers. The proposed technique aims to increase computational efficiency and modelling accuracy in comparison with the state of the art via the following contributions: i) guided generation of initial populations for both avoiding degenerate solutions and increasing the rate of useful hypotheses, ii) replacing random search with evolutionary search, iii) the possibility of evaluating the individuals of every population by parallel computation, iv) applicability to images with unknown internal orientation parameters, v) estimating the motion model by detecting a minimal, yet more than sufficient, set of inliers, vi) ensuring the robustness of the motion model against outliers, degeneracy, and poor-perspective camera models, vii) making no assumptions about the probability distribution of the inlier and/or outlier residuals from the estimated motion model, and viii) detecting all the inliers by setting the threshold on their residuals adaptively with regard to the uncertainty of the estimated motion model and the position of the matches. The proposed method was evaluated both on synthetic data and real images. The results were compared with the most popular techniques from the state of the art, including RANSAC, MSAC, MLESAC, Least Trimmed Squares, and Least Median of Squares. Experimental results proved that the proposed approach performs better than the others in terms of accuracy of motion estimation, accuracy of inlier detection, and computational efficiency.

  6. Robust learning-based parsing and annotation of medical radiographs.

    PubMed

    Tao, Yimo; Peng, Zhigang; Krishnan, Arun; Zhou, Xiang Sean

    2011-02-01

    In this paper, we propose a learning-based algorithm for automatic medical image annotation based on robust aggregation of learned local appearance cues, achieving high accuracy and robustness against severe diseases, imaging artifacts, occlusion, or missing data. The algorithm starts with a number of landmark detectors to collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision could already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step is employed to provide complementary information to make the final decision. This approach is evaluated on a large-scale chest radiograph view identification task, demonstrating a very high accuracy (> 99.9%) for a posteroanterior/anteroposterior (PA-AP) and lateral view position identification task, compared with the recently reported large-scale result of only 98.2% (Luo et al., 2006). Our approach also achieved the best accuracies for a three-class and a multiclass radiograph annotation task, when compared with other state-of-the-art algorithms. Our algorithm was used to enhance advanced image visualization workflows by enabling content-sensitive hanging protocols and auto-invocation of a computer aided detection algorithm for identified PA-AP chest images. Finally, we show that the same methodology could be utilized for several image parsing applications including anatomy/organ region of interest prediction and optimized image visualization. PMID:20876012

  7. Testing robustness of relative complexity measure method constructing robust phylogenetic trees for Galanthus L. using the relative complexity measure

    PubMed Central

    2013-01-01

    Background Most phylogeny analysis methods based on molecular sequences use multiple alignment, where the quality of the alignment, which depends on the alignment parameters, determines the accuracy of the resulting trees. Different parameter combinations chosen for the multiple alignment may result in different phylogenies. A new non-alignment-based approach, the Relative Complexity Measure (RCM), has been introduced to tackle this problem and proven to work on fungi and mitochondrial DNA. Result In this work, we present an application of the RCM method to reconstruct robust phylogenetic trees using sequence data for the genus Galanthus obtained from different regions in Turkey. Phylogenies have been analyzed using nuclear and chloroplast DNA sequences. Results showed that the tree obtained from nuclear ribosomal RNA gene sequences was more robust, while the tree obtained from the chloroplast DNA showed a higher degree of variation. Conclusions Phylogenies generated by the Relative Complexity Measure were found to be robust, and the results of RCM were more reliable than those of the compared techniques. In particular, RCM seems to be a reasonable way to overcome MSA-based problems and a good alternative to MSA-based phylogenetic analysis. We believe our method will become a mainstream phylogeny construction method, especially for highly variable sequence families where the accuracy of the MSA heavily depends on the alignment parameters. PMID:23323678
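
    A minimal sketch of the alignment-free idea behind such methods. RCM itself is built on Lempel-Ziv complexity; as an approximation, the sketch below uses a closely related compression-based distance (the normalized compression distance via zlib), which is a stand-in rather than the authors' exact measure. The distance matrix it produces could then feed a standard tree-building step such as neighbor joining.

```python
# Sketch: alignment-free, compression-based distance between DNA sequences.
import zlib

def c(s: bytes) -> int:
    """Compressed size as a crude stand-in for Lempel-Ziv complexity."""
    return len(zlib.compress(s, 9))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two sequences."""
    x, y = a.encode(), b.encode()
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = "ATGGCGTACGTTAGC" * 20
s2 = "ATGGCGTACGATAGC" * 20   # a few substitutions away from s1
s3 = "GGGCCCTTTAAACGT" * 20   # unrelated composition

print(ncd(s1, s2), ncd(s1, s3))  # the related pair should score smaller
```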

  8. Robust defect segmentation in woven fabrics

    SciTech Connect

    Sari-Sarraf, H.; Goddard, J.S. Jr.

    1997-12-01

    This paper describes a robust segmentation algorithm for the detection and localization of woven fabric defects. The essence of the presented segmentation algorithm is the localization of those events (i.e., defects) in the input images that disrupt the global homogeneity of the background texture. To this end, preprocessing modules, based on the wavelet transform and edge fusion, are employed with the objective of attenuating the background texture and accentuating the defects. Then, texture features are utilized to measure the global homogeneity of the output images. If these images are deemed to be globally nonhomogeneous (i.e., defects are present), a local roughness measure is used to localize the defects. The utility of this algorithm can be extended beyond the specific application in this work, that is, defect segmentation in woven fabrics. Indeed, in a general sense, this algorithm can be used to detect and to localize anomalies that reside in images characterized by ordered texture. The efficacy of this algorithm has been tested thoroughly under realistic conditions and as a part of an on-line fabric inspection system. Using over 3700 images of fabrics, containing 26 different types of defects, the overall detection rate of this approach was 89% with a localization accuracy of less than 0.2 inches and a false alarm rate of 2.5%.
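
    A minimal sketch, loosely following the preprocessing idea above: use a 2-D wavelet decomposition to suppress the fine-scale periodic weave so that a blob-like defect stands out. The synthetic texture, wavelet choice, and threshold are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: attenuating a woven background with a wavelet transform (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.arange(256), np.arange(256))
fabric = 0.5 + 0.3 * np.sin(2 * np.pi * x / 8)         # period-8 weave pattern
fabric[100:110, 120:160] += 0.8                         # synthetic defect patch
fabric += 0.05 * rng.standard_normal(fabric.shape)

# Zero all detail subbands: the period-8 weave lives in the fine scales,
# while the blob-like defect survives in the coarse approximation.
coeffs = pywt.wavedec2(fabric, "db4", level=3)
new_coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in lvl)
                            for lvl in coeffs[1:]]
smooth = pywt.waverec2(new_coeffs, "db4")

mask = (smooth - np.median(smooth)) > 0.4               # crude defect localization
print("defect pixels flagged:", int(mask.sum()))
```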

  9. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  10. COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy

    NASA Astrophysics Data System (ADS)

    Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun

    2014-09-01

    In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which implements several geostationary (GEO) satellites, time synchronization can be highly accurate via microwave links between ground stations and the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites within the system, such as the inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites. It is shown that, because of this accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of GPS. In addition, the COMPASS system can provide its entire positioning, navigation, and time-dissemination services even without the ground link, making it much more robust and secure. We further show that time dissemination from the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy, reaching 100 ps in time dissemination and 3 cm in positioning accuracy, respectively. We also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clock frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.

  11. Genotype by environment interaction and breeding for robustness in livestock

    PubMed Central

    Rauw, Wendy M.; Gomez-Raya, Luis

    2015-01-01

    The increasing size of the human population is projected to result in an increase in meat consumption. However, at the same time, the dominant position of meat as the center of meals is on the decline. Modern objections to the consumption of meat include public concerns with animal welfare in livestock production systems. Animal breeding practices have become part of the debate since it became recognized that animals in a population that have been selected for high production efficiency are more at risk for behavioral, physiological and immunological problems. As a solution, animal breeding practices need to include selection for robustness traits, which can be implemented through the use of reaction norms analysis, or through the direct inclusion of robustness traits in the breeding objective and in the selection index. This review gives an overview of genotype × environment interactions (the influence of the environment, reaction norms, phenotypic plasticity, canalization, and genetic homeostasis), reaction norms analysis in livestock production, options for selection for increased levels of production and against environmental sensitivity, and direct inclusion of robustness traits in the selection index. Ethical considerations of breeding for improved animal welfare are discussed. The discussion on animal breeding practices has been initiated and is very alive today. This positive trend is part of the sustainable food production movement that aims at feeding 9.15 billion people not just in the near future but also beyond. PMID:26539207

  12. Evaluation of selection index: application to the choice of an indirect multitrait selection index for soybean breeding.

    PubMed

    Bouchez, A; Goffinet, B

    1990-02-01

    Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information. PMID:24226228

  13. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists of predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking of the performance with respect to the ground truth, real data enable assessment of the robustness of the methodology in real contexts as well as determination of the suitability of using synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for the pathologic cases. Results show that the proposed model not only provides a good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  14. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  15. Contribution of Sample Processing to Variability and Accuracy of the Results of Pesticide Residue Analysis in Plant Commodities.

    PubMed

    Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert

    2016-08-10

    Significant reduction of concentration of some pesticide residues and substantial increase of the uncertainty of the results derived from the homogenization of sample materials have been reported in scientific papers long ago. Nevertheless, performance of methods is frequently evaluated on the basis of only recovery tests, which exclude sample processing. We studied the effect of sample processing on accuracy and uncertainty of the measured residue values with lettuce, tomato, and maize grain samples applying mixtures of selected pesticides. The results indicate that the method is simple and robust and applicable in any pesticide residue laboratory. The analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of sample, and the mass of test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282

  16. ACCURACY AND TRACE ORGANIC ANALYSES

    EPA Science Inventory

    Accuracy in trace organic analysis presents a formidable problem to the residue chemist. He is confronted with the analysis of a large number and variety of compounds present in a multiplicity of substrates at levels as low as parts-per-trillion. At these levels, collection, isol...

  17. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  18. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052

  19. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.

  20. Robust kernel-based tracking with multiple subtemplates in vision guidance system.

    PubMed

    Yan, Yuzhuang; Huang, Xinsheng; Xu, Wanying; Shen, Lurong

    2012-01-01

    The mean shift algorithm has achieved considerable success in target tracking due to its simplicity and robustness. However, the lack of spatial information may prevent it from reaching high tracking precision. This can be even worse when the target is scale-variant and the sequences are gray-level. This paper presents a novel multiple-subtemplate-based tracking algorithm for the terminal guidance application. By applying a separate tracker to each subtemplate, it can handle more complicated situations such as rotation, scaling, and partial coverage of the target. The innovations include: (1) an optimal subtemplate selection algorithm is designed, which ensures that the selected subtemplates maximally represent the information of the entire template while having the least mutual redundancy; (2) based on the serial tracking results and the spatial-constraint prior among the subtemplates, a Gaussian-weighted voting method is proposed to locate the target center; (3) the optimal scale factor is determined by maximizing the voting results among the scale searching layers, which avoids the complicated threshold-setting problem. Experiments on some videos with static scenes show that the proposed method greatly improves the tracking accuracy compared to the original mean shift algorithm. PMID:22438749
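
    A minimal sketch of the Gaussian-weighted voting step described above: each subtemplate tracker reports a center estimate, each subtemplate has a known offset from the target center, and votes that agree with the consensus are weighted more heavily. All numbers, and the use of a median as the consensus reference, are illustrative assumptions.

```python
# Sketch: Gaussian-weighted voting for the target center from subtemplate trackers.
import numpy as np

centers = np.array([[102., 98.], [135., 101.], [118., 132.]])  # tracker outputs
offsets = np.array([[-18., -22.], [15., -19.], [-2., 12.]])    # template layout
sigma = 8.0                                                    # voting bandwidth

candidates = centers - offsets         # each tracker's implied target center
ref = np.median(candidates, axis=0)    # robust consensus reference

# Gaussian weights: trackers whose vote agrees with the consensus count more
w = np.exp(-np.sum((candidates - ref) ** 2, axis=1) / (2 * sigma ** 2))
target = (w[:, None] * candidates).sum(axis=0) / w.sum()
print("voted target center:", target)
```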

  1. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  2. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  3. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  4. Assessing Predictive Properties of Genome-Wide Selection in Soybeans.

    PubMed

    Xavier, Alencar; Muir, William M; Rainey, Katy Martin

    2016-01-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. merr). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
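
    A minimal sketch of cross-validated genomic prediction with a kernel method. Scikit-learn's KernelRidge serves here as a stand-in for the reproducing kernel Hilbert space regression mentioned above; the marker matrix and trait are simulated, not the soybean panel, and the kernel bandwidth is an illustrative choice.

```python
# Sketch: kernel-based genomic prediction with 5-fold cross-validation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_lines, n_markers = 500, 1000
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
beta = rng.normal(0, 0.05, size=n_markers)
y = X @ beta + rng.normal(0, 1.0, size=n_lines)    # low-heritability trait

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_markers)
acc = cross_val_score(model, X, y, cv=5, scoring="r2")
print("predictive ability per fold (r2):", np.round(acc, 3))
```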

  6. ALLMAPS: robust scaffold ordering based on multiple maps.

    PubMed

    Tang, Haibao; Zhang, Xingtan; Miao, Chenyong; Zhang, Jisen; Ming, Ray; Schnable, James C; Schnable, Patrick S; Lyons, Eric; Lu, Jianguo

    2015-01-01

    The ordering and orientation of genomic scaffolds to reconstruct chromosomes is an essential step during de novo genome assembly. Because this process utilizes various mapping techniques that each provides an independent line of evidence, a combination of multiple maps can improve the accuracy of the resulting chromosomal assemblies. We present ALLMAPS, a method capable of computing a scaffold ordering that maximizes colinearity across a collection of maps. ALLMAPS is robust against common mapping errors, and generates sequences that are maximally concordant with the input maps. ALLMAPS is a useful tool in building high-quality genome assemblies. ALLMAPS is available at: https://github.com/tanghaibao/jcvi/wiki/ALLMAPS . PMID:25583564

  7. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.
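
    A minimal sketch of objective analysis (optimal interpolation) in one dimension, illustrating the roles of the mean field and the correlation function discussed above. The Gaussian correlation scale, variances, and observation values are illustrative assumptions, not the paper's parameters.

```python
# Sketch: 1-D objective analysis (optimal interpolation) of thermocline depth.
import numpy as np

def gauss_corr(r, L=50.0):
    """Assumed Gaussian correlation function with length scale L (km)."""
    return np.exp(-(r / L) ** 2)

x_obs = np.array([0., 40., 90., 160., 220.])      # observation positions (km)
d_obs = np.array([510., 540., 600., 570., 530.])  # observed depths (m)
mean = 550.0                                      # prior mean field (m)
sig2, noise2 = 40.0**2, 10.0**2                   # signal and obs-noise variance

x_grid = np.linspace(0, 250, 26)
Coo = sig2 * gauss_corr(np.abs(x_obs[:, None] - x_obs[None, :]))
Cgo = sig2 * gauss_corr(np.abs(x_grid[:, None] - x_obs[None, :]))

# Analysis: mean + gain * innovations; estimated error variance alongside
K = Cgo @ np.linalg.inv(Coo + noise2 * np.eye(len(x_obs)))
analysis = mean + K @ (d_obs - mean)
err_var = sig2 - np.sum(K * Cgo, axis=1)
print(np.round(analysis, 1))
print(np.round(100 * err_var / sig2, 1))          # percent estimated error
```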

  8. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    The reliability is a measure of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process. It is composed of three sensors, each of which has its own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type of sensor delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to the mass being shared improperly between elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR and MLRE reveal very good performance in all experiments (more than an 80% correct classification rate). The experiment was performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators reveal good robustness, but MLR proves to be uniformly dominant over all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
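
    A minimal sketch of reliability-weighted likelihood fusion in the spirit of the MLR idea above: each sensor's class likelihoods are raised to its reliability coefficient before combination (log-linear pooling), so an unreliable sensor contributes little. This is a simplified reading, not the paper's exact operator; all numbers are illustrative.

```python
# Sketch: reliability-weighted (log-linear) fusion of per-sensor class probabilities.
import numpy as np

# Per-sensor probabilities for hypotheses {H1, H2, H3}
p = np.array([
    [0.70, 0.20, 0.10],   # sensor 1: reliable
    [0.30, 0.40, 0.30],   # sensor 2: mediocre
    [0.05, 0.05, 0.90],   # sensor 3: unreliable (here: misleading)
])
reliability = np.array([0.9, 0.6, 0.1])

# Weighted log-likelihood combination; a reliability of 0 removes a sensor
log_post = reliability @ np.log(p)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("fused posterior:", np.round(post, 3))  # dominated by the reliable sensors
```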

  9. Robust Inflation from fibrous strings

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cicoli, M.; de Alwis, S.; Quevedo, F.

    2016-05-01

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models—including the key (though often ignored) issue of modulus stabilisation—to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index ns of the form r ∝ (ns − 1)^2, where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.

  10. Reliable and robust entanglement witness

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Mei, Quanxin; Zhou, Shan; Ma, Xiongfeng

    2016-04-01

    Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, witnessing entanglement is by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first one is reliability. Due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second problem relates to robustness. A witness may not be optimal for a target state and fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness scheme, in which the correctness of the conclusion is independent of the implemented measurement devices. In order to overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only data postprocessing needs to be modified compared to the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to maximally draw reliable conclusions.
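
    A minimal numeric sketch of what an entanglement witness does, using the textbook two-qubit witness W = I/2 − |Φ+⟩⟨Φ+| (valid because no separable state has overlap greater than 1/2 with a maximally entangled state). This illustrates the definition above; it is not the measurement-device-independent scheme of the paper. Tr(Wρ) turns negative for Werner states with p > 1/3, exactly their entanglement threshold.

```python
# Sketch: evaluating an entanglement witness on Werner states.
import numpy as np

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> Bell state
P = np.outer(phi, phi)                      # projector onto |Phi+>
W = np.eye(4) / 2 - P                       # entanglement witness

for p in (0.2, 1/3, 0.5, 0.9):
    rho = p * P + (1 - p) * np.eye(4) / 4   # Werner state
    val = np.trace(W @ rho).real            # = (1 - 3p)/4 analytically
    print(f"p = {p:.2f}: Tr(W rho) = {val:+.3f}",
          "entangled!" if val < -1e-9 else "")
```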

  11. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(right arrow q) = 0 into a nominal sound source S(right arrow q) and sound propagation operator L such that L(right arrow q) = S(right arrow q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S can not be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  12. Attack robustness of cascading load model in interdependent networks

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Wu, Yuedan; Li, Yun

    2015-08-01

    Considering the weight of a node and the coupling strength of two interdependent nodes in the different networks, we propose a method to assign the initial load of a node and construct a new cascading load model in interdependent networks. Assuming that a node in one network fails if its degree is 0, if its dependent node in the other network is removed from the network, or if the load on it exceeds its capacity, we study the influences of the assortative link (AL) and disassortative link (DL) patterns between two networks on the robustness of the interdependent networks against cascading failures. To better evaluate network robustness, we present a new measure, from the local perspective of a node, to quantify network resiliency after targeted attacks. We show that AL patterns between two networks can improve the robustness of the entire interdependent network. Moreover, we show how to efficiently allocate the initial load and select nodes to be protected so as to maximize the network robustness against cascading failures. In addition, we find that some nodes with lower load are more likely to trigger the cascading propagation when the distribution of the load is more even, and we give a reasonable explanation. Our findings can help in designing robust interdependent networks and suggest how to optimize the allocation of protection resources.

  13. Phase error compensation methods for high-accuracy profile measurement

    NASA Astrophysics Data System (ADS)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun

    2016-04-01

    In phase-shifting-algorithm-based fringe projection profilometry, the nonlinear intensity response, called the gamma effect, of the projector-camera setup is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches to realize both active and passive phase error compensation, based on a universal phase error model which is suitable for an arbitrary phase-shifting step. The experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which are robust under changeable measurement conditions.
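
    For context, the standard N-step phase-shifting estimator that such error models build on: with fringe intensities I_n = A + B·cos(φ + 2πn/N), the wrapped phase follows from the least-squares formula below. The sketch uses ideal synthetic fringes (no gamma distortion), so it illustrates the baseline algorithm rather than the paper's compensation methods.

```python
# Sketch: N-step phase-shifting phase retrieval on synthetic fringes.
import numpy as np

N = 4
x = np.linspace(0, 4 * np.pi, 512)
true_phase = 1.3 * np.sin(x / 2) + x             # arbitrary test phase

shifts = 2 * np.pi * np.arange(N) / N
frames = [100 + 50 * np.cos(true_phase + s) for s in shifts]

# Least-squares estimator: tan(phase) = -sum(I_n sin d_n) / sum(I_n cos d_n)
num = sum(I * np.sin(s) for I, s in zip(frames, shifts))
den = sum(I * np.cos(s) for I, s in zip(frames, shifts))
wrapped = -np.arctan2(num, den)                  # wrapped to (-pi, pi]

err = np.angle(np.exp(1j * (wrapped - true_phase)))   # wrap-aware error
print("max retrieval error (rad):", np.abs(err).max())
```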

  14. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on measurement principle, the CCD detector used, the construction of the sensorhead and the operation of the sensor electronics. Tests on a development model show that the main aim of a 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.

  15. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
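
    A minimal sketch of the accuracy metrics named above, RMSE and a Theil inequality coefficient (U1), comparing a forecast against a one-step-ahead random-walk benchmark. The series are simulated, not the utilities' data.

```python
# Sketch: RMSE and Theil U1 for a model forecast vs. a random-walk benchmark.
import numpy as np

rng = np.random.default_rng(4)
y = 100 + np.cumsum(rng.normal(0.5, 2.0, size=60))   # "actual" consumption
model_fc = y + rng.normal(0, 2.5, size=60)           # econometric forecast
rw_fc = np.concatenate(([y[0]], y[:-1]))             # random-walk benchmark

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

def theil_u1(actual, forecast):
    return rmse(actual, forecast) / (
        np.sqrt(np.mean(actual**2)) + np.sqrt(np.mean(forecast**2)))

for name, fc in [("model", model_fc), ("random walk", rw_fc)]:
    print(f"{name}: RMSE = {rmse(y, fc):.2f}, Theil U1 = {theil_u1(y, fc):.4f}")
```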

  16. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the plantings and the labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value must be balanced against the cost of the program. Field trial designs for advanced apple candidates with reduced numbers of locations, years, and harvests per year were modeled to investigate the effect on cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in the accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs carried greater risk than others. Balancing cost and accuracy with risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  17. Development of an integrated computerized scheme for metaphase chromosome image analysis: a robustness experiment

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Wood, Marc C.; Yuan, Chaowei; Chen, Wei; Liu, Hong

    2008-02-01

    Our integrated computer-aided detection (CAD) scheme includes three basic modules. The first module detects whether a microscopic digital image depicts a metaphase chromosome cell; if a cell is detected, the scheme judges whether it is analyzable with a decision tree. Once an analyzable cell is detected, the second module is applied to segment individual chromosomes and to compute two important features. Specifically, the scheme utilizes a modified thinning algorithm to identify the medial axis of a chromosome. By tracking perpendicular lines along the medial axis, the scheme computes four feature profiles, identifies centromeres, and assigns polarities of chromosomes based on a set of pre-optimized rules. The third module then classifies chromosomes into 24 types. In this module, each chromosome is initially represented by a vector of 31 features. A two-layer classifier with 8 artificial neural networks (ANN) is optimized by a genetic algorithm. A testing chromosome is first classified into one of seven groups by the ANN in the first layer. Another ANN is then automatically selected from the seven ANNs in the second layer (one for each group) to further classify this chromosome into one of 24 types. To test the performance and robustness of this CAD scheme, we randomly selected and assembled an independent testing dataset. The dataset contains 100 microscopic digital images, including 50 analyzable and 50 un-analyzable metaphase cells identified by the experts. The centromere location, the corresponding polarity, and the karyotype for each individual chromosome were recorded in the "truth" file. The performance of the CAD scheme applied to this image dataset is analyzed and compared with the results in the truth file. The assessment accuracies are 93% for the first module, 90.8% for centromere identification and 93.2% for polarity assignment in the second module, and over 96% for six chromosome groups and 81.8% for one group in the third module.
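
    A minimal sketch of the medial-axis step described above: extract the skeleton of a binary chromosome-like mask and read off a width profile along it. The synthetic elliptical mask stands in for a segmented chromosome, and scikit-image's medial_axis is a stand-in for the paper's modified thinning algorithm.

```python
# Sketch: medial axis and width profile of a chromosome-like binary mask.
import numpy as np
from skimage.morphology import medial_axis

yy, xx = np.mgrid[0:120, 0:60]
mask = ((xx - 30) / 12.0) ** 2 + ((yy - 60) / 50.0) ** 2 <= 1.0  # elongated blob

skel, dist = medial_axis(mask, return_distance=True)

# Width profile along the axis (2 * distance-to-boundary); a dip in such a
# profile is one cue used to locate the centromere.
widths = 2 * dist[skel]
print("axis pixels:", int(skel.sum()), "min width (px):", widths.min())
```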

  18. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability

  19. Robust spike classification based on frequency domain neural waveform features

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    Objective. We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, a low misclassification rate, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. Approach. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of the frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. Main results. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. Significance. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class so that they can be considered as coming from one unit, and to (2) remove noise snippets that do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm
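
    A hedged sketch of the frequency-domain idea follows: FFT magnitudes of synthetic snippets serve as features and are clustered with k-means. The SOM-based step for choosing the cluster number is replaced here by a fixed k, so this illustrates the spirit of CFDF rather than the authors' exact pipeline.

```python
# Sketch: frequency-domain features + k-means, in the spirit of CFDF.
# Synthetic snippets; the SOM step for choosing the cluster number is
# replaced by a fixed k for brevity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 3e-3, 90)                      # ~3 ms snippets
unit_a = np.exp(-((t - 1.0e-3) / 2e-4) ** 2)      # two template waveforms
unit_b = -np.exp(-((t - 1.5e-3) / 3e-4) ** 2)
snippets = np.vstack([u + 0.05 * rng.normal(size=t.size)
                      for u in [unit_a] * 40 + [unit_b] * 40])

features = np.abs(np.fft.rfft(snippets, axis=1))  # frequency-domain features
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(np.bincount(labels))                        # snippets per putative unit
```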

  20. The Geometric Accuracy Validation of the ZY-3 Mapping Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Tang, X.; Zhang, G.; Zhu, X.

    2013-05-01

    ZiYuan-3 (ZY-3) is the first civilian high-resolution stereo mapping satellite of China. The satellite's objective is oriented towards plotting 1:50,000 and 1:25,000 topographic maps. This article presents the rigorous image geometry model and the Rational Function Model (RFM) of the ZY-3 mapping satellite. In addition, this paper uses ZY-3 imagery covering flatland, hill and mountain regions for block adjustment experiments. Different ground control point configurations are selected and the accuracy is validated with check points; Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) are then generated and their accuracy is likewise validated with check points. The experiments reveal that the planar accuracy of the DOM and the vertical accuracy of the DSM are better than 3 m and 2 m, respectively, and demonstrate the effectiveness of the ZY-3 mapping satellite image geometry models.
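
    For reference, the standard RFM relates normalized image coordinates (r_n, c_n) to normalized ground coordinates (X, Y, Z) through ratios of cubic polynomials; this is the generic textbook form, not the ZY-3-specific coefficient set:

```latex
r_n = \frac{P_1(X, Y, Z)}{P_2(X, Y, Z)}, \qquad
c_n = \frac{P_3(X, Y, Z)}{P_4(X, Y, Z)}, \qquad
P_i(X, Y, Z) = \sum_{j+k+l \le 3} a^{(i)}_{jkl} \, X^{j} Y^{k} Z^{l}
```

    Each P_i is a 20-term cubic polynomial; in the usual RPC convention the leading coefficient of each denominator is fixed to 1, leaving 78 coefficients to estimate.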

  1. Surface Remeshing with Robust High-Order Reconstruction

    SciTech Connect

    Ray, Navamita; Delaney, Tristan; Einstein, Daniel R.; Jiao, Xiangmin

    2014-03-26

    Remeshing is an important problem in a variety of applications, such as finite element methods and geometry processing. Surface remeshing poses some unique challenges, as it must deliver not only good mesh quality but also good geometric accuracy. For applications such as finite elements with high-order elements (quadratic or cubic elements), the geometry must be preserved to high-order (third-order or higher) accuracy, since low-order accuracy may undermine the convergence of numerical computations. The problem is particularly challenging if the CAD model is not available for the underlying geometry, and is even more so if the surface meshes contain some inverted elements. We describe remeshing strategies that can simultaneously produce high-quality triangular meshes, untangle mildly folded triangles, and preserve the geometry to high order of accuracy. Our approach extends our earlier work on high-order surface reconstruction and mesh optimization by enhancing its robustness with a geometric limiter for under-resolved geometries. We also integrate high-order surface reconstruction with surface mesh adaptation techniques, which alter the number of triangles and nodes. We demonstrate the application of our method to meshes for high-order finite elements, biomedical image-based surface meshes, and complex interface meshes in fluid simulations.

  2. A robust method for online stereo camera self-calibration in unmanned vehicle system

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Chihara, Nobuhiro; Guo, Tao; Kimura, Nobutaka

    2014-06-01

    Self-calibration is a fundamental technology used to estimate the relative posture of cameras for environment recognition in unmanned systems. We focused on the loss of recognition accuracy caused by platform vibration, and conducted this research to achieve on-line self-calibration using feature point registration and robust estimation of the fundamental matrix. Three key factors need improvement. First, feature mismatching degrades the estimation accuracy of the relative posture. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, intrinsic system noise also contributes greatly to deviations in the estimation results. In order to improve calibration accuracy, estimation speed and system robustness for practical implementation, we discuss and analyze algorithms that improve the stereo camera system to achieve on-line self-calibration. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed so that corresponding feature points are searched for within a small search range, improving matching accuracy and search speed. Two conventional estimation algorithms are then analyzed and evaluated for estimation accuracy and robustness. Third, a rigorous posture calculation method is proposed that accounts for the relative posture deviation of each separate part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that the proposed method is fast, highly accurate and highly robust as an on-line self-calibration algorithm. Thus, as the main contribution, we propose methods that solve on-line self-calibration quickly and accurately, and envision the possibility of practical implementation on unmanned systems as
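
    The robust fundamental-matrix step can be illustrated with OpenCV's RANSAC estimator on synthetic correspondences; this is a generic stand-in for the estimation algorithms the paper compares, not its exact pipeline.

```python
# Sketch: robust fundamental-matrix estimation with RANSAC on synthetic
# correspondences contaminated by gross mismatches.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 640, size=(200, 2)).astype(np.float32)
pts2 = pts1 + np.array([5.0, 0.0], dtype=np.float32)       # pure shift
pts2 += rng.normal(0, 0.5, pts2.shape).astype(np.float32)  # pixel noise
pts2[:20] = rng.uniform(0, 640, size=(20, 2))              # gross mismatches

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                        ransacReprojThreshold=1.0)
print("inliers:", int(inlier_mask.sum()), "of", len(pts1))
```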

  3. Robust matching algorithm for image mosaic

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Tan, Jiu-bin

    2010-08-01

    In order to improve the matching accuracy and the level of automation of image mosaicking, a matching algorithm based on SIFT (Scale Invariant Feature Transform) features is proposed, as detailed below. Firstly, according to the result of a cursory comparison with a given basal matching threshold, a collection of corresponding SIFT features, which still contains mismatches, is obtained. Secondly, after calculating for each correspondence the ratio of the Euclidean distance to the closest neighbor over the distance to the second-closest, we select the image coordinates of the corresponding SIFT features with the eight smallest ratios to solve the initial parameters of a pin-hole camera model, and then calculate the maximum error σ between the transformed coordinates and the original image coordinates of these eight correspondences. Thirdly, the ratio of the largest original image coordinate of the eight correspondences to the entire image size is calculated, and this ratio is taken as the control parameter k of the matching error threshold. Finally, the difference between the transformed coordinates and the original image coordinates of all features in the collection is computed, and correspondences with a difference larger than 3kσ are deleted. We can then obtain the exact collection of matching features to solve the parameters of the pin-hole camera model. Experimental results indicate that the proposed method is stable and reliable in cases where the images exhibit some variation in viewpoint, illumination, rotation and scale. This new method has been used to achieve an excellent matching accuracy on the experimental images. Moreover, the proposed method can be used to select the matching threshold of different images automatically, without any manual intervention.
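
    The pipeline sketched above maps to a few lines of OpenCV. In the sketch below the image paths are placeholders, and a homography stands in for the pin-hole camera model fitted from the eight best-ratio correspondences; the scale parameter k is a crude approximation of the one described in the abstract.

```python
# Sketch of the described matching pipeline; image paths are placeholders,
# and a homography stands in for the pin-hole camera model.
import numpy as np
import cv2

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio of closest to second-closest Euclidean distance per feature.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
ranked = sorted(matches, key=lambda m: m[0].distance / m[1].distance)

# Initial model from the eight correspondences with the smallest ratios.
src = np.float32([kp1[m[0].queryIdx].pt for m in ranked[:8]])
dst = np.float32([kp2[m[0].trainIdx].pt for m in ranked[:8]])
H, _ = cv2.findHomography(src, dst)

# Maximum transfer error sigma over the eight seeds, then 3*k*sigma filtering.
proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
sigma = np.max(np.linalg.norm(proj - dst, axis=1))
k = np.max(np.abs(src)) / max(img1.shape)             # crude scale parameter
all_src = np.float32([kp1[m[0].queryIdx].pt for m in ranked])
all_dst = np.float32([kp2[m[0].trainIdx].pt for m in ranked])
all_proj = cv2.perspectiveTransform(all_src.reshape(-1, 1, 2), H).reshape(-1, 2)
good = np.linalg.norm(all_proj - all_dst, axis=1) < 3 * k * sigma
print(int(good.sum()), "matches kept")
```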

  4. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  5. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, i.e., the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. A procedure for the design of robust observers possessing the above structural features is established and discussed.

  6. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  7. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  8. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems useful for applications in spacecraft, space satellites, and space robots, all of which are exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can readily provide the required large storage capacity. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading any contents of the holographic memory, namely the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes the results of an experimental demonstration. PMID:22109441

  9. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another, and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions of a system's behavior. If distributed systems are to be used successfully in situations where humans cannot provide the predictable real-time responsiveness of a computer, it is important that they be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  10. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  11. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common approximations and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation, reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
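
    For example, when activity durations are modeled as independent Gaussians, the projected end time remains parametric and the probability of violating a deadline has a closed form; the sketch below is a generic illustration of this convenience, not the planner's machinery.

```python
# Generic illustration: with independent Gaussian activity durations the
# total duration stays Gaussian, so deadline risk has a closed form.
from scipy.stats import norm

means = [10.0, 25.0, 7.5]          # per-activity mean durations
sds = [2.0, 5.0, 1.5]              # per-activity standard deviations
total_mean = sum(means)
total_sd = sum(s ** 2 for s in sds) ** 0.5

deadline = 50.0
p_violation = norm.sf(deadline, loc=total_mean, scale=total_sd)
print(f"P(end time > {deadline}) = {p_violation:.3f}")
```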

  12. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, with individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819

  13. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine landslide-inducing parameters, i.e., slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) models that consider different parameter combinations were developed by the authors. Results obtained are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
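
    As a pointer to the weighting step, the sketch below computes AHP weights as the principal eigenvector of a pairwise comparison matrix; the 3x3 matrix is hypothetical, not the authors' actual comparisons.

```python
# Minimal AHP weighting: principal eigenvector of a (hypothetical)
# pairwise comparison matrix, normalized to sum to 1.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],    # e.g. slope vs. land use vs. lithology
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                    # Perron eigenvector -> positive weights
print("weights:", np.round(w, 3))
```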

  14. From transcriptional landscapes to the identification of biomarkers for robustness

    PubMed Central

    2011-01-01

    The ability of microorganisms to adapt to changing environments and gain cell robustness challenges the prediction of their history-dependent behaviour. Using our model organism Bacillus cereus, a notorious Gram-positive food spoilage and pathogenic spore-forming bacterium, a strategy will be described that allows for the identification of biomarkers for robustness. First, an overview will be presented of its two-component systems, which generally include a transmembrane sensor histidine kinase and its cognate response regulator, allowing rapid and robust responses to fluctuations in the environment. The role of the multisensor hybrid kinase RsbK and the PP2C-type phosphatase RsbY system in the activation of the general stress sigma factor σB is highlighted. An extensive comparative analysis of transcriptional landscapes derived from B. cereus exposed to mild stress conditions such as heat, acid, salt and oxidative stress revealed that, amongst others, σB-regulated genes were induced in most conditions tested. The information derived from the transcriptome data was subsequently implemented in a framework for identifying and selecting cellular biomarkers, at their mRNA, protein and/or activity level, for mild stress-induced microbial robustness towards lethal stresses. Exposure of unstressed and mild stress-adapted cells to subsequent lethal stress conditions (heat, acid and oxidative stress) allowed for quantification of the robustness advantage provided by mild stress pretreatment using the plate-count method. The induction levels of the selected candidate biomarkers, σB protein, catalase activity and transcripts of certain proteases, upon mild stress treatment were significantly correlated to mild stress-induced enhanced robustness towards lethal thermal, oxidative and acid stresses, and were therefore suitable to predict these adaptive traits. Cellular biomarkers that are quantitatively correlated to adaptive behavior will facilitate our ability to predict the impact of

  15. From transcriptional landscapes to the identification of biomarkers for robustness.

    PubMed

    Abee, Tjakko; Wels, Michiel; de Been, Mark; den Besten, Heidy

    2011-08-30

    The ability of microorganisms to adapt to changing environments and gain cell robustness challenges the prediction of their history-dependent behaviour. Using our model organism Bacillus cereus, a notorious Gram-positive food spoilage and pathogenic spore-forming bacterium, a strategy will be described that allows for the identification of biomarkers for robustness. First, an overview will be presented of its two-component systems, which generally include a transmembrane sensor histidine kinase and its cognate response regulator, allowing rapid and robust responses to fluctuations in the environment. The role of the multisensor hybrid kinase RsbK and the PP2C-type phosphatase RsbY system in the activation of the general stress sigma factor σB is highlighted. An extensive comparative analysis of transcriptional landscapes derived from B. cereus exposed to mild stress conditions such as heat, acid, salt and oxidative stress revealed that, amongst others, σB-regulated genes were induced in most conditions tested. The information derived from the transcriptome data was subsequently implemented in a framework for identifying and selecting cellular biomarkers, at their mRNA, protein and/or activity level, for mild stress-induced microbial robustness towards lethal stresses. Exposure of unstressed and mild stress-adapted cells to subsequent lethal stress conditions (heat, acid and oxidative stress) allowed for quantification of the robustness advantage provided by mild stress pretreatment using the plate-count method. The induction levels of the selected candidate biomarkers, σB protein, catalase activity and transcripts of certain proteases, upon mild stress treatment were significantly correlated to mild stress-induced enhanced robustness towards lethal thermal, oxidative and acid stresses, and were therefore suitable to predict these adaptive traits. Cellular biomarkers that are quantitatively correlated to adaptive behavior will facilitate our ability to predict the impact of

  16. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper gives a short presentation of the methodology and its application to pre-test and post-test calculations submitted to International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian-type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations; altogether 10 different thermal-hydraulic codes and code versions were used. The results of applying the methodology to nine selected measured quantities are summarized.
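
    The Pisa FFT-based method (FFTBM) is commonly summarized by two figures of merit, the average amplitude AA and the weighted frequency WF, computed from the FFT of the code-experiment discrepancy and of the experimental signal. The forms below are the usual textbook ones, quoted here as context rather than from the paper itself:

```latex
AA = \frac{\sum_{n} \bigl| \Delta\tilde{F}(f_n) \bigr|}
          {\sum_{n} \bigl| \tilde{F}_{\mathrm{exp}}(f_n) \bigr|},
\qquad
WF = \frac{\sum_{n} \bigl| \Delta\tilde{F}(f_n) \bigr| \, f_n}
          {\sum_{n} \bigl| \Delta\tilde{F}(f_n) \bigr|}
```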

  17. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracy are discussed. The cases under investigation include a Landsat image of the surface overlying part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  18. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. Overall exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in

  19. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  20. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  1. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  2. JPEG quantization table mismatched steganalysis via robust discriminative feature transformation

    NASA Astrophysics Data System (ADS)

    Zeng, Likai; Kong, Xiangwei; Li, Ming; Guo, Yanqing

    2015-03-01

    The cover source mismatch is a common problem in steganalysis, which may result in degraded detection accuracy. In this paper, we present a novel method to mitigate the problem of JPEG quantization table mismatch, named Robust Discriminative Feature Transformation (RDFT). RDFT transforms original features into new feature representations based on a non-linear transformation matrix. It can improve the statistical consistency of the training and testing samples and learn new matched feature representations from the original features by minimizing the feature distribution difference while preserving the classification ability of the training data. Comparison with prior art reveals that the detection accuracy of the proposed RDFT algorithm significantly outperforms traditional steganalyzers under mismatched conditions and is close to that of the matched scenario. RDFT has several appealing advantages: 1) it can improve the statistical consistency of the training and testing data; 2) it can reduce the distribution difference between the training and testing features; 3) it can preserve the classification ability of the training data; 4) it is robust to parameters and can achieve a good performance under a wide range of parameter values.

  3. Measuring the robustness of link prediction algorithms under noisy environment

    PubMed Central

    Zhang, Peng; Wang, Xiang; Wang, Futian; Zeng, An; Xiao, Jinghua

    2016-01-01

    Link prediction in complex networks is to estimate the likelihood of two nodes to interact with each other in the future. As this problem has applications in a large number of real systems, many link prediction methods have been proposed. However, the validation of these methods is so far mainly conducted in the assumed noise-free networks. Therefore, we still miss a clear understanding of how the prediction results would be affected if the observed network data is no longer accurate. In this paper, we comprehensively study the robustness of the existing link prediction algorithms in the real networks where some links are missing, fake or swapped with other links. We find that missing links are more destructive than fake and swapped links for prediction accuracy. An index is proposed to quantify the robustness of the link prediction methods. Among the twenty-two studied link prediction methods, we find that though some methods have low prediction accuracy, they tend to perform reliably in the “noisy” environment. PMID:26733156
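
    The flavor of such a robustness experiment is easy to reproduce: delete a fraction of links as "noise", score the held-out pairs with a simple common-neighbours predictor, and measure how precision degrades. The sketch below uses a synthetic networkx graph and is not the authors' proposed robustness index.

```python
# Sketch of a link-prediction robustness probe: delete a fraction of edges,
# rank candidate pairs by common-neighbour count, measure precision@k.
# Synthetic graph; not the authors' robustness index.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(300, 3, seed=0)
edges = list(G.edges())
random.shuffle(edges)
probe = edges[: len(edges) // 10]      # 10% of links treated as missing
G.remove_edges_from(probe)

candidates = list(nx.non_edges(G))
scores = sorted(candidates,
                key=lambda e: len(list(nx.common_neighbors(G, *e))),
                reverse=True)
top_k = set(map(frozenset, scores[: len(probe)]))
hits = sum(frozenset(e) in top_k for e in probe)
print(f"precision@{len(probe)}: {hits / len(probe):.3f}")
```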

  4. Measuring the robustness of link prediction algorithms under noisy environment

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Wang, Xiang; Wang, Futian; Zeng, An; Xiao, Jinghua

    2016-01-01

    Link prediction in complex networks is to estimate the likelihood of two nodes to interact with each other in the future. As this problem has applications in a large number of real systems, many link prediction methods have been proposed. However, the validation of these methods is so far mainly conducted in the assumed noise-free networks. Therefore, we still miss a clear understanding of how the prediction results would be affected if the observed network data is no longer accurate. In this paper, we comprehensively study the robustness of the existing link prediction algorithms in the real networks where some links are missing, fake or swapped with other links. We find that missing links are more destructive than fake and swapped links for prediction accuracy. An index is proposed to quantify the robustness of the link prediction methods. Among the twenty-two studied link prediction methods, we find that though some methods have low prediction accuracy, they tend to perform reliably in the “noisy” environment.

  5. Measuring the robustness of link prediction algorithms under noisy environment.

    PubMed

    Zhang, Peng; Wang, Xiang; Wang, Futian; Zeng, An; Xiao, Jinghua

    2016-01-01

    Link prediction in complex networks is to estimate the likelihood of two nodes to interact with each other in the future. As this problem has applications in a large number of real systems, many link prediction methods have been proposed. However, the validation of these methods is so far mainly conducted in the assumed noise-free networks. Therefore, we still miss a clear understanding of how the prediction results would be affected if the observed network data is no longer accurate. In this paper, we comprehensively study the robustness of the existing link prediction algorithms in the real networks where some links are missing, fake or swapped with other links. We find that missing links are more destructive than fake and swapped links for prediction accuracy. An index is proposed to quantify the robustness of the link prediction methods. Among the twenty-two studied link prediction methods, we find that though some methods have low prediction accuracy, they tend to perform reliably in the "noisy" environment. PMID:26733156

  6. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  7. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, to date, such methods have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future. PMID:24691198
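
    A hedged sketch of the segment-then-fit idea on a synthetic eye image follows; Otsu thresholding stands in for the paper's self-tuning threshold, and a plain convex hull feeds OpenCV's ellipse fit in place of the dual-ellipse method.

```python
# Sketch: threshold -> largest contour -> convex hull -> ellipse fit.
# Otsu stands in for the paper's self-tuning threshold; image is synthetic.
import numpy as np
import cv2

img = np.full((240, 320), 200, np.uint8)                    # bright background
cv2.ellipse(img, (160, 120), (40, 28), 15, 0, 360, 30, -1)  # dark "pupil"

_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
pupil = max(contours, key=cv2.contourArea)                  # largest blob
hull = cv2.convexHull(pupil)                                # boundary points

(cx, cy), (major, minor), angle = cv2.fitEllipse(hull)
print(f"center=({cx:.1f},{cy:.1f}) axes=({major:.1f},{minor:.1f})")
```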

  8. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  9. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence

  10. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, due to the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. Thus, the uncertainty problem caused by lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated through numerical simulations and actual experiments. The simulation and experiment results and analysis show that attitude errors can be reduced effectively by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  11. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, due to the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. Thus, the uncertainty problem caused by lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated through numerical simulations and actual experiments. The simulation and experiment results and analysis show that attitude errors can be reduced effectively by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  12. A Fast and Robust Ellipse-Detection Method Based on Sorted Merging

    PubMed Central

    Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui

    2014-01-01

    A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. The method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of merging degree, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis to the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661

  13. Collaborative double robust targeted maximum likelihood estimation.

    PubMed

    van der Laan, Mark J; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q(0) in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum

  14. Collaborative Double Robust Targeted Maximum Likelihood Estimation*

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum

  15. Robust template matching using run-length encoding

    NASA Astrophysics Data System (ADS)

    Lee, Hunsue; Suh, Sungho; Cho, Hansang

    2015-09-01

    In this paper we propose a novel template matching algorithm for visual inspection of bare printed circuit boards (PCBs). In the conventional template matching for PCB inspection, the matching score and its relevant offsets are acquired by calculating the maximum value among the convolutions of the template image and the camera image. While the method is fast, the robustness and accuracy of matching are not guaranteed due to the gap between design and implementation resulting from defects and process variations. To resolve this problem, we suggest a new method which uses run-length encoding (RLE). For the template image to be matched, we accumulate foreground and background data, and RLE data for each row and column in the template image. Using these data, we can find the x and y offsets which minimize the optimization function. The efficiency and robustness of the proposed algorithm are verified through a series of experiments. Comparison with the conventional approach shows that the proposed algorithm is not only fast but also more robust and reliable in its matching results.
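
    As a rough illustration of the encoding step, the sketch below run-length encodes each row of a binary template image; the toy image and function name are hypothetical, and the paper's offset-search logic is not reproduced.

      import numpy as np

      def rle_rows(img):
          # run-length encode each row of a binary image as (value, length) pairs
          encoded = []
          for row in img:
              change = np.flatnonzero(np.diff(row)) + 1          # where the value flips
              starts = np.concatenate(([0], change))
              lengths = np.diff(np.concatenate((starts, [row.size])))
              encoded.append(list(zip(row[starts].tolist(), lengths.tolist())))
          return encoded

      template = np.array([[0, 0, 1, 1, 1, 0],
                           [0, 1, 1, 1, 1, 0]])
      print(rle_rows(template))
      # [[(0, 2), (1, 3), (0, 1)], [(0, 1), (1, 4), (0, 1)]]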

  16. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    Formulation is a very complex process which sometimes involves taking decisions about parameters or variables to obtain the best results in a context of high variability or uncertainty. Therefore, robust optimization tools can be very useful for obtaining high quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was treated as a noise variable, allowing the application of Taguchi techniques to obtain a response from the point of view of the signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. Responses evaluated were related to the drug release profile from beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out and the obtained results were very similar to those predicted in every profile. Results reveal that robust optimization is a very useful approach that achieves greater precision and accuracy with respect to the desired value. PMID:22645438
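
    For reference, the standard Taguchi signal-to-noise ratios that such an analysis relies on are straightforward to compute; the sketch below implements the three common variants (the example data are made up).

      import numpy as np

      def sn_smaller_is_better(y):
          # S/N = -10 log10(mean(y^2)); used when the response should be minimized
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y ** 2))

      def sn_larger_is_better(y):
          # S/N = -10 log10(mean(1/y^2)); used when the response should be maximized
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y ** 2))

      def sn_nominal_is_best(y):
          # S/N = 10 log10(mean^2 / variance); rewards hitting a target with low spread
          y = np.asarray(y, dtype=float)
          return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

      print(sn_nominal_is_best([49.8, 50.1, 50.3, 49.9]))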

  17. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvement of its computational efficiency but also wastes the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy and to overcome its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, the calculated points with correlation coefficients higher than a preset threshold are all taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and the multithread computing, superfast DIC analysis can be accomplished without sacrificing its robustness and accuracy. Experimental results show that the presented parallel DIC method performed on a common eight-core laptop can achieve about a 7 times speedup.
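
    The two-section scheme lends itself to a simple parallel pattern: correlate all points independently in a first pass, then defer low-confidence points to the sequential reliability-guided pass. The sketch below mimics that control flow with a toy ZNCC criterion; the subset size, threshold and data are hypothetical, and the sequential pass is left as a placeholder.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def zncc(a, b):
          # zero-normalized cross-correlation between two equal-size subsets
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return float(np.mean(a * b))

      rng = np.random.default_rng(1)
      # 1000 well-matched subset pairs plus 50 deliberately mismatched ones
      pairs = [(s, s) for s in (rng.random((21, 21)) for _ in range(1000))]
      pairs += [(rng.random((21, 21)), rng.random((21, 21))) for _ in range(50)]

      # section 1: one independent, parallel correlation pass over all points
      with ThreadPoolExecutor() as pool:
          scores = list(pool.map(lambda p: zncc(*p), pairs))

      # section 2: points below the reliability threshold would fall back to the
      # sequential reliability-guided search (placeholder here)
      threshold = 0.8
      retry = [i for i, s in enumerate(scores) if s < threshold]
      print(f"{len(retry)} of {len(pairs)} points deferred to the sequential pass")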

  18. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  19. Robust dynamical decoupling sequences for individual-nuclear-spin addressing

    NASA Astrophysics Data System (ADS)

    Casanova, J.; Wang, Z.-Y.; Haase, J. F.; Plenio, M. B.

    2015-10-01

    We propose the use of non-equally-spaced decoupling pulses for high-resolution selective addressing of nuclear spins by a quantum sensor. The analytical model of the basic operating principle is supplemented by detailed numerical studies that demonstrate the high degree of selectivity and the robustness against static and dynamic control-field errors of this scheme. We exemplify our protocol with a nitrogen-vacancy-center-based sensor to demonstrate that it enables the identification of individual nuclear spins that form part of a large spin ensemble.

  20. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Analysis of variance (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary. PMID:27544966

  1. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk averse optimisation criteria.
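
    The tail mean (the average of the worst scenario outcomes) admits a standard linear programming formulation in the style of Rockafellar and Uryasev's CVaR model. The sketch below minimises the mean of the worst beta-fraction of linear scenario costs over a probability simplex; the scenario matrix is made up, and this illustrates the general technique rather than the paper's specific constructions.

      import numpy as np
      from scipy.optimize import linprog

      def min_tail_mean(C, beta):
          # minimize the mean of the worst beta-fraction of scenario costs C @ x
          # over the probability simplex, via the Rockafellar-Uryasev LP
          m, n = C.shape
          k = max(int(np.ceil(beta * m)), 1)
          # decision variables: x (n), eta (1), u (m)
          c = np.concatenate([np.zeros(n), [1.0], np.full(m, 1.0 / k)])
          # constraints: C x - eta - u <= 0 and u >= 0
          A_ub = np.hstack([C, -np.ones((m, 1)), -np.eye(m)])
          b_ub = np.zeros(m)
          A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)]).reshape(1, -1)
          b_eq = [1.0]
          bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
          return res.x[:n], res.fun

      C = np.array([[3.0, 1.0], [1.0, 4.0], [2.0, 2.5]])   # scenario costs per decision
      x, val = min_tail_mean(C, beta=1 / 3)                # beta=1/3 -> worst single scenario
      print(x, val)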

  2. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
    - The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.
    - How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
    - How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.
    - How to adapt boosting to regression problems

  3. Fast Robust PCA on Graphs

    NASA Astrophysics Data System (ADS)

    Shahid, Nauman; Perraudin, Nathanael; Kalofolias, Vassilis; Puy, Gilles; Vandergheynst, Pierre

    2016-06-01

    Mining useful clusters from high dimensional data has received significant attention from the computer vision and pattern recognition community in recent years. Linear and non-linear dimensionality reduction has played an important role in overcoming the curse of dimensionality. However, such methods are often accompanied by three different problems: high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods) and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient and scalable for huge datasets with O(n log n) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data.

  4. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  5. A Robust, Microwave Rain Gauge

    NASA Astrophysics Data System (ADS)

    Mansheim, T. J.; Niemeier, J. J.; Kruger, A.

    2008-12-01

    Researchers at The University of Iowa have developed an all-electronic rain gauge that uses microwave sensors operating at either 10 GHz or 23 GHz, and measures the Doppler shift caused by falling raindrops. It is straightforward to interface these sensors with conventional data loggers, or integrate them into a wireless sensor network. A disadvantage of these microwave rain gauges is that they consume significant power when they are operating. However, this may be partially negated by using the sleep-wake-sleep mechanism of data loggers or sensor networks. Advantages of the microwave rain gauges are that one can make them very robust, they cannot clog, they don't have mechanical parts that wear out, and they don't have to be perfectly level. Prototype microwave rain gauges were collocated with tipping-bucket rain gauges, and data were collected for two seasons. At higher rain rates, microwave rain gauge measurements compare well with tipping-bucket measurements. At lower rain rates, the microwave rain gauges provide more detailed information than tipping buckets, which typically quantize measurements at 1 tip per 0.01 inch or 1 tip per mm of rainfall.

  6. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    PubMed

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-01-01

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers' misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors. PMID:27171081
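
    The core of the M-estimation step is an equivalent weight computed from the Mahalanobis distance of the filter innovation. The sketch below uses a Huber-type weight for illustration; the paper defines its own weight function, and the covariance values and threshold here are made up.

      import numpy as np

      def equivalent_weight(innovation, S, k0=1.345):
          # Huber-type weight from the Mahalanobis distance of the innovation;
          # a weight of 1 keeps the measurement, weights < 1 down-weight outliers
          d = float(np.sqrt(innovation @ np.linalg.solve(S, innovation)))
          return 1.0 if d <= k0 else k0 / d

      S = np.diag([0.04, 0.04])                            # innovation covariance (toy)
      print(equivalent_weight(np.array([0.1, -0.1]), S))   # small residual -> weight 1
      print(equivalent_weight(np.array([1.0,  0.9]), S))   # outlier -> weight << 1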

  7. MTC: A Fast and Robust Graph-Based Transductive Learning Method.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin

    2015-09-01

    Despite the great success of graph-based transductive learning methods, most of them have serious problems in scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have a polynomial time complexity. Moreover, we theoretically and empirically show that the performance of MTC is robust to the graph construction, overcoming another big problem of traditional graph-based methods. Extensive experiments on public data sets and applications on web-spam detection and interactive image segmentation demonstrate our method's advantages in terms of accuracy, speed, and robustness. PMID:25376047

  8. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor

    PubMed Central

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-01-01

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers’ misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors. PMID:27171081

  9. Robust lineage reconstruction from high-dimensional single-cell data.

    PubMed

    Giecold, Gregory; Marco, Eugenio; Garcia, Sara P; Trippa, Lorenzo; Yuan, Guo-Cheng

    2016-08-19

    Single-cell gene expression data provide invaluable resources for systematic characterization of cellular hierarchy in multi-cellular organisms. However, cell lineage reconstruction is still often associated with significant uncertainty due to technological constraints. Such uncertainties have not been taken into account in current methods. We present ECLAIR (Ensemble Cell Lineage Analysis with Improved Robustness), a novel computational method for the statistical inference of cell lineage relationships from single-cell gene expression data. ECLAIR uses an ensemble approach to improve the robustness of lineage predictions, and provides a quantitative estimate of the uncertainty of lineage branchings. We show that the application of ECLAIR to published datasets successfully reconstructs known lineage relationships and significantly improves the robustness of predictions. ECLAIR is a powerful bioinformatics tool for single-cell data analysis. It can be used for robust lineage reconstruction with quantitative estimate of prediction accuracy. PMID:27207878

  10. Comparing the accuracy of quantitative versus qualitative analyses of interim PET to prognosticate Hodgkin lymphoma: a systematic review protocol of diagnostic test accuracy

    PubMed Central

    Procházka, Vít; Klugar, Miloslav; Bachanova, Veronika; Klugarová, Jitka; Tučková, Dagmar; Papajík, Tomáš

    2016-01-01

    Introduction: Hodgkin lymphoma is an effectively treated malignancy, yet 20% of patients relapse or are refractory to front-line treatments with potentially fatal outcomes. Early detection of poor treatment responders is crucial for appropriate application of tailored treatment strategies. Tumour metabolic imaging of Hodgkin lymphoma using visual (qualitative) 18-fluorodeoxyglucose positron emission tomography (FDG-PET) is a gold standard for staging and final outcome assessment, but results gathered during the interim period are less accurate. Analysis of continuous metabolic–morphological data (quantitative) FDG-PET may enhance the robustness of interim disease monitoring, and help to improve treatment decision-making processes. The objective of this review is to compare diagnostic test accuracy of quantitative versus qualitative interim FDG-PET in the prognostication of patients with Hodgkin lymphoma.
    Methods: The literature on this topic will be reviewed in a 3-step strategy that follows methods described by the Joanna Briggs Institute (JBI). First, MEDLINE and EMBASE databases will be searched. Second, listed databases for published literature (MEDLINE, Tripdatabase, Pedro, EMBASE, the Cochrane Central Register of Controlled Trials and WoS) and unpublished literature (Open Grey, Current Controlled Trials, MedNar, ClinicalTrials.gov, Cos Conference Papers Index and International Clinical Trials Registry Platform of the WHO) will be queried. Third, 2 independent reviewers will analyse titles, abstracts and full texts, and perform a hand search of relevant studies, and then perform critical appraisal and data extraction from selected studies using the DATARI tool (JBI). If possible, a statistical meta-analysis will be performed on pooled sensitivity and specificity data gathered from the selected studies. Statistical heterogeneity will be assessed. Funnel plots, Begg's rank correlations and Egger's regression tests will be used to detect and/or correct publication

  11. Robust satisficing and the probability of survival

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2014-01-01

    Concepts of robustness are sometimes employed when decisions under uncertainty are made without probabilistic information. We present a theorem that establishes necessary and sufficient conditions for non-probabilistic robustness to be equivalent to the probability of satisfying the specified outcome requirements. When this holds, probability is enhanced (or maximised) by enhancing (or maximising) robustness. Two further theorems establish important special cases. These theorems have implications for success or survival under uncertainty. Applications to foraging and finance are discussed.
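
    In info-gap terms, the robustness of a decision is the largest horizon of uncertainty at which the outcome requirement is still met. A minimal sketch, with a made-up linear worst-case performance model:

      import numpy as np

      def robustness(performance, requirement, h_grid):
          # largest horizon of uncertainty h such that the worst-case performance
          # over the uncertainty set U(h) still satisfies the requirement
          feasible = [h for h in h_grid if performance(h) >= requirement]
          return max(feasible, default=0.0)

      # toy model: nominal return 10, worst case degrades linearly with h
      worst_case = lambda h: 10.0 - 4.0 * h
      print(robustness(worst_case, requirement=6.0, h_grid=np.linspace(0, 3, 301)))
      # -> 1.0, the largest h at which the requirement is still met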

  12. Robust Fixed-Structure Controller Synthesis

    NASA Technical Reports Server (NTRS)

    Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)

    2000-01-01

    The ability to develop an integrated control system design methodology for robust high performance controllers satisfying multiple design criteria and real world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.

  13. Robust Hypothesis Testing with α-Divergence

    NASA Astrophysics Data System (ADS)

    Gul, Gokhan; Zoubir, Abdelhak M.

    2016-09-01

    A robust minimax test for two composite hypotheses, which are determined by the neighborhoods of two nominal distributions with respect to a set of distances called α-divergence distances, is proposed. Sion's minimax theorem is adopted to characterize the saddle value condition. Least favorable distributions, the robust decision rule and the robust likelihood ratio test are derived. If the nominal probability distributions satisfy a symmetry condition, the design procedure is shown to be simplified considerably. The parameters controlling the degree of robustness are bounded from above, and the bounds are shown to result from the solution of a set of equations. The simulations performed evaluate and exemplify the theoretical derivations.
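
    For concreteness, one common convention for the α-divergence between discrete distributions (the Cressie-Read form) can be computed as below; conventions differ across the literature, and the toy example is not taken from the paper.

      import numpy as np

      def alpha_divergence(p, q, alpha):
          # Cressie-Read style alpha-divergence between discrete distributions;
          # valid for alpha not in {0, 1}
          p, q = np.asarray(p, float), np.asarray(q, float)
          return (np.sum(p ** alpha * q ** (1.0 - alpha)) - 1.0) / (alpha * (alpha - 1.0))

      p = [0.5, 0.3, 0.2]
      q = [0.4, 0.4, 0.2]
      print(alpha_divergence(p, q, alpha=2.0))   # reduces to a chi-square-type distance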

  14. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs j > i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
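
    The one-year pairing rule is the heart of the estimator and is easy to mimic. The sketch below computes a MIDAS-like median trend on synthetic daily data with a seasonal term and a step discontinuity; it omits the published estimator's outlier trimming, second median pass and uncertainty formula, and all parameters are illustrative.

      import numpy as np

      def midas_like_trend(t, x, pair_span=1.0, tol=0.01):
          # median of slopes from data pairs separated by ~pair_span (years),
          # in the spirit of MIDAS (simplified: no trimming, no second pass)
          slopes = []
          for i in range(len(t)):
              j = np.argmin(np.abs(t - (t[i] + pair_span)))
              if i != j and abs((t[j] - t[i]) - pair_span) < tol:
                  slopes.append((x[j] - x[i]) / (t[j] - t[i]))
          return np.median(slopes)

      t = np.arange(0, 5, 1 / 365.25)                  # 5 years of daily epochs
      x = 2.0 * t + 0.003 * np.sin(2 * np.pi * t)      # 2 mm/yr trend + seasonal term
      x += np.where(t > 2.5, 0.05, 0.0)                # undetected step discontinuity
      print(midas_like_trend(t, x))                    # close to 2.0 despite the step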

  15. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the Strategic Method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affects the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  16. Robust fluidic connections to freestanding microfluidic hydrogels

    PubMed Central

    Baer, Bradly B.; Larsen, Taylor S. H.

    2015-01-01

    Biomimetic scaffolds approaching physiological scale, whose size and large cellular load far exceed the limits of diffusion, require incorporation of a fluidic means to achieve adequate nutrient/metabolite exchange. This need has driven the extension of microfluidic technologies into the area of biomaterials. While construction of perfusable scaffolds is essentially a problem of microfluidic device fabrication, functional implementation of free-standing, thick-tissue constructs depends upon successful integration of external pumping mechanisms through optimized connective assemblies. However, a critical analysis to identify optimal materials/assembly components for hydrogel substrates has received little focus to date. This investigation addresses this issue directly by evaluating the efficacy of a range of adhesive and mechanical fluidic connection methods to gelatin hydrogel constructs based upon both mechanical property analysis and cell compatibility. Results identify a novel bioadhesive, comprised of two enzymatically modified gelatin compounds, for connecting tubing to hydrogel constructs that is both structurally robust and non-cytotoxic. Furthermore, outcomes from this study provide clear evidence that fluidic interconnect success varies with substrate composition (specifically hydrogel versus polydimethylsiloxane), highlighting not only the importance of selecting the appropriately tailored components for fluidic hydrogel systems but also that of encouraging ongoing, targeted exploration of this issue. The optimization of such interconnect systems will ultimately promote exciting scientific and therapeutic developments provided by microfluidic, cell-laden scaffolds. PMID:26045731

  17. Robustness evaluation of transactional audio watermarking systems

    NASA Astrophysics Data System (ADS)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

    Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer, but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept since it provides a useful mechanism to track illicit copies by persistently attaching property rights information to the material. Especially for online music distribution the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Former publications presented the concept of bitstream watermarking together with its complexity, audio quality and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread spectrum watermarking.

  18. Numerical robust stability estimation in milling process

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhu, Limin; Ding, Han; Xiong, Youlun

    2012-09-01

    The conventional prediction of milling stability has been extensively studied based on the assumption that the milling process dynamics are time invariant. However, nominal cutting parameters cannot guarantee the stability of the milling process at the shop floor level, since many uncertain factors exist in a practical manufacturing environment. This paper proposes a novel numerical method to estimate the upper and lower bounds of the stability lobe diagram, which is used to predict milling stability in a robust way by taking into account the uncertain parameters of the milling system. The time finite element method, a milling stability theory, is adopted as the conventional deterministic model. The uncertain dynamics parameters are dealt with by a non-probabilistic model in which the parameters with uncertainties are assumed to be bounded, with no need for probability density functions. By doing so, an interval stability lobe diagram is obtained instead of a deterministic one, which guarantees the stability of the milling process in an uncertain milling environment. In the simulations, the upper and lower bounds of the lobe diagram obtained from the changes of the modal parameters of the spindle-tool system and the cutting coefficients are given, respectively. The simulation results show that the proposed method is effective and can obtain satisfactory bounds of the lobe diagrams. The proposed method is helpful for practitioners on the shop floor when making decisions on machining parameter selection.

  19. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will be placed on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  20. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insight into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.

  1. Accuracy of numerically produced compensators.

    PubMed

    Thompson, H; Evans, M D; Fallone, B G

    1999-01-01

    A feasibility study is performed to assess the utility of a computer numerically controlled (CNC) mill to produce compensating filters for conventional clinical use and for the delivery of intensity-modulated beams. Computer aided machining (CAM) software is used to assist in the design and construction of such filters. Geometric measurements of stepped and wedged surfaces are made to examine the accuracy of surface milling. Molds are milled and filled with molten alloy to produce filters, and both the molds and filters are examined for consistency and accuracy. Results show that the deviation of the filter surfaces from design does not exceed 1.5%. The effective attenuation coefficient is measured for CadFree, a cadmium-free alloy, in a 6 MV photon beam. The effective attenuation coefficients at the depth of maximum dose (1.5 cm) and at 10 cm depth in a solid water phantom are found to be 0.546 cm^-1 and 0.522 cm^-1, respectively. Further attenuation measurements are made with Cerrobend to assess the variation of the effective attenuation coefficient with field size and source-surface distance. The ability of the CNC mill to accurately produce surfaces is verified with dose profile measurements in a 6 MV photon beam. The test phantom is composed of a 10-degree polystyrene wedge and a 30-degree polystyrene wedge, presenting both a sharp discontinuity and sloped surfaces. Dose profiles, measured at the depth of compensation (10 cm) beneath the test phantom and beneath a flat phantom, are compared to those produced by a commercial treatment planning system. Agreement between measured and predicted profiles is within 2%, indicating the viability of the system for filter production. PMID:10100166
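
    The effective attenuation coefficient quoted above follows from the exponential attenuation law; a minimal sketch (the intensities and thickness in the example are made up, chosen only to be self-consistent):

      import numpy as np

      def effective_attenuation(I0, I, thickness_cm):
          # I = I0 * exp(-mu * t)  =>  mu = ln(I0 / I) / t
          return np.log(I0 / I) / thickness_cm

      # self-consistent toy numbers: 2 cm of material transmitting exp(-0.546 * 2)
      I0 = 1000.0
      I = I0 * np.exp(-0.546 * 2.0)
      print(effective_attenuation(I0, I, 2.0))   # recovers 0.546 cm^-1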

  2. On the Robustness Properties of M-MRAC

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram

    2012-01-01

    The paper presents a performance and robustness analysis of the modified reference model MRAC (model reference adaptive control), or M-MRAC for short, which differs from conventional MRAC systems by feeding the tracking error back to the reference model. The tracking error feedback gain, in concert with the adaptation rate, provides an additional capability to regulate not only the transient performance of the tracking error, but also the transient performance of the control signal. This differs from conventional MRAC systems, in which the adaptation rate is the only tool for regulating the transient performance of the tracking error. It is shown that the selection of the feedback gain and the adaptation rate resolves the tradeoff between robustness and performance, in the sense that increasing the feedback gain improves the behavior of the adaptive control signal and hence the system's robustness to time delays (or unmodeled dynamics), while increasing the adaptation rate improves the tracking performance, i.e., the system's robustness to parametric uncertainties and external disturbances.
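
    A toy simulation makes the error-feedback idea concrete. The sketch below implements a scalar M-MRAC-style loop in which the reference model receives the tracking error through a gain nu; the plant, gains and adaptive law are illustrative choices, not the paper's design.

      import numpy as np

      # toy M-MRAC: first-order plant xdot = a*x + u with unknown a; the
      # reference model receives tracking-error feedback through the gain nu
      a_true, a_m, gamma, nu, dt = 2.0, 3.0, 100.0, 10.0, 1e-4
      x = xm = 0.0
      k_hat = 0.0
      r = lambda t: 1.0 if t < 5 else -1.0             # step reference signal

      for step in range(int(10 / dt)):
          t = step * dt
          e = x - xm                                   # tracking error
          u = -k_hat * x + r(t)                        # adaptive control law
          x += dt * (a_true * x + u)                   # plant (Euler step)
          xm += dt * (-a_m * xm + r(t) + nu * e)       # modified reference model
          k_hat += dt * (gamma * x * e)                # gradient adaptive law

      print(f"final tracking error {x - xm:.2e}, "
            f"k_hat {k_hat:.2f} (ideal {a_true + a_m})")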

  3. A robust model-based approach to diagnosing faults in air-handling units

    SciTech Connect

    Ngo, D.; Dexter, A.L.

    1999-07-01

    This paper describes the development of a robust model-based approach to diagnosing faults in air-handling units that avoids false alarms caused by sensor bias but does not require application-dependent thresholds to be selected. The diagnosis is based on a semi-qualitative analysis of the measured data using generic fuzzy reference models to describe the behavior of the equipment, with and without faults. The scheme is applied to the cooling-coil subsystem of an air-handling unit, and the sensitivity of the diagnosis to sensor bias and fault size is examined. The results of the diagnosis are compared to those obtained using reference models that describe the behavior of a specific design. The scheme is also used to commission the cooling-coil subsystem of an air-handling unit in an office building. Results are presented that demonstrate the proposed scheme does not generate false alarms in practice. It is concluded that the accuracy of sensors currently used means it is likely that only large faults can be detected in practice and that more accurate measurements are required if a higher level of fault sensitivity is needed.

  4. Correlation analysis for long time series by robustly estimated autoregressive stochastic processes

    NASA Astrophysics Data System (ADS)

    Schuh, Wolf-Dieter; Brockmann, Jan-Martin; Kargoll, Boris

    2015-04-01

    Modern sensors and satellite missions deliver huge data sets and long time series of observations. These data sets have to be handled with care because of changing correlations, conspicuous data and possible outliers. Tailored concepts for data selection and robust techniques to estimate the correlation characteristics allow for a better/optimal exploitation of the information in these measurements. In this presentation we give an overview of standard techniques for estimating correlations occurring in long time series, in the time domain as well as in the frequency domain. We discuss the pros and cons, especially with a focus on the intensified occurrence of conspicuous data and outliers. We present a concept to classify the measurements and isolate conspicuous data. We propose to describe the varying correlation behavior of the measurement series by an autoregressive stochastic process and give some hints on how to construct adaptive filters to decorrelate the measurement series and to handle the huge covariance matrices. As study object we use time series from gravity gradient data collected during the GOCE low orbit operation campaign (LOOC). Due to the low orbit, these data from 13-Jun-2014 to 21-Oct-2014 have more or less the same potential to recover the Earth's gravity field as all the data from the rest of the entire mission. Therefore these data are extraordinarily valuable but hard to handle, because of conspicuous data due to maneuvers during the orbit lowering phases, the overall increase in drag, saturation of the ion thrusters and other (currently) unexplained effects.
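
    Fitting an autoregressive process and using it as a whitening (decorrelation) filter is a standard recipe. The sketch below fits an AR(2) model via the Yule-Walker equations and decorrelates a simulated series; the model order and coefficients are illustrative and unrelated to the GOCE data.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def yule_walker(x, p):
          # fit AR(p) coefficients by solving the Yule-Walker equations
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])  # autocovariances
          return solve_toeplitz(r[:p], r[1:])

      def whiten(x, phi):
          # decorrelate with the fitted AR filter: e_t = x_t - sum_k phi_k x_{t-k}
          p = len(phi)
          x = np.asarray(x, float) - np.mean(x)
          return np.array([x[t] - phi @ x[t - p:t][::-1] for t in range(p, len(x))])

      rng = np.random.default_rng(0)
      e = rng.normal(size=20000)
      x = np.zeros_like(e)
      for t in range(2, len(e)):          # simulate AR(2): x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + e_t
          x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + e[t]

      phi = yule_walker(x, p=2)
      print(phi)                                       # close to [1.5, -0.7]
      resid = whiten(x, phi)
      print(np.corrcoef(resid[:-1], resid[1:])[0, 1])  # lag-1 correlation near zero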

  5. Robust Quantum-Based Interatomic Potentials for Multiscale Modeling in Transition Metals

    SciTech Connect

    Moriarty, J A; Benedict, L X; Glosli, J N; Hood, R Q; Orlikowski, D A; Patel, M V; Soderlind, P; Streitz, F H; Tang, M; Yang, L H

    2005-09-27

    First-principles generalized pseudopotential theory (GPT) provides a fundamental basis for transferable multi-ion interatomic potentials in transition metals and alloys within density-functional quantum mechanics. In the central bcc metals, where multi-ion angular forces are important to materials properties, simplified model GPT or MGPT potentials have been developed based on canonical d bands to allow analytic forms and large-scale atomistic simulations. Robust, advanced-generation MGPT potentials have now been obtained for Ta and Mo and successfully applied to a wide range of structural, thermodynamic, defect and mechanical properties at both ambient and extreme conditions. Selected applications to multiscale modeling discussed here include dislocation core structure and mobility, atomistically informed dislocation dynamics simulations of plasticity, and thermoelasticity and high-pressure strength modeling. Recent algorithm improvements have provided a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy and extension to f-electron actinide metals, an order of magnitude increase in computational speed for dynamic simulations, and the development of temperature-dependent potentials.

  6. Robust Quantum-Based Interatomic Potentials for Multiscale Modeling in Transition Metals

    SciTech Connect

    Moriarty, J A; Benedict, L X; Glosli, J N; Hood, R Q; Orlikowski, D A; Patel, M V; Soderlind, P; Streitz, F H; Tang, M; Yang, L H

    2005-03-25

    First-principles generalized pseudopotential theory (GPT) provides a fundamental basis for transferable multi-ion interatomic potentials in transition metals and alloys within density-functional quantum mechanics. In central bcc transition metals, where multi-ion angular forces are important to structural properties, simplified model GPT or MGPT potentials have been developed based on canonical d bands to allow analytic forms and large-scale atomistic simulations. Robust, advanced-generation MGPT potentials have now been obtained for Ta and Mo and successfully applied to a wide range of structural, thermodynamic, defect and mechanical properties at both ambient and extreme conditions. Selected applications to multiscale modeling discussed here include dislocation core structure and mobility, atomistically informed dislocation dynamics simulations of plasticity, and thermoelasticity and high-pressure strength modeling. Recent algorithm improvements have provided a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy and extension to f-electron actinide metals, an order of magnitude increase in computational speed for dynamic simulations, and the still-in-progress development of temperature-dependent potentials.

  7. A Robust Deep Model for Improved Classification of AD/MCI Patients

    PubMed Central

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-01-01

    Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
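
    The dropout technique referenced above randomly zeroes hidden activations during training. A minimal sketch of the standard "inverted dropout" forward pass (the shapes and keep probability are arbitrary):

      import numpy as np

      def dropout_forward(h, keep_prob, rng, training=True):
          # inverted dropout: randomly zero activations during training and
          # rescale by keep_prob, so no change is needed at test time
          if not training:
              return h
          mask = rng.random(h.shape) < keep_prob
          return h * mask / keep_prob

      rng = np.random.default_rng(0)
      h = rng.normal(size=(4, 8))          # a batch of hidden activations
      print(dropout_forward(h, keep_prob=0.5, rng=rng).round(2))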

  8. A Robust Deep Model for Improved Classification of AD/MCI Patients.

    PubMed

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-09-01

    Accurate classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight coadaptation, which is a typical cause of overfitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multitask learning strategy into the deep learning framework. We applied the proposed method to the ADNI dataset, and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998

  9. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

    In the evolutionary process, the random transmission and mutation of genes provide biological diversities for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to maintain network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  10. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.

  11. Optimal H∞ robust output feedback control for satellite formation in arbitrary elliptical reference orbits

    NASA Astrophysics Data System (ADS)

    Wei, Changzhu; Park, Sang-Young; Park, Chandeok

    2014-09-01

    A two degree-of-freedom signal-based optimal H∞ robust output feedback controller is designed for satellite formation in an arbitrary elliptical reference orbit. Based on high-fidelity linearized dynamics of relative motion, uncertainties introduced by non-zero eccentricity and gravitational J2 perturbation are separated to construct a robust control model. Furthermore, a distributed robust control model is derived by modifying the perturbed robust control model of each satellite with the eigenvalues of the Laplacian matrix of the communication graph, which represent uncertainty in the communication topology. A signal-based optimal H∞ robust controller is then designed primarily. Considering that the uncertainties involved in the distributed robust control model have a completely diagonal structure, the corresponding analyses are made through structured singular value theory to reduce the conservativeness. Based on simulation results, further designs including increasing the degrees of freedom of the controller, modifying the performance and control weighted functions, adding a post high-pass filter according to the dynamic characteristics, and reducing the control model are made to improve the control performance. Nonlinear simulations demonstrate that the resultant optimal H∞ robust output feedback controller satisfies the robust performance requirements under uncertainties caused by non-zero eccentricity, J2 perturbation, and varying communication topology, and that 5 m accuracy in terms of stable desired formation configuration can be achieved by the presented optimal H∞ robust controller. In addition to considering the widely discussed uncertainties caused by the orbit of each satellite in a formation, the optimal H∞ robust output feedback control model presented in the current work considers the uncertainties caused by varying communication topology in the satellite formation that works in a cooperative way. Other new improvements include adopting a

  12. Southeast Asia: 'A robust market'

    SciTech Connect

    Pagano, S.S.

    1997-04-01

    Southeast Asia is emerging as a robust market for exploration and field development activities. While much of the worldwide attention is focused on lucrative deep water drilling and production in the U.S. Gulf of Mexico, Brazil, and West Africa, the burgeoning Pacific Rim region is very much in the spotlight as the industry approaches the next century. Southeast Asia is a key growth area that will be the focus of extensive drilling and development. Regional licensing activity is buoyant as oil and gas companies continue to express interest in Southeast Asian opportunities. During 1996, about 75 new license awards were granted. This year, at least an equal number of licenses likely will be awarded to international major and independent oil companies. In the past five years, the number of production-sharing contracts and concessions awarded declined slightly as oil companies apparently opted to invest in other foreign markets. Brunei government officials plan to open offshore areas to licensing in 1997, including what may prove to be attractive deep water areas. Indonesia's state oil company Pertamina will offer 26 offshore tracts under production-sharing and technical assistance contracts this year. Malaysia expects to attract international interest in some 30 blocks it will soon offer under production-sharing terms. Bangladesh expects to call for tenders for an unspecified number of concessions later this year. Nearby, bids were submitted earlier this year to the Australian government for rights to explore 38 offshore areas. Results are expected to be announced by mid-year.

  13. Noise and Robustness in Phyllotaxis

    PubMed Central

    Mirabet, Vincent; Besnard, Fabrice; Vernoux, Teva; Boudaoud, Arezki

    2012-01-01

    A striking feature of vascular plants is the regular arrangement of lateral organs on the stem, known as phyllotaxis. The most common phyllotactic patterns can be described using spirals, numbers from the Fibonacci sequence and the golden angle. This rich mathematical structure, along with the experimental reproduction of phyllotactic spirals in physical systems, has led to a view of phyllotaxis focusing on regularity. However all organisms are affected by natural stochastic variability, raising questions about the effect of this variability on phyllotaxis and the achievement of such regular patterns. Here we address these questions theoretically using a dynamical system of interacting sources of inhibitory field. Previous work has shown that phyllotaxis can emerge deterministically from the self-organization of such sources and that inhibition is primarily mediated by the depletion of the plant hormone auxin through polarized transport. We incorporated stochasticity in the model and found three main classes of defects in spiral phyllotaxis – the reversal of the handedness of spirals, the concomitant initiation of organs and the occurrence of distichous angles – and we investigated whether a secondary inhibitory field filters out defects. Our results are consistent with available experimental data and yield a prediction of the main source of stochasticity during organogenesis. Our model can be related to cellular parameters and thus provides a framework for the analysis of phyllotactic mutants at both cellular and tissular levels. We propose that secondary fields associated with organogenesis, such as other biochemical signals or mechanical forces, are important for the robustness of phyllotaxis. More generally, our work sheds light on how a target pattern can be achieved within a noisy background. PMID:22359496
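
    The class of model described above, interacting sources of an inhibitory field with added noise, can be caricatured in a few lines. The sketch below places each new organ at the angular minimum of a summed, age-weighted inhibition field; the field shape, decay constant and noise level are arbitrary illustrative choices, not the parameters of the paper's model.

      import numpy as np

      rng = np.random.default_rng(2)
      theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
      angles = [0.0]
      decay, noise_level = 0.7, 0.05          # inhibition memory and noise strength

      for _ in range(29):
          field = np.zeros_like(theta)
          for age, phi in enumerate(reversed(angles)):
              d = np.abs(np.angle(np.exp(1j * (theta - phi))))  # angular distance
              field += decay ** age / (d + 0.05)                # older organs inhibit less
          field += noise_level * rng.normal(size=field.size)    # stochastic perturbation
          angles.append(theta[np.argmin(field)])                # next organ at the minimum

      divergence = np.degrees(np.diff(angles)) % 360.0
      # successive divergence angles; raising noise_level produces defects such
      # as handedness reversals of the kind discussed in the abstract
      print(np.round(divergence, 1))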

  14. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

    Two important problems in the area of control systems design and analysis are discussed. The first is robust stability analysis based on the characteristic polynomial, treated first in coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by choosing a better stabilizing controller. The second problem is lower order stabilization, whose motivation is as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high order controllers. In practice, stabilization is only one of many requirements to be satisfied, so if the order of a stabilizing controller is excessively high, one can normally expect an even higher order controller upon completion of the design, once additional requirements such as dynamic response are included. It is therefore reasonable to design the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower order stabilizing controller is given. The algorithm does not necessarily produce the minimum order controller; however, it is theoretically sound, and simulation results show that it works in general.
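
    For concreteness, a brute-force sketch of the coefficient-space idea (random directions plus bisection rather than the report's closed-form margin expression), using a hypothetical monic stable polynomial:

      import numpy as np

      rng = np.random.default_rng(1)
      p0 = np.array([1.0, 4.0, 6.0, 4.0])   # s^3 + 4 s^2 + 6 s + 4, Hurwitz stable

      def stable(p):
          """All roots strictly in the open left half plane."""
          return bool(np.all(np.roots(p).real < 0.0))

      def l2_margin(p0, n_dirs=2000, hi=10.0, tol=1e-4):
          """Smallest destabilizing l2 perturbation of the non-leading (monic)
          coefficients, estimated by sampled directions and radius bisection."""
          worst = hi
          for _ in range(n_dirs):
              d = rng.normal(size=p0.size - 1)
              d /= np.linalg.norm(d)
              lo_r, hi_r = 0.0, hi
              if stable(p0 + np.r_[0.0, d * hi_r]):
                  continue          # this direction never destabilizes within hi
              while hi_r - lo_r > tol:
                  mid = 0.5 * (lo_r + hi_r)
                  if stable(p0 + np.r_[0.0, d * mid]):
                      lo_r = mid
                  else:
                      hi_r = mid
              worst = min(worst, lo_r)
          return worst

      print("estimated l2 stability margin: %.3f" % l2_margin(p0))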

  15. Design for robustness of unique, multi-component engineering systems

    NASA Astrophysics Data System (ADS)

    Shelton, Kenneth A.

    2007-12-01

    design concept. These allele values are unitless themselves, but map to both configuration descriptions and attribute values. The Value Distance and Component Distance are metrics that measure the relative differences between two design concepts using the allele values, and all differences in a population of design concepts are calculated relative to a reference design, called the "base design". The base design is the top-ranked member of the population in weighted terms of robustness and performance. Robustness is determined based on the change in multi-objective performance as Value Distance and Component Distance (and thus differences in design) increase. It is assessed as acceptable if differences in design configurations up to specified tolerances result in performance changes that remain within a specified performance range. The design configuration difference tolerances and performance range together define the designer's risk management preferences for the final design concepts. Additionally, a complementary visualization capability was developed, called the "Design Solution Topography". This concept allows the visualization of a population of design concepts, and is a 3-axis plot where each point represents an entire design concept. The axes are the Value Distance, Component Distance and Performance Objective. The key benefit of the Design Solution Topography is that it allows the designer to visually identify and interpret the overall robustness of the current population of design concepts for a particular performance objective. In a multi-objective problem, each performance objective has its own Design Solution Topography view. These new concepts are implemented in an evolutionary computation-based conceptual designing method called the "Design for Robustness Method" that produces robust design concepts. The design procedures associated with this method enable designers to evaluate and ensure robustness in selected designs that also perform within a desired
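
    Under assumed definitions (the abstract gives no formulas), the two metrics can be sketched as a Euclidean distance between allele-value vectors and a count of differing configuration choices, both measured against the base design; all names and numbers below are illustrative only.

      import numpy as np

      # hypothetical population: allele values plus discrete configuration choices
      population = {
          "base": {"alleles": np.array([0.20, 0.70, 0.50]), "config": ("A", "X", "P")},
          "alt1": {"alleles": np.array([0.25, 0.70, 0.40]), "config": ("A", "Y", "P")},
          "alt2": {"alleles": np.array([0.90, 0.10, 0.50]), "config": ("B", "Y", "Q")},
      }

      base = population["base"]
      for name, design in population.items():
          vd = float(np.linalg.norm(design["alleles"] - base["alleles"]))     # Value Distance
          cd = sum(a != b for a, b in zip(design["config"], base["config"]))  # Component Distance
          print(f"{name}: value distance={vd:.3f}, component distance={cd}")

      # plotting (value distance, component distance, performance) per design
      # gives one point per design concept, in the spirit of the Design
      # Solution Topography described above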

  16. Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.

    PubMed

    Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun

    2016-07-11

    A novel method based on the Pulse Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust compass information calculation from polarized skylight imaging, showing good accuracy and reliability especially under cloudy weather, surrounding shielding and moonlight. The degree of polarization (DOP) combined with the angle of polarization (AOP), calculated from the full sky polarization image, were used for the compass information calculation. Due to its high sensitivity to the environment, the DOP was used to judge the destruction of polarized information using the PCNN algorithm. Only areas with high AOP accuracy were kept after the DOP PCNN filtering, thereby greatly increasing the compass accuracy and robustness. The experimental results showed that the compass accuracy was 0.1805° under clear weather. The method was also proven to be applicable under conditions of shielding by clouds, trees and buildings, with a compass accuracy better than 1°. With weak polarization information sources, such as moonlight, the method was shown experimentally to have an accuracy of 0.878°. PMID:27410853
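
    The polarization quantities involved can be sketched as follows; the image data are synthetic, and a plain DOP threshold stands in for the paper's PCNN-based filtering of unreliable pixels.

      import numpy as np

      rng = np.random.default_rng(2)
      shape = (64, 64)
      # synthetic intensities through polarizers at 0/45/90/135 degrees;
      # a real system would use the four polarization camera channels
      I0, I45, I90, I135 = (rng.uniform(0.2, 1.0, shape) for _ in range(4))

      # linear Stokes parameters
      S0 = 0.5 * (I0 + I45 + I90 + I135)
      S1 = I0 - I90
      S2 = I45 - I135

      dop = np.sqrt(S1**2 + S2**2) / S0    # degree of (linear) polarization
      aop = 0.5 * np.arctan2(S2, S1)       # angle of polarization, radians

      reliable = dop > 0.1                 # hypothetical reliability threshold
      print("kept %.1f%% of pixels for heading estimation" % (100.0 * reliable.mean()))
      # the compass heading would then be estimated from the AOP pattern
      # over the reliable pixels only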

  17. Development of a robust, sensitive and selective liquid chromatography-tandem mass spectrometry assay for the quantification of the novel macrocyclic peptide kappa opioid receptor antagonist [D-Trp]CJ-15,208 in plasma and application to an initial pharmacokinetic study.

    PubMed

    Khaliq, Tanvir; Williams, Todd D; Senadheera, Sanjeewa N; Aldrich, Jane V

    2016-08-15

    Selective kappa opioid receptor (KOR) antagonists may have therapeutic potential as treatments for substance abuse and mood disorders. Since [D-Trp]CJ-15,208 (cyclo[Phe-d-Pro-Phe-d-Trp]) is a novel potent KOR antagonist in vivo, it is imperative to evaluate its pharmacokinetic properties to assist the development of analogs as potential therapeutic agents, necessitating the development and validation of a quantitative method for determining its plasma levels. A method for quantifying [D-Trp]CJ-15,208 in mouse plasma was developed employing high performance liquid chromatography-tandem mass spectrometry. Sample preparation was accomplished through a simple one-step protein precipitation method with acetonitrile, and [D-Trp]CJ-15,208 was analyzed following HPLC separation on a Hypersil BDS C8 column. Multiple reaction monitoring (MRM), based on the transitions m/z 578.1→217.1 and 245.0, was specific for [D-Trp]CJ-15,208, and MRM based on the transition m/z 566.2→232.9 was specific for the internal standard without interference from endogenous substances in blank mouse plasma. The assay was linear over the concentration range 0.5-500 ng/mL with a mean r(2)=0.9987. The mean inter-day accuracy and precision for all calibration standards were 93-118% and 8.9%, respectively. The absolute recoveries were 85±6% and 81±9% for [D-Trp]CJ-15,208 and the internal standard, respectively. The analytical method had excellent sensitivity with a lower limit of quantification of 0.5 ng/mL using a sample volume of 20 μL. The method was successfully applied to an initial pharmacokinetic study of [D-Trp]CJ-15,208 following intravenous administration to mice. PMID:27318293
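
    The calibration arithmetic behind such an assay can be sketched as follows; the numbers are illustrative, and the 1/x weighting is an assumption rather than a detail stated in the abstract.

      import numpy as np

      conc = np.array([0.5, 1, 5, 10, 50, 100, 500])                 # ng/mL standards
      ratio = np.array([0.011, 0.021, 0.10, 0.21, 1.02, 2.05, 10.3]) # analyte/IS peak-area ratio

      # 1/x-weighted linear fit; np.polyfit weights multiply residuals before
      # squaring, so sqrt(1/x) gives 1/x weighting of the squared residuals
      w = np.sqrt(1.0 / conc)
      slope, intercept = np.polyfit(conc, ratio, 1, w=w)

      back = (ratio - intercept) / slope       # back-calculated concentrations
      accuracy = 100.0 * back / conc           # % of nominal, per standard
      print("slope=%.4f intercept=%.4f" % (slope, intercept))
      print("back-calculated accuracy (%):", np.round(accuracy, 1))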

  18. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.

    2015-02-01

    This paper discusses a methodology to evaluate the precision and the accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric capabilities of this emerging mapping technology have been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics fit datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system performance in a challenging historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, leading to the conclusion that the standard normal parameters are not adequate to assess this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
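
    The statistical contrast at the heart of this workflow can be sketched on a synthetic asymmetric error sample, with the median and normalized MAD as assumed robust, non-parametric estimators:

      import numpy as np

      rng = np.random.default_rng(3)
      errors = np.concatenate([rng.normal(0.0, 0.02, 950),     # well-behaved errors (m)
                               rng.exponential(0.3, 50)])      # one-sided gross errors

      mean, std = errors.mean(), errors.std(ddof=1)
      median = np.median(errors)
      nmad = 1.4826 * np.median(np.abs(errors - median))       # normalized MAD

      print("Gaussian:       mean=%.4f   std=%.4f" % (mean, std))
      print("Non-parametric: median=%.4f NMAD=%.4f" % (median, nmad))
      # with asymmetric outliers the mean and std are inflated, while the
      # median and NMAD describe the bulk of the error distribution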

  19. Normal human CD4(+) helper T cells express Kv1.1 voltage-gated K(+) channels, and selective Kv1.1 block in T cells induces by itself robust TNFα production and secretion and activation of the NFκB non-canonical pathway.

    PubMed

    Fellerhoff-Losch, Barbara; Korol, Sergiy V; Ganor, Yonatan; Gu, Songhai; Cooper, Itzik; Eilam, Raya; Besser, Michal; Goldfinger, Meidan; Chowers, Yehuda; Wank, Rudolf; Birnir, Bryndis; Levite, Mia

    2016-03-01

    TNFα is a very potent and pleiotropic pro-inflammatory cytokine, essential to the immune system for eradicating cancer and microorganisms, and to the nervous system, for brain development and ongoing function. Yet, excess and/or chronic TNFα secretion causes massive tissue damage in autoimmune, inflammatory and neurological diseases and injuries. Therefore, many patients with autoimmune/inflammatory diseases receive anti-TNFα medications. TNFα is secreted primarily by CD4(+) T cells, macrophages, monocytes, neutrophils and NK cells, mainly after immune stimulation. Yet, the cause for the pathologically high and chronic TNFα secretion is unknown. Can blocking of a particular ion channel in T cells induce by itself TNFα secretion? Such a phenomenon had never been observed or even hypothesized. In this interdisciplinary study we discovered that: (1) normal human T cells express Kv1.1 voltage-gated potassium channel mRNA, and the Kv1.1 membrane-anchored protein channel; (2) Kv1.1 is expressed in most CD4(+)CD3(+) helper T cells (mean CD4(+)CD3(+)Kv1.1(+) T cells of 7 healthy subjects: 53.09 ± 22.17 %), but not in CD8(+)CD3(+) cytotoxic T cells (mean CD8(+)CD3(+)Kv1.1(+) T cells: 4.12 ± 3.04 %); (3) electrophysiological whole-cell recordings in normal human T cells revealed Kv currents; (4) Dendrotoxin-K (DTX-K), a highly selective Kv1.1 blocker derived from snake toxin, increases the rate of rise and decay of Kv currents in both resting and activated T cells, without affecting the peak current; (5) DTX-K by itself induces robust TNFα production and secretion by normal human T cells, without elevating IFNγ, IL-4 and IL-10; (6) intact Ca(2+) channels are required for DTX-induced TNFα secretion; (7) selective anti-Kv1.1 antibodies also induce by themselves TNFα secretion; (8) DTX-K activates NFκB in normal human T cells via the unique non-canonical pathway; (9) injection of Kv1.1-blocked human T cells into SCID mice causes recruitment of resident mouse cells

  20. Food Label Accuracy of Common Snack Foods

    PubMed Central

    Jumpertz, Reiner; Venti, Colleen A; Le, Duc Son; Michaels, Jennifer; Parrington, Shannon; Krakoff, Jonathan; Votruba, Susanne

    2012-01-01

    Nutrition labels have raised awareness of the energetic value of foods, and represent for many a pivotal guideline to regulate food intake. However, recent data have created doubts about label accuracy. Therefore, we tested label accuracy for energy and macronutrient content of prepackaged energy-dense snack food products. We measured “true” caloric content of 24 popular snack food products in the U.S. and determined macronutrient content in 10 selected items. Bomb calorimetry and food factors were used to estimate energy content. Macronutrient content was determined according to Official Methods of Analysis. Calorimetric measurements were performed in our metabolic laboratory between April 20th and May 18th and macronutrient content was measured between September 28th and October 7th of 2010. Serving size, by weight, exceeded label statements by 1.2% [median] (25th percentile −1.4, 75th percentile 4.3, p=0.10). When differences in serving size were accounted for, metabolizable calories were 6.8 kcal (0.5, 23.5, p=0.0003) or 4.3% (0.2, 13.7, p=0.001) higher than the label statement. In a small convenience sample of the tested snack foods, carbohydrate content exceeded label statements by 7.7% (0.8, 16.7, p=0.01); however fat and protein content were not significantly different from label statements (−12.8% [−38.6, 9.6], p=0.23; 6.1% [−6.1, 17.5], p=0.32). Carbohydrate content explained 40% and serving size an additional 55% of the excess calories. Among a convenience sample of energy-dense snack foods, caloric content is higher than stated on the nutrition labels, but overall well within FDA limits. This discrepancy may be explained by inaccurate carbohydrate content and serving size. PMID:23505182
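
    A sketch of this style of analysis, with synthetic numbers and a Wilcoxon signed-rank test assumed as the paired, non-parametric test:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      label = rng.uniform(100, 300, 24)                       # labeled kcal per serving
      measured = label * (1.0 + rng.normal(0.04, 0.06, 24))   # ~4% median excess, synthetic

      pct_diff = 100.0 * (measured - label) / label
      q25, q50, q75 = np.percentile(pct_diff, [25, 50, 75])
      stat, p = stats.wilcoxon(measured, label)               # paired non-parametric test
      print("median excess %.1f%% (IQR %.1f to %.1f), p=%.4f" % (q50, q25, q75, p))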

  1. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  2. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  3. The Utility of Robust Means in Statistics

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…
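
    A minimal numerical illustration of the point being made (data invented for the example): one gross outlier moves the sample mean substantially, while the median and a 20% trimmed mean barely move.

      import numpy as np
      from scipy import stats

      clean = np.array([4.1, 4.3, 4.4, 4.6, 4.7, 4.8, 5.0, 5.1, 5.2, 5.4])
      dirty = np.append(clean, 50.0)      # one gross outlier

      for name, x in (("clean", clean), ("with outlier", dirty)):
          print("%-13s mean=%6.2f  median=%5.2f  20%% trimmed mean=%5.2f"
                % (name, x.mean(), np.median(x), stats.trim_mean(x, 0.2)))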

  4. Robust Estimates of Location and Dispersion.

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    This paper gives concise descriptions of a robust location statistic, the remedian of P. Rousseeuw and G. Bassett (1990) and a robust measure of dispersion, the "Sn" of P. Rousseeuw and C. Croux (1993). The use of Sn in least absolute errors regression (L1) is discussed, and BASIC programs for both statistics are provided. The remedian is an…
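
    Minimal Python sketches of the two statistics (naive implementations for illustration; the cited papers give efficient algorithms and the exact low/high-median conventions for Sn):

      import numpy as np

      def remedian(x, base=11, depth=3):
          """Remedian: medians of base-sized blocks, repeated over passes."""
          x = np.asarray(x, dtype=float)
          for _ in range(depth):
              n = (len(x) // base) * base
              if n == 0:
                  break
              x = np.median(x[:n].reshape(-1, base), axis=1)
          return float(np.median(x))

      def sn(x):
          """Sn ~ 1.1926 * med_i med_j |x_i - x_j| (naive O(n^2) version)."""
          x = np.asarray(x, dtype=float)
          diffs = np.abs(x[:, None] - x[None, :])
          return 1.1926 * np.median(np.median(diffs, axis=1))

      rng = np.random.default_rng(5)
      data = np.append(rng.normal(0, 1, 200), [25.0, 30.0])   # two gross outliers
      print("remedian: %.3f   Sn: %.3f" % (remedian(data), sn(data)))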

  5. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane. PMID:26513777
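
    The cited scale-correction idea can be sketched as follows: monocular SFM recovers translation only up to scale, so the ratio of the true camera height to the estimated height of the ground plane fixes the metric scale. The plane parameters and translation below are illustrative, not from the paper's pipeline.

      import numpy as np

      h_true = 1.70                        # known camera height above ground (m)
      # estimated ground plane in the camera frame: n . X + d = 0, with unit n
      n = np.array([0.0, -1.0, 0.02]); n /= np.linalg.norm(n)
      d = 0.85                             # plane offset in SFM (arbitrary) units

      h_sfm = abs(d)                       # camera-to-plane distance at the origin
      scale = h_true / h_sfm               # metric scale factor
      t_sfm = np.array([0.01, 0.0, 0.42])  # frame-to-frame translation (SFM units)
      print("metric translation (m):", np.round(scale * t_sfm, 3))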

  6. Environmental change makes robust ecological networks fragile

    USGS Publications Warehouse

    Strona, Giovanni; Lafferty, Kevin D.

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers’ tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host–parasite networks would respond to historical, random and expected extinction sequences. In both cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems.

  7. Evaluating efficiency and robustness in cilia design

    NASA Astrophysics Data System (ADS)

    Guo, Hanliang; Kanso, Eva

    2016-03-01

    Motile cilia are used by many eukaryotic cells to transport flow. Cilia-driven flows are important to many physiological functions, yet a deep understanding of the interplay between the mechanical structure of cilia and their physiological functions in healthy and diseased conditions remains elusive. To develop such an understanding, one needs a quantitative framework to assess cilia performance and robustness when subject to perturbations in the cilia apparatus. Here we link cilia design (beating patterns) to function (flow transport) in the context of experimentally and theoretically derived cilia models. We particularly examine the optimality and robustness of cilia design. Optimality refers to efficiency of flow transport, while robustness is defined as low sensitivity to variations in the design parameters. We find that suboptimal designs can be more robust than optimal ones. That is, designing for the most efficient cilium does not guarantee robustness. These findings have significant implications on the understanding of cilia design in artificial and biological systems.

  8. Evaluating efficiency and robustness in cilia design.

    PubMed

    Guo, Hanliang; Kanso, Eva

    2016-03-01

    Motile cilia are used by many eukaryotic cells to transport flow. Cilia-driven flows are important to many physiological functions, yet a deep understanding of the interplay between the mechanical structure of cilia and their physiological functions in healthy and diseased conditions remains elusive. To develop such an understanding, one needs a quantitative framework to assess cilia performance and robustness when subject to perturbations in the cilia apparatus. Here we link cilia design (beating patterns) to function (flow transport) in the context of experimentally and theoretically derived cilia models. We particularly examine the optimality and robustness of cilia design. Optimality refers to efficiency of flow transport, while robustness is defined as low sensitivity to variations in the design parameters. We find that suboptimal designs can be more robust than optimal ones. That is, designing for the most efficient cilium does not guarantee robustness. These findings have significant implications on the understanding of cilia design in artificial and biological systems. PMID:27078459

  9. Environmental change makes robust ecological networks fragile

    PubMed Central

    Strona, Giovanni; Lafferty, Kevin D.

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers' tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host–parasite networks would respond to historical, random and expected extinction sequences. In both cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems. PMID:27511722

  10. Environmental change makes robust ecological networks fragile.

    PubMed

    Strona, Giovanni; Lafferty, Kevin D

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers' tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host-parasite networks would respond to historical, random and expected extinction sequences. In both cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems. PMID:27511722

  11. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatment of the info-gap problems using the adjoint methodology is outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
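
    A sampling-based sketch of the robustness question for an Ax = b model, with an assumed entrywise uncertainty model and performance tolerance; the report's contribution is an adjoint method that avoids precisely this sampling cost.

      import numpy as np

      rng = np.random.default_rng(6)
      A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      x0 = np.linalg.solve(A0, b)
      tol = 0.05                               # acceptable change in ||x||

      def worst_case(alpha, n=4000):
          """Largest deviation of ||x(A)|| from ||x0|| over sampled A whose
          entries deviate from A0 by at most alpha (assumed info-gap model)."""
          worst = 0.0
          for _ in range(n):
              A = A0 + rng.uniform(-alpha, alpha, A0.shape)
              x = np.linalg.solve(A, b)
              worst = max(worst, abs(np.linalg.norm(x) - np.linalg.norm(x0)))
          return worst

      # robustness: the largest uncertainty horizon alpha whose worst-case
      # performance change stays within tolerance (coarse grid search here)
      alphas = np.linspace(0.0, 0.5, 26)
      feasible = [a for a in alphas if worst_case(a) <= tol]
      print("estimated robustness: %.2f" % max(feasible))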

  12. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high breakdown point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
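
    The effect of a robust loss in static positioning from ranges can be sketched as follows; SciPy's soft_l1 loss stands in for the paper's two-step MM estimator, and the beacon geometry and noise are synthetic.

      import numpy as np
      from scipy.optimize import least_squares

      beacons = np.array([[0, 0, 3], [4, 0, 3], [4, 4, 3], [0, 4, 3], [2, 2, 3.5]])
      p_true = np.array([1.0, 2.5, 1.2])

      rng = np.random.default_rng(7)
      ranges = np.linalg.norm(beacons - p_true, axis=1) + rng.normal(0, 0.01, 5)
      ranges[1] += 1.5           # one outlier, e.g. a blocked line of sight

      def residuals(p):
          return np.linalg.norm(beacons - p, axis=1) - ranges

      x0 = np.array([2.0, 2.0, 1.0])
      ls = least_squares(residuals, x0)                               # plain least squares
      rob = least_squares(residuals, x0, loss="soft_l1", f_scale=0.05)  # robust loss
      print("LS error (m):     %.3f" % np.linalg.norm(ls.x - p_true))
      print("robust error (m): %.3f" % np.linalg.norm(rob.x - p_true))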

  13. Accuracy considerations in the computational analysis of jet noise

    NASA Technical Reports Server (NTRS)

    Scott, James N.

    1993-01-01

    The application of computational fluid dynamics methods to the analysis of problems in aerodynamic noise has resulted in the extension and adaptation of conventional CFD to the discipline now referred to as computational aeroacoustics (CAA). In the analysis of jet noise accurate resolution of a wide range of spatial and temporal scales in the flow field is essential if the acoustic far field is to be predicted. The numerical simulation of unsteady jet flow has been successfully demonstrated and many flow features have been computed with reasonable accuracy. Grid refinement and increased solution time are discussed as means of improving accuracy of Navier-Stokes solutions of unsteady jet flow. In addition various properties of different numerical procedures which influence accuracy are examined with particular emphasis on dispersion and dissipation characteristics. These properties are investigated by using selected schemes to solve model problems for the propagation of a shock wave and a sinusoidal disturbance. The results are compared for the different schemes.
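
    The dissipation and dispersion comparison on a model problem can be sketched as follows: first-order upwind versus Lax-Wendroff on linear advection of a sinusoid, with illustrative parameters.

      import numpy as np

      nx, c, steps = 100, 0.5, 200          # grid points, CFL number, time steps
      x = np.linspace(0.0, 2.0*np.pi, nx, endpoint=False)
      dx = x[1] - x[0]
      u0 = np.sin(4.0 * x)                  # 25 points per wavelength

      up, lw = u0.copy(), u0.copy()
      for _ in range(steps):
          up = up - c*(up - np.roll(up, 1))                               # first-order upwind
          lw = (lw - 0.5*c*(np.roll(lw, -1) - np.roll(lw, 1))
                   + 0.5*c*c*(np.roll(lw, -1) - 2.0*lw + np.roll(lw, 1))) # Lax-Wendroff

      exact = np.sin(4.0*(x - c*steps*dx))  # unit advection speed
      for name, u in (("upwind", up), ("Lax-Wendroff", lw)):
          err = np.sqrt(np.mean((u - exact)**2))
          print("%-12s amplitude=%.3f  L2 error=%.3f" % (name, u.max(), err))
      # the upwind result loses amplitude (dissipation); Lax-Wendroff keeps
      # amplitude but accumulates a phase lag (dispersion)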

  14. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
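
    A minimal sketch of a scheme in this family (a minmod-limited MUSCL update for linear advection, not the specific stencil selection of the paper): the limited slopes keep the update TVD, so a square pulse is advected without new over- or undershoots.

      import numpy as np

      def minmod(a, b):
          return np.where(a*b > 0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

      nx, c, steps = 200, 0.4, 200          # cells, CFL number (0 < c <= 1), steps
      i = np.arange(nx)
      u = np.where((i > 50) & (i < 100), 1.0, 0.0)   # square pulse

      for _ in range(steps):
          s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope per cell
          flux = u + 0.5*(1.0 - c)*s        # value at the right face (advection speed a = 1)
          u = u - c*(flux - np.roll(flux, 1))

      print("min=%.4f max=%.4f (TVD: no new extrema)" % (u.min(), u.max()))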

  15. Hierarchical feature selection for erythema severity estimation

    NASA Astrophysics Data System (ADS)

    Wang, Li; Shi, Chenbo; Shu, Chang

    2014-10-01

    At present, the PASI scoring system is used for evaluating erythema severity, which can help doctors diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so its accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On one hand, in order to extract the multi-scale redness of erythema, we design hierarchical features. Different from traditional methods, we not only utilize color statistical features, but also divide the detection window into small sub-windows and extract hierarchical features from them. Further, a feature re-ranking step is introduced, which guarantees that the extracted features are not redundant with each other. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity, owing to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has now been applied for erythema medical efficacy evaluation in Union Hosp, China.
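
    The boosting-based feature-selection step can be sketched as follows, with synthetic stand-ins for the hierarchical color features:

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      rng = np.random.default_rng(8)
      n, n_feat = 300, 20
      X = rng.normal(size=(n, n_feat))      # stand-ins for hierarchical color statistics
      # synthetic severity label driven by two of the features
      severity = (X[:, 3] + 0.8*X[:, 7] + rng.normal(0, 0.5, n) > 0).astype(int)

      clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, severity)
      top = np.argsort(clf.feature_importances_)[::-1][:5]
      print("top features by boosting importance:", top)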

  16. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.
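
    A toy sketch of the robustness idea in reserve selection: greedy site choice maximizing worst-case expected species coverage under an assumed uncertainty horizon on the occurrence probabilities; the data and the uncertainty model are illustrative.

      import numpy as np

      rng = np.random.default_rng(9)
      p = rng.uniform(0, 1, size=(30, 12))   # predicted occurrence, sites x species
      alpha = 0.2                            # info-gap style uncertainty horizon

      def worst_case_coverage(sites):
          """Expected species covered when every probability is reduced by alpha."""
          q = np.clip(p[list(sites)] - alpha, 0.0, 1.0)
          return (1.0 - np.prod(1.0 - q, axis=0)).sum()

      chosen = set()
      for _ in range(5):                     # budget of five reserves (stepwise heuristic)
          best = max(set(range(30)) - chosen,
                     key=lambda s: worst_case_coverage(chosen | {s}))
          chosen.add(best)

      print("selected sites:", sorted(chosen))
      print("worst-case coverage: %.2f species" % worst_case_coverage(chosen))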

  17. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  18. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth mass can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can also be determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  19. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  20. The adaptive accuracy of flowers: measurement and microevolutionary patterns

    PubMed Central

    Armbruster, W. Scott; Hansen, Thomas F.; Pélabon, Christophe; Pérez-Barrales, Rocío; Maad, Johanne

    2009-01-01

    for higher precision and accuracy in flowers with higher levels of integration and dichogamy (temporal separation of sexual functions), and in those that have pollinators that are immobile (or immobilized) during pollen transfer. Large deviations from putative adaptive optima were observed, and these may be related to the effects of conflicting selective pressures on flowers, such as selection against self-pollination promoting herkogamy (spatial separation of pollen and stigmas). Conclusions: Adaptive accuracy is a useful concept for understanding the adaptive significance of phenotypic means and variances of floral morphology within and among populations and species. Estimating and comparing the various components of adaptive accuracy can be particularly helpful for identifying the causes of inaccuracy, such as conflicting selective pressures, low environmental canalization and developmental instability. PMID:19429671